Scale's LLM post-training platform team builds our internal distributed framework for large language model training. The platform enables MLEs, researchers, data scientists, and operators to train and evaluate LLMs quickly and automatically. It also serves as the underlying training framework for the data quality evaluation pipeline.
Scale is uniquely positioned at the heart of the field of AI as an indispensable provider of training and evaluation data and end-to-end solutions for the ML lifecycle. You will work closely with Scale's ML teams and researchers to build the foundational platform that supports all of our ML research and development work. You will build and optimize the platform to enable our next generation of LLM training, inference, and data curation.
If you are excited about shaping the future of AI through fundamental innovations, we would love to hear from you!
You will:
- Build, profile, and optimize our training and inference framework.
- Collaborate with ML and research teams to accelerate their research and development, enabling them to build the next generation of models and data curation pipelines.
- Research and integrate state-of-the-art technologies to optimize our ML system.
Ideally you’d have:
- A passion for system optimization
- Experience with multi-node LLM training and inference
- Experience with developing large-scale distributed ML systems
- Experience with post-training methods such as RLHF/RLVR and related algorithms such as PPO and GRPO
- Strong software engineering skills and proficiency in frameworks and tools such as CUDA, PyTorch, Transformers, and FlashAttention
- Strong written and verbal communication skills to operate in a cross-functional team environment
Nice to have:
- Demonstrated expertise in post-training methods and/or next-generation use cases for large language models, including instruction tuning, RLHF, tool use, reasoning, agents, and multimodality