About Horizons
The Horizons team leads Anthropic's reinforcement learning research and development, playing a critical role in advancing our AI systems. We've contributed to all Claude models, with significant impacts on the autonomy and coding capabilities of Claude 3.5 and 3.7 Sonnet. Our work spans several key areas:
- Developing systems that enable models to use computers effectively
- Advancing code generation through reinforcement learning
- Pioneering fundamental RL research for large language models
- Building scalable RL infrastructure and training methodologies
- Enhancing model reasoning capabilities
We collaborate closely with Anthropic's alignment and frontier red teams to ensure our systems are both capable and safe. We partner with the applied production training team to bring research innovations into deployed models, and work hand in hand with dedicated RL engineering teams to implement our research at scale. The Horizons team sits at the intersection of cutting-edge research and engineering excellence, with a deep commitment to building high-quality, scalable systems that push the boundaries of what AI can accomplish.
About the Role
As an Infrastructure & Runtime Engineer on the Horizons team, you will build and maintain the foundational systems that enable our AI research. You'll work closely with researchers and engineers to develop robust infrastructure for large language model training, focusing on code execution environments, data pipelines, and performance optimization. Your work will directly support advances in reinforcement learning, agentic AI capabilities, and secure model evaluation systems.
Representative projects:
- Design and implement high-performance data pipelines for processing large-scale code datasets with an emphasis on reliability and reproducibility
- Build and maintain secure sandboxed execution environments using virtualization technologies like gVisor and Firecracker
- Develop infrastructure for reinforcement learning training environments, balancing security requirements with performance needs
- Optimize resource utilization across our distributed computing infrastructure through profiling, benchmarking, and systems-level improvements
- Collaborate with researchers to translate their requirements into scalable, production-grade systems for AI experimentation
You may be a good fit if you:
- Are proficient in Python and async/concurrent programming with frameworks like Trio
- Have experience with container technologies and virtualization systems
- Possess strong systems programming skills and understand performance optimization
- Enjoy solving complex infrastructure challenges at scale
- Have experience with data pipeline development and ETL processes
- Care deeply about code quality, testing, and performance
- Communicate effectively with both technical and research-focused team members
- Are passionate about developing safe and beneficial AI systems
Strong candidates may have:
- Experience with cloud infrastructure and Kubernetes orchestration
- Familiarity with infrastructure-as-code tools (Terraform, Pulumi, etc.)
- Experience contributing to open-source projects in systems or infrastructure
- Knowledge of Rust and/or C++ for performance-critical components
- Experience implementing security controls for code execution
- Comfort engaging with ML research concepts and translating them to engineering requirements
Strong candidates need not have:
- Formal certifications or education credentials
- Prior experience with LLMs, reinforcement learning, or machine learning research
Deadline to apply: None. Applications will be reviewed on a rolling basis.