About the role:
Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry's largest compute-agnostic inference deployments. We own the entire stack, from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.
The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex distributed-systems challenges across multiple accelerator families and emerging AI hardware running on multiple cloud platforms.
As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Experience with LLM inference optimization, batching strategies, and multi-accelerator deployments is a plus but not strictly required.
Strong candidates may also have experience with:
- High-performance, large-scale distributed systems
- Implementing and deploying machine learning systems at scale
- Load balancing, request routing, or traffic management systems
- LLM inference optimization, batching, and caching strategies
- Kubernetes and cloud infrastructure (AWS, GCP)
- Python or Rust
You may be a good fit if you:
- Have significant software engineering experience, particularly with distributed systems
- Are results-oriented, with a bias towards flexibility and impact
- Pick up slack, even if it goes outside your job description
- Want to learn more about machine learning systems and infrastructure
- Thrive in environments where technical excellence directly drives both business results and research breakthroughs
- Care about the societal impacts of your work
Representative projects across the org:
- Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators
- Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads
- Building production-grade deployment pipelines for releasing new models to millions of users
- Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage
- Contributing to new inference features (e.g., structured sampling, prompt caching)
- Supporting inference for new model architectures
- Analyzing observability data to tune performance based on real-world production workloads
- Managing multi-region deployments and geographic routing for global customers
Deadline to apply: None. Applications will be reviewed on a rolling basis.