TPU Kernel Engineer
San Francisco, CA | New York City, NY | Seattle, WA
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
As a TPU Kernel Engineer, you'll be responsible for identifying and addressing performance issues across many different ML systems, spanning research, training, and inference. A significant portion of this work will involve designing and optimizing kernels for the TPU. You will also provide feedback to researchers about how model changes impact performance. Strong candidates will have a track record of solving large-scale systems problems and of low-level performance optimization.
You may be a good fit if you:
- Have significant experience optimizing ML systems for TPUs, GPUs, or other accelerators
- Are results-oriented, with a bias towards flexibility and impact
- Pick up slack, even if it goes outside your job description
- Enjoy pair programming (we love to pair!)
- Want to learn more about machine learning research
- Care about the societal impacts of your work
Strong candidates may also have experience with:
- High-performance, large-scale ML systems
- Designing and implementing kernels for TPUs or other ML accelerators
- Understanding accelerators at a deep level, e.g. through a background in computer architecture
- ML framework internals
- Language modeling with transformers
Representative projects:
- Implement low-latency, high-throughput sampling for large language models
- Adapt existing models for low-precision inference
- Build quantitative models of system performance
- Design and implement custom collective communication algorithms
- Debug kernel performance at the assembly level