Overview
This person will lead the Research arm of the Frontier Red Team at Anthropic. The Research team is composed of domain experts in cyber, autonomy, biology, and national security. The Research team's role is to probe the most advanced capabilities and risks of frontier models, and then inform the company, government, labs, and civil society.
The Research team has a particular focus on informing governments' and industry's understanding of current and future national security-relevant capabilities. It also designs evaluations and mitigation strategies for our Responsible Scaling Policy, while the Production team scales, implements, and runs them. Together, the two teams determine the AI Safety Level (ASL) of Anthropic's models and what to do about the capabilities they find.
This team lead will direct the team's research into whether enhancing models' cyber, autonomy, bio, and national security capabilities generates evidence that dramatically alters our understanding of risk. They will manage 5-10 people this year.
Requirements
- Experience managing a top-tier technical team to quickly conduct ambitious technical research.
- Significant experience communicating and working with senior policy principals and the national security community.
- Understanding of evaluations on frontier AI models.
- A bias towards action, speed, and simplicity.
- Located in San Francisco.
Nice to have
- A technical background, such as a PhD with published work in machine learning, or a background building customer-facing applications.
- Strong understanding of and novel thoughts about our mission, the RSP, and coordination on the path to AGI.