About the team
The Tool Use Team within Research is responsible for making Claude the world's most capable, safe, reliable, and efficient model for tool use and agentic applications. The team focuses on the foundational layer: solving core problems such as tool use safety (e.g. prompt injection robustness), tool call accuracy, long-horizon and complex tool use workflows, large-scale and dynamic tools, and tool use efficiency. These problems are foundational to the majority of Anthropic's customers, as well as to internal teams building specific agentic applications such as Claude for Chrome, Computer Use, Claude Code, and Search.
About the role
We're looking for Research Engineers/Scientists to help us advance the frontier of safe tool use. With tool use adoption accelerating rapidly across our platform, the next generation requires further breakthrough research to enable us to scale responsibly: for example, training Claude to be extremely robust against sophisticated prompt injection, preventing data exfiltration attempts through tool misuse, defending against adversarial attacks in realistic multi-turn agent conversations, and ensuring safety when agents operate autonomously over longer horizons with access to a large number of tools.
You'll collaborate with a diverse group of researchers and engineers to advance safe tool use in Claude. You'll own the full research lifecycle, from identifying fundamental limitations to implementing solutions that ship in production models. This work is critical for de-risking our models' increasing capabilities and empowering Claude to assist users more autonomously.
Note: For this role, we conduct all interviews in Python.
Responsibilities:
- Design and implement novel and scalable reinforcement learning methodologies that push the state of the art of tool use safety
- Define and pursue research agendas that push the boundaries of what's possible
- Build rigorous, realistic evaluations that capture the complexity of real-world tool use safety challenges
- Ship research advances that directly impact and protect millions of users
- Collaborate with other safety research teams (e.g. Safeguards, Alignment Science), capabilities research, and product teams to drive fundamental breakthroughs in safety, and work across teams to ship them into production
- Design, implement, and debug code across our research and production ML stacks
- Contribute to our collaborative research culture through pair programming, technical discussions, and team problem-solving
You may be a good fit if you:
- Are passionate about our safety mission
- Are driven by real-world impact and excited to see research ship in production
- Have strong machine learning research/applied-research experience, or a strong quantitative background such as physics, mathematics, or quantitative finance research
- Write clean, reliable code and have solid software engineering skills
- Communicate complex ideas clearly to diverse audiences
- Are hungry to learn and grow, regardless of years of experience
Strong candidates may also have one or more of the following:
- Experience with tool use/agentic safety, trust & safety, or security
- Experience with reinforcement learning techniques and environments
- Experience with language model training, fine-tuning, or evaluation
- Experience building AI agents or autonomous systems
- Published influential work in relevant ML areas, especially around LLM safety & alignment
- Deep expertise in a specialized area (e.g., RL, security, or mathematical foundations), even if still developing breadth in adjacent areas
- Experience shipping features or working closely with product teams
- Enthusiasm for pair programming and collaborative research