Anthropic is working on frontier AI research that has the potential to transform how humans and machines interact. As our models grow more powerful, securing them from exfiltration or misuse becomes critically important. In this role, you'll be helping to build and institute controls to lock down our AI training pipelines, apply security architecture patterns built for adversarial environments, and secure our model weights as we scale model capabilities.
Responsibilities
- Design and implement secure-by-default controls as they relate to our software supply chain, AI model training systems, and deployment environments.
- Perform security architecture reviews, threat modeling, and vulnerability assessments to identify and remediate risks.
- Support Anthropic's responsible disclosure and bug bounty programs and participate in the Security Engineering team's on-call rotation.
- Accelerate the development of Anthropic's security engineers through mentorship and coaching, and contribute to company building activities like interviewing.
- Help build greater security awareness across the organization and coach engineers on secure coding practices.
- Lead and contribute to large efforts such as building multi-party authorization for AI-critical infrastructure, reducing sensitive production access, and securing build pipelines.
You may be a good fit if you
- Have 8+ years of software development experience with a security focus.
- Have experience applying security best practices, like the principle of least privilege and defense-in-depth, to complex systems.
- Are proficient in languages such as Rust, Python, and JavaScript/TypeScript.
- Have a track record of launching successful security initiatives and working cross-functionally to enact such changes.
- Are passionate about making AI systems safer, more interpretable, and better aligned with human values.
Strong candidates may also
- Have experience supporting fast-paced startup engineering teams.
- Care about AI safety risk scenarios.
Deadline to apply: None. Applications will be reviewed on a rolling basis.