About the role
As part of the Anthropic security department, the compliance team owns understanding the security and AI safety expectations established by regulators, customers, and (nascent) industry norms, which we also seek to influence. The compliance team uses this understanding to give internal partners direction on the security and safety requirements they must prioritize and meet. The team assures regulators and customers that those expectations are met by earning security credentials and by responding to direct inquiries about Anthropic's security program from auditors, customers, and partners.
This opportunity is unique: as we work to secure today's most novel and valuable asset types, we must build a new kind of compliance program, one that assures the safety of artificial intelligence capabilities.
Responsibilities:
- Plan and lead engagements with independent assessors to earn certifications and attestations important to Anthropic customers.
- Understand the breadth of Anthropic’s security capabilities and how those capabilities implement common security frameworks, such as NIST SP 800-171, ISO 27001, and SOC 2.
- Drive programs to improve the ease and rigor of Anthropic’s compliance with its security controls and standards.
- Write, update and enact policies capturing security and AI safety requirements.
- Support maintenance of Anthropic’s system of controls through audit, recordkeeping, and communication.
You may be a good fit if you:
- Have three to five years of experience in a role with similar responsibilities
- Are familiar with audit planning and procedures for compliance with ISO certifications, SOC attestations, FedRAMP, CMMC, and similar assessments
- Write clear and useful security documentation
- Thrive in a fast-paced and growing organization
- Are comfortable managing time-bounded, delegated work streams across a diverse organization
Strong candidates may also have experience with:
- AWS / GCP security capabilities, especially identity and access management features
- The development of large language models (LLMs)
- Implementing automated enforcement of security controls
Deadline to apply: None. Applications will be reviewed on a rolling basis.