Role Overview
As a member of the Product Public Policy team, you'll ensure Anthropic's model development practices inform emerging AI governance frameworks and build trust with policy stakeholders. You own the policy dimensions of every major model release, translate our Responsible Scaling Policy into frameworks for transparency legislation, work closely with our Labs team on policy-informed research/product development, and lead initiatives that demonstrate our commitment to openness and responsible development.
In this role you will:
- Translate RSP components into emerging policy frameworks
- Own policy engagement related to Claude's Constitution, translating our model values and behavioral frameworks into governance standards and demonstrating how principled AI development can inform regulation
- Drive policy strategy and positioning for major model releases, including how to frame developments relative to voluntary commitments and policy pressures, and how to manage external engagement and input
- Partner with research teams to translate technical advances into policy narratives for regulators, AI Safety Institutes, and the technical policy community
- Develop frameworks showing how our technical practices (interpretability, safety evals, development methodology) tie to public policy priorities
- Represent Anthropic in standards bodies and regulatory consultations on model development and governance
You may be a good fit if you:
- Have 8+ years of experience in AI/tech policy, with the technical depth to engage credibly on model development and evaluation
- Have a strong understanding of LLM development, evaluation methodologies, and AI safety
- Have a track record of translating technical practices into policy frameworks and regulatory strategies
- Have proven experience working cross-functionally between technical teams and policy stakeholders
- Excel at stakeholder management, with experience working across policy, technical, academic, and civil society communities
- Have experience with AI governance frameworks (e.g., the EU AI Act, transparency requirements, safety standards)