Who are we?
Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, retrieval-augmented generation (RAG), and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each of us is responsible for advancing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.
Cohere is a team of researchers, engineers, designers, and more, all passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
About Cohere and North
Cohere is revolutionizing enterprise AI with North, an agentic AI platform designed to securely deploy AI agents and automations within organizations' infrastructure. North empowers employees to streamline workflows, automate repetitive tasks, and unlock actionable insights while ensuring data privacy and compliance. North combines cutting-edge generative and search models with customizable integrations to drive productivity and innovation at scale.
Role Overview
We are seeking a Safety Research PM to bridge Cohere's AI safety research and the North product. This role sits at the intersection of model research and product delivery — you'll work directly with Cohere's modeling and safety research teams to understand how our models behave, where they fall short, and how those insights translate into concrete safety features and guardrails within North.
This isn't a traditional PM role. You'll spend as much time reading evaluations and engaging with researchers as you will writing PRDs. The right person is intellectually curious, comfortable with ambiguity, and has the technical depth to engage seriously with model behavior research while also having the product instincts to know what to do with it.
Responsibilities
Serve as the product bridge between Cohere's safety research teams and North, ensuring that findings from model evaluations, red-teaming, and behavioral research translate into product-level guardrails, controls, and safeguards.
Own the safety product roadmap for Cohere and North, prioritizing features based on research findings, observed misuse patterns, evolving threat vectors, and customer requirements.
Partner with modeling teams to scope and interpret safety evaluations — understanding how Cohere’s underlying models behave across adversarial inputs, edge cases, and high-stakes use cases.
Define and drive evaluation frameworks for assessing how safety properties hold up as models and product capabilities evolve, ensuring regressions surface before they reach customers.
Coordinate the development of guardrails and intervention mechanisms — working across research, engineering, and policy to determine where and how safety controls should be implemented within North's product layer.
Monitor the AI safety research landscape — from prompt injection and jailbreaks to emerging misuse patterns in agentic systems — and ensure North's roadmap reflects what the research is surfacing.
Build processes for scaling safety review as North's surface area grows, including how new features get assessed for safety risk before launch.
Requirements
5+ years of product management or research operations experience, with meaningful time working alongside research or ML teams at a technology or AI company.
Technical depth sufficient to engage credibly with safety researchers: you don't need to run evals yourself, but you need to understand what they mean and ask the right questions.
Genuine interest in AI safety and model behavior, including the real-world implications of deploying LLMs in enterprise contexts.
Comfortable operating in ambiguity — safety research surfaces unexpected findings, and this role requires good judgment about what to act on and how fast.
Able to work across researchers, engineers, and product teams and keep everyone aligned without flattening the nuance of what the research is actually saying.
Strong written communicator who can translate complex model behavior findings for non-technical audiences and knows when something needs to be escalated urgently.
Nice-to-Haves
Hands-on experience with LLM evaluation, red-teaming, safety benchmarking, or behavioral research.
Familiarity with AI-specific threat vectors: prompt injection, jailbreaks, RAG poisoning, or misuse patterns in agentic systems.
Background in trust and safety, content policy, or a research-adjacent operational role at a technology company.
Experience building zero-to-one processes in research or safety contexts.
Prior exposure to agentic AI systems and the unique safety challenges introduced by tool use, multi-step reasoning, and autonomous execution.
Why Join Cohere?
Impact: Shape how one of the most widely deployed enterprise AI platforms thinks about and implements safety at the product level.
Innovation: Work directly alongside safety researchers and modeling teams at the frontier of AI behavior research.
Growth: Competitive compensation, equity options, and opportunities for professional development.
Flexibility: Hybrid work model with offices in San Francisco, New York, and Toronto.
Location
Remote or hybrid (New York, Toronto, London, Paris, Zurich).
If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply!
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.
Full-time employees at Cohere enjoy these perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)