About the Role
As a Full-Stack Software Engineer in RL, you'll build the platforms, tools, and interfaces that power environment creation, data collection, and training observability. The quality of Claude's next generation depends on the quality of the data we train it on — and the systems you build are what make that data possible.
You'll own product surfaces end-to-end — from backend services and APIs to the web UIs that researchers, external vendors, and thousands of data labelers use every day. You don't need a background in ML research. What matters is that you can take an ambiguous, high-stakes problem and ship a polished, reliable product that solves it, fast.
This team moves very quickly. Claude writes a lot of the code we commit, which means the bottleneck isn't typing — it's judgment, taste, and the ability to react to what researchers need next. You'll iterate on data collection strategies to distill the knowledge of thousands of human experts around the world into our models, and you'll do it in a loop that closes in hours and days, not quarters or months.
Anthropic's Reinforcement Learning organization leads the research and development that trains Claude to be capable, reliable, and safe. We've contributed to every Claude model, with significant impact on the autonomy and coding capabilities of our most advanced models. Our work spans teaching models to use computers effectively, advancing code generation through RL, pioneering fundamental RL research for large language models, and building the scalable training methodologies behind our frontier production models.
The RL org is organized around four goals: solving the science of long-horizon tasks and continual learning, scaling RL data and environments to be comprehensive and diverse, automating software engineering end-to-end, and training the frontier production model. Our engineering teams build the environments, evaluation systems, data pipelines, and tooling that make all of this possible — from realistic agentic training environments and scalable code data generation to human data collection platforms and production training operations.
What You'll Do
- Build and extend web platforms for RL environment creation, management, and quality review — including environment configuration, versioning, and validation workflows
- Develop vendor-facing interfaces and tooling that let external partners create, submit, and iterate on training environments with minimal friction
- Design and implement platforms for human data collection at scale, including labeling workflows, quality assurance systems, and feedback mechanisms that surface reward signal integrity issues early
- Build evaluation dashboards and observability UIs that give researchers real-time insight into environment quality, training run health, and signs of reward hacking
- Create backend services and APIs that connect environment authoring tools, data collection systems, and RL training infrastructure
- Build and expand scalable code data generation pipelines, producing diverse programming tasks with robust reward signals across languages and difficulty levels
- Develop onboarding automation and documentation tooling so new vendors and internal users ramp up in hours, not weeks
- Partner closely with RL researchers, data operations, and vendor management to translate ambiguous requirements into well-scoped, well-designed products
You May Be a Good Fit If You
- Have strong software engineering fundamentals and real full-stack range — you're comfortable owning a surface from database schema to frontend
- Are proficient in Python and a modern web stack (React, TypeScript, or similar)
- Have a track record of shipping systems that solved a hard problem, not just shipped on time; e.g., you built the thing that made your team 10x faster, or the internal tool nobody thought was possible
- Operate with high agency: you identify what needs to be done and drive it forward without waiting for a ticket
- Have found yourself wondering "why isn't this moving faster?" in previous roles, and then done something about it
- Care about UX and can build interfaces that are intuitive for both technical researchers and non-technical labelers
- Communicate clearly with researchers, operations teams, and engineers, and can turn vague asks into well-scoped work
- Thrive in a fast-moving environment where priorities shift, Claude is your pair programmer, and the next problem is often one nobody has solved before
- Care about Anthropic's mission to build safe, beneficial AI and want your work to contribute directly to it
Strong Candidates May Also Have
- Built data collection, labeling, or annotation platforms — ideally ones that had to scale across many vendors or many task types
- A background building multi-tenant platforms with role-based access control, audit trails, and vendor management workflows
- Experience with cloud infrastructure (GCP or AWS), Docker, and CI/CD pipelines
- Familiarity with LLM training, fine-tuning, or evaluation workflows
- Experience with async Python (Trio, asyncio) or high-throughput API design
- A background in dashboards, monitoring, or observability tooling
- Experience working directly with external vendors or partners on technical integrations
- A background that isn't a straight line — e.g. math or physics into SWE, competitive programming, research into engineering, or a side project that outgrew its scope
Representative Projects
- Building a unified platform for human data collection that integrates labeling workflows, vendor management, and QA for complex agentic tasks
- Developing vendor onboarding automation that handles Docker registry access, API token management, and environment validation
- Creating evaluation and observability dashboards that catch reward hacks, measure environment difficulty, and give real-time feedback during production training
- Building environment quality review workflows that let researchers browse, grade, and provide feedback on training environments
- Developing automated environment quality pipelines that validate correctness and difficulty calibration before environments hit production training
- Building internal tools for browsing and analyzing training run results, environment statistics, and data collection progress