About the Role
Scale’s mission is to develop reliable AI systems for the world's most important decisions.
Within the Enterprise BU, we build production-grade GenAI applications for the world’s largest companies. For these organizations, the stakes are high: if an application isn’t useful, accurate, and safe, it cannot go into production.
As a Strategic Projects Lead (SPL) for Enterprise Evaluations, you will oversee the evaluations that determine if an application is ready for the real world. You will define "what good looks like" for complex GenAI apps, curate the data needed to measure performance, and serve as one of the final gatekeepers for production readiness.
This is a high-impact role for a technically curious operator who is equally comfortable debating a complex evaluation rubric with an engineer and communicating strategy to Fortune 500 customers. You must be obsessed with the gold standard for AI performance, from the high-level approach to the granular details of data quality.
You Will:
- Partner with enterprise stakeholders and Scale project teams (Applied AI, MLE, Product) to translate business goals into concrete evaluation strategies.
- Co-design the evaluation frameworks, rubrics, and "golden datasets," determining the right mix of human expertise and automated evaluation.
- Determine the "what, how, and why" of human-in-the-loop data, designing rubrics and scoring frameworks that let human experts provide the high-signal feedback automated tools often miss.
- Own operational scoping and execution, converting complex evaluation needs into an executable plan, covering everything from staffing and contributor capacity planning to cost/pricing assumptions and final delivery.
- Orchestrate the end-to-end evaluation "engine": while you won't personally label data, you will own the execution of data-labeling operations, including ingest, pipelines, and version control.
- Proactively identify and resolve operational challenges and technical blockers, unblocking yourself and the team by anticipating risks before they impact delivery.
- Analyze evaluation results to provide the final, data-driven recommendation on whether an application is ready for production.
- Run open-source LLM benchmarks and present insights and recommendations on model performance to engineering teams.
- Act as a "cross-pollinator" for the Enterprise BU, identifying successful frameworks and turning them into repeatable SOPs that can be applied to other areas.
Ideally, You’d Have:
- Strong technical background (a degree in computer science and Python knowledge are ideal). At a minimum, the role requires the ability to do data analytics in SQL or Python. You should be comfortable leveraging tools to automate tasks, generate synthetic data, or analyze evaluation results.
- 5+ years of professional experience in a high-stakes operational role at a fast-growing tech company, management consulting, or investment banking.
- Strong problem-solving capabilities (experience working on operational challenges or as a consultant is a plus).
- Systemic Thinking: You prefer building "the machine" over running manual workarounds. You are excited to build the operational infrastructure required to scale a new function and create methodologies that can be used and evolved over time.
- Research-Adjacent Interest: You enjoy keeping up with the fast-evolving GenAI landscape. You want to understand where human judgment is the gold standard and how to capture it, using that knowledge to improve our evaluation strategies.
- Full-Stack Ownership: You have a proven track record of taking projects from 0 to 1. You have an entrepreneurial mindset and are excited about building things from scratch. You can handle high-level strategy but aren't afraid to get deep into the data to ensure accuracy and reliability.