About the Role:
Building a great AI assistant is only half the battle – knowing whether it's actually great is the other half. Our team owns the measurement and quality layer that makes Glean's Assistant and Agents reliably better over time: evaluation pipelines, quality evalsets, LLM-powered judges, agent observability, and the tooling engineers use to understand what changed and why. It's a rare combination of infrastructure engineering, applied ML, and direct product impact. If you care deeply about quality and want to build the systems that make it measurable, this role is for you.
You will:
- Design and curate evaluation datasets – sampling strategies, query diversity, and golden sets that give reliable, representative coverage of real assistant behavior.
- Build and maintain large-scale evaluation pipelines that measure assistant quality across thousands of real user queries.
- Build LLM-powered judges that score metrics like correctness, completeness, and response quality, and align them against human judgment.
- Evaluate new models and product changes before they ship – providing the quality signal that gates launches and prevents regressions.
- Build observability infrastructure for AI agents: trace enrichment, data pipelines, and dashboards that make assistant behavior inspectable.
- Close the loop between quality measurement and improvement using eval results, customer feedback, and techniques like automated prompt iteration to help drive concrete gains in assistant behavior.
- Collaborate with engineers across the company to make evals a first-class part of how we ship.
About you:
- 2+ years of software engineering experience with strong coding skills.
- Strong backend fundamentals in Go and Python; comfortable with distributed data pipelines.
- Experience working with LLM evaluation, reinforcement learning from human feedback, natural language processing, or other large systems involving machine learning.
- Analytically rigorous – you think carefully about what offline metrics actually predict about real user experience.
- Thrive in a customer-focused, tight-knit, cross-functional environment – you're a team player, willing to take on whatever is most impactful for the company.
- You care about quality – not just in the systems you build, but in the product you're helping measure and improve.
Location:
- This role is hybrid (3-4 days a week in one of our SF Bay Area offices).
Compensation & Benefits:
The standard base salary range for this position is $200,000 - $300,000 annually. Compensation offered will be determined by factors such as location, level, job-related knowledge, skills, and experience. Certain roles may be eligible for variable compensation, equity, and benefits.
We offer a comprehensive benefits package including competitive compensation, Medical, Vision, and Dental coverage, a generous time-off policy, and the opportunity to contribute to a 401(k) plan to support your long-term goals. When you join, you'll receive a home office improvement stipend, as well as annual education and wellness stipends to support your growth and wellbeing. We foster a vibrant company culture through regular events, and provide healthy lunches daily to keep you fueled and focused.
We are a diverse bunch of people, and we want to continue to attract and retain a diverse range of talent. We're committed to building an inclusive and diverse company. We do not discriminate based on gender, ethnicity, sexual orientation, religion, civil or family status, age, disability, or race.
#LI-HYBRID