About the role:
As a Research Scientist/Engineer focused on honesty within the Finetuning Alignment team, you'll spearhead the development of techniques to minimize hallucinations and enhance truthfulness in language models. You'll build robust systems that are accurate, reflect their true levels of confidence, and avoid being deceptive or misleading. This work is critical for ensuring our models maintain high standards of accuracy and honesty across diverse domains.
Responsibilities:
- Design and implement novel data curation pipelines to identify, verify, and filter training data for accuracy given the model’s knowledge
- Develop specialized classifiers to detect potential hallucinations or miscalibrated claims made by the model
- Create and maintain comprehensive honesty benchmarks and evaluation frameworks (a simplified scoring sketch follows this list)
- Implement search and retrieval-augmented generation (RAG) systems to ground model outputs in verified information
- Design and deploy human feedback collection workflows specifically for identifying and correcting miscalibrated responses
- Design and implement prompting pipelines to generate data that improves model accuracy and honesty
- Develop and test novel RL environments that reward truthful outputs and penalize fabricated claims
- Create tools to help human evaluators efficiently assess model outputs for accuracy
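As a rough illustration of the evaluation work described above, the sketch below scores a small answer set for accuracy, abstention rate, and confidently-wrong ("hallucinated") responses. Everything here is an illustrative assumption rather than a description of our internal tooling: the `EvalItem` fields, the exact-match scoring, and the 0.8 confidence threshold are placeholders you would replace with real graders and calibrated thresholds.

```python
from dataclasses import dataclass


@dataclass
class EvalItem:
    question: str
    gold_answer: str
    model_answer: str   # the model's response, or "" if it abstained
    confidence: float   # model-reported confidence in [0, 1]


def score_honesty(items: list[EvalItem], confident: float = 0.8) -> dict[str, float]:
    """Summarize accuracy on answered items, abstention rate, and confidently-wrong answers."""
    n = len(items)
    answered = [it for it in items if it.model_answer.strip()]
    correct = sum(
        1 for it in answered
        if it.model_answer.strip().lower() == it.gold_answer.strip().lower()
    )
    confidently_wrong = sum(
        1 for it in answered
        if it.confidence >= confident
        and it.model_answer.strip().lower() != it.gold_answer.strip().lower()
    )
    return {
        "accuracy_on_answered": correct / max(len(answered), 1),
        "abstention_rate": (n - len(answered)) / n,
        "confidently_wrong_rate": confidently_wrong / n,
    }


if __name__ == "__main__":
    demo = [
        EvalItem("Capital of France?", "Paris", "Paris", 0.95),
        EvalItem("Year the transistor was invented?", "1947", "1952", 0.90),
        EvalItem("Author of 'Middlemarch'?", "George Eliot", "", 0.20),
    ]
    print(score_honesty(demo))
```

In practice the exact-match comparison would be replaced by a learned or model-based grader, but the three summary statistics capture the trade-off this role cares about: answering correctly, abstaining when unsure, and never asserting wrong answers with high confidence.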
You may be a good fit if you:
- Have an MS/PhD in Computer Science, ML, or a related field
- Possess strong programming skills in Python
- Have industry experience with language model finetuning and classifier training
- Show proficiency in experimental design and statistical analysis for measuring improvements in calibration and accuracy
- Care about AI safety and the accuracy and honesty of both current and future AI systems
- Have experience in data science or the creation and curation of datasets for finetuning LLMs
- Have an understanding of various metrics of uncertainty, calibration, and truthfulness in model outputs (e.g., expected calibration error, sketched after this list)
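As a concrete example of one such calibration metric, here is a minimal sketch of expected calibration error (ECE) with equal-width confidence bins. The bin count, the equal-width binning, and the simulated over-confident model in the demo are illustrative choices, not a prescribed methodology.

```python
import numpy as np


def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """ECE: average |mean confidence - accuracy| per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each prediction to one of n_bins equal-width confidence bins.
    bin_ids = np.clip(np.digitize(confidences, edges[1:-1], right=True), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight the gap by the fraction of samples in the bin
    return float(ece)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    conf = rng.uniform(0.5, 1.0, size=5000)
    # Simulate an over-confident model: actual accuracy trails stated confidence by ~0.15.
    is_correct = rng.uniform(size=5000) < (conf - 0.15)
    print(f"ECE: {expected_calibration_error(conf, is_correct):.3f}")  # roughly 0.15
```

Equal-mass (quantile) bins are a common alternative when confidences cluster near 1.0, and reliability diagrams built from the same per-bin statistics make miscalibration easy to inspect visually.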
Strong candidates may also have:
- Published work on hallucination prevention, factual grounding, or knowledge integration in language models
- Experience with retrieval-augmented generation (RAG) or similar fact-grounding techniques
- Background in developing confidence estimation or calibration methods for ML models
- A track record of creating and maintaining factual knowledge bases
- Familiarity with RLHF specifically applied to improving model truthfulness
- Experience with crowd-sourcing platforms and human feedback collection systems
- Experience developing evaluations of model accuracy or hallucinations
Join us in our mission to ensure advanced AI systems behave reliably and ethically while staying aligned with human values.