As the leading data and evaluation partner for frontier AI companies, Scale plays an integral role in understanding the capabilities of, and safeguarding, large language models (LLMs). The Safety, Evaluations and Alignment Lab (SEAL) is Scale’s frontier research effort dedicated to tackling challenging research problems in the evaluation, red teaming, and alignment of advanced AI systems.
We are actively seeking talented researchers to join us in shaping the landscape of safety and transparency for the entire AI industry. We support collaboration across industry and academia, as well as the publication of our research findings.
As a Research Scientist working on Agent Robustness, you will tackle the fundamental challenges of building AI agents that are safe and aligned with humans. For example, you might:
- Research the science of AI agent capabilities and methodologies for benchmarking them;
- Design and build harnesses to test AI agents’ tendency to take harmful actions when pressured to do so by users or tricked into doing so by elements of their environment;
- Design and build exploits and mitigations for new and unique failure modes that arise as AI agents gain affordances like coding, web browsing, and computer use;
- Characterize and design mitigations for potential failure modes or broader risks of systems involving multiple interacting AI agents.
Ideally you’d have:
- Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance.
- Practical experience conducting technical research collaboratively, with proficiency in frameworks like PyTorch, JAX, or TensorFlow. You should also be adept at interpreting research literature and quickly turning new ideas into prototypes.
- A track record of published research in machine learning, particularly in generative AI.
- At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development.
- Strong written and verbal communication skills to operate in a cross-functional team.
Nice to have:
- Hands-on experience with open-source LLM fine-tuning or involvement in bespoke LLM fine-tuning projects using PyTorch/JAX.
- Experience in crafting evaluations or a background in data science roles related to LLM technologies.
- Experience working with a cloud technology stack (e.g., AWS or GCP) and developing machine learning models in a cloud environment.
Our research interviews are designed to assess candidates’ skills in practical ML prototyping and debugging, their grasp of research concepts, and their alignment with our organizational culture. We will not ask any LeetCode-style questions.