About the Team
The Safety Systems team is responsible for the safety work needed to ensure our most capable models can be deployed safely in the real world and benefit society. The team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Safety Research team aims to fundamentally advance our capabilities for precisely implementing robust, safe behavior in AI models and systems. As capabilities continue to advance, it is imperative that our approaches to safety continue to improve and scale to address evolving risks. This is important both for ensuring our systems are robust against harmful misuse and for ensuring that potential misalignment cannot cause harm. We are working on these problems in a way that is grounded in our current models and methods but that generalizes to future systems.
We are growing our team to expand our research on methods that will improve safety for AGI and beyond. This will include exploratory research, for example: new methods to improve safety common sense and generalizable reasoning, new evaluations to elicit or detect misalignment or inner goals of the AI, and new methods to support human oversight of long-running tasks.
About the Role
As a tech lead, you will be responsible for developing our strategy in new directions to address potential harms from misalignment or significant mistakes. In practice, this will include:
Setting north star goals and milestones for new research directions, and developing challenging evaluations to track progress.
Personally driving or leading research in new exploratory directions to demonstrate feasibility and scalability of the approaches.
Working horizontally across safety research and related teams to ensure different technical approaches work together to achieve strong safety results.
We’re looking for people with a strong track record of practical research on safety and alignment, ideally with AI and LLMs, who have led large research efforts in the past.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Set the research directions and strategies to make our AI systems safer, more aligned, and more robust.
Coordinate and collaborate with cross-functional teams, including the rest of the research organization, Trust & Safety (T&S), policy, and related alignment teams, to ensure that our AI meets the highest safety standards.
Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.
Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more.
Implement new methods in OpenAI’s core model training and launch safety improvements in OpenAI’s products.
You might thrive in this role if you:
Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter.
Demonstrate a passion for AI safety and for making cutting-edge AI models safer for real-world use.
Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, adversarial training, robustness, and fairness and bias.
Hold a Ph.D. or other degree in computer science, machine learning, or a related field.
Possess experience in safety work for AI model deployment.
Have an in-depth understanding of deep learning research and/or strong engineering skills.
Are a team player who enjoys collaborative work environments.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.