Tech Stack
- Python
- JAX and XLA
- Rust / C++
- Spark
Location
The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to live in or near the Bay Area or be open to relocation.
Focus
- Training trillion-parameter neural networks at scale, as well as a variety of smaller specialized models.
- Rapidly implementing the latest state-of-the-art methods from the deep learning literature.
- Innovating new ideas for pretraining and new scaling paradigms.
- Improving pretraining data quality at scale across different modalities.
Ideal Experience (at least one of the following)
- Strong engineering skills and a passion for improving every aspect of data and models.
- Expertise in ML and large-model scaling, including familiarity with scaling laws.
- Familiarity with distributed, multi-GPU neural network training and experience optimizing ML training efficiency.
- Familiarity with state-of-the-art techniques for preparing AI training data.
- Meticulous organization and bookkeeping of data spanning multiple clouds, modalities, and sources.
Interview Process
After you submit your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask a few basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:
- Coding assessment in a language of your choice.
- Systems hands-on: Demonstrate practical skills in a live problem-solving session.
- Project deep-dive: Present your past exceptional work to a small audience.
- Meet and greet with the wider team.
Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet.
Annual Salary Range
$180,000 - $440,000 USD