Tech Stack
- Kubernetes
- Pulumi
- Rust and Go
- Flux / ArgoCD
Location
The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to live in or near the Bay Area or be open to relocation.
Focus
- Operating some of the world’s largest GPU supercomputing clusters for both AI training and serving production models.
- Implementing IaC best practices, enhancing deployment pipelines, and ensuring robust, secure service delivery across our production environments.
- Working with both on-premise clusters and cloud providers.
- Helping establish security best practices for internal researchers and live external traffic.
Ideal Experience
- Writing scalable and highly available containerized applications in Rust.
- Managing compute fleets with Pulumi, Terraform, Ansible, or other stateful automation tools.
Interview Process
After you submit your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to an initial interview (45–60 minutes) in which a member of our team asks some basic questions. If you clear this initial phone interview, you will enter the main process, which consists of four technical interviews and a meet and greet:
- Coding assessment in a language of your choice.
- Systems design: Translate high-level requirements into a scalable, fault-tolerant service.
- Systems hands-on: Demonstrate practical skills in a live problem-solving session.
- Project deep-dive: Present your past exceptional work to a small audience.
- Meet and greet with the wider team.
Our goal is to finish the main process within one week. All interviews will be conducted via Google Meet.
Annual Salary Range
$180,000 - $370,000 USD