About the Role
We are seeking a highly skilled Member of Technical Staff to join our team in managing and enhancing reliability across a multi-data center environment. This role focuses on automating processes, building and implementing robust observability solutions, and ensuring seamless operations for mission-critical AI infrastructure. The ideal candidate will combine strong coding abilities with hands-on data center experience to build scalable reliability services, optimize system performance, and minimize downtime, including close partnership with facility operations to address physical infrastructure impacts. If you thrive in fast-paced, distributed environments and are passionate about leveraging automation to drive efficiency, this is an opportunity to make a significant impact on our infrastructure's resilience and scalability.
In an era where AI workloads demand near-zero downtime, this position plays a pivotal role in bridging software engineering principles with physical data center realities. By prioritizing automation and observability, team members in this role can substantially reduce mean time to recovery (MTTR), by as much as 50% according to industry benchmarks from hyperscale cloud environments, through proactive monitoring and automated remediation.
The primary objective of this team is to mitigate downtime and minimize impact to end-users from both scheduled and unscheduled maintenance, as well as events affecting onsite data centers. This is achieved through proactive automation, robust observability, and integrated software-physical reliability strategies, ensuring our AI infrastructure remains resilient, scalable, and at the cutting edge of innovation.
Responsibilities
- Design, develop, and deploy scalable code and services (primarily in Python and Rust, with flexibility for emerging languages) to automate reliability workflows, including monitoring, alerting, incident response, and infrastructure provisioning. **Key Insight:** Drawing on software engineering (SWE) best practices, this involves full lifecycle automation, from planning and development through ongoing maintenance, to ensure scripts and services evolve with infrastructure changes, preventing drift and reducing manual interventions that could introduce errors. We value adaptability to new tools and paradigms in the fast-evolving AI space.
- Implement and maintain observability tools and practices, such as metrics collection, logging, tracing, and dashboards, to provide real-time insights into system health across multiple data centers; we are open to innovative stacks beyond traditional ones like ELK. **Key Insight:** Effective observability not only detects issues early but also correlates software metrics with physical factors (e.g., power draw or temperature spikes), enabling predictive analytics that can forecast potential failures and mitigate impacts before they reach end-users (see the sketch after this list). Emphasis on exploring cutting-edge alternatives to stay at the forefront of technology.
- Collaborate with cross-functional teams, including software development, network engineering, site operations, and facility operations (critical facilities, mechanical/electrical teams, and data center infrastructure management), to identify reliability bottlenecks and to automate solutions for fault tolerance, disaster recovery, capacity planning, and physical/environmental risk mitigation (e.g., power redundancy, cooling efficiency, and environmental monitoring integration). **Key Insight:** Cross-team partnerships are essential for holistic reliability; for instance, integrating facility data into reliability tooling can automate responses to environmental events, such as rerouting workloads during cooling system maintenance, thereby minimizing the ripple effects of scheduled downtime on AI training pipelines. This role encourages broad skill sets from diverse technical backgrounds to foster innovation.
- Troubleshoot and resolve complex issues in data center environments, including hardware failures, environmental anomalies, software bugs, and network-related problems, while adhering to reliability principles such as error budgets and SLAs (see the error-budget example following this list). **Key Insight:** Applying SWE rigor to troubleshooting lets team members create reusable diagnostic tools that accelerate resolution, turning unscheduled events (e.g., hardware faults) into opportunities for system hardening and reducing overall end-user impact through targeted SLAs that prioritize critical AI services. We seek versatile problem-solvers who adapt to bleeding-edge challenges.
- Optimize Linux-based systems for performance, security, and reliability, including kernel tuning, container orchestration (e.g., Kubernetes or emerging alternatives), and scripting for automation. **Key Insight:** Optimization efforts grounded in SWE maintenance practices can yield significant gains, such as 20-30% improvements in resource efficiency, while keeping systems secure and resilient against evolving threats in distributed AI environments. Flexibility in tool choices is key to handling rapid tech advancements.
- Understand network topologies and concepts in large-scale, multi-data center environments to effectively troubleshoot connectivity, routing, redundancy, and performance issues; integrate observability into data center interconnects and facility-level controls for rapid diagnosis and automation. **Key Insight:** In multi-site setups, network insight enables automated failover mechanisms that handle both digital and physical disruptions, ensuring seamless continuity for end-users during events like fiber cuts or power outages. This attracts candidates from varied networking and systems backgrounds to drive forward-thinking solutions.
- Participate in on-call rotations, post-incident reviews (blameless postmortems), and continuous improvement initiatives to enhance overall site reliability, including joint exercises with facility teams for physical failover and recovery scenarios. **Key Insight:** Blameless postmortems foster a learning culture in which insights from incidents (scheduled or unscheduled) inform automation enhancements, ultimately driving down recurrence rates and protecting the end-user experience in high-stakes AI operations. We prioritize growth-minded individuals who embrace evolving practices.
- Mentor junior team members and document processes to foster a culture of automation, knowledge sharing, and adaptability to new technologies. **Key Insight:** Knowledge transfer amplifies team impact; by documenting SWE-driven automation patterns and encouraging exploration of bleeding-edge tools, mentors can scale reliability practices across the organization, ensuring long-term mitigation of downtime risks while broadening appeal to diverse talent pools.
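To make the observability-to-automation loop described above concrete, here is a minimal, hypothetical Python sketch: it polls a Prometheus-style HTTP API for a facility telemetry metric and cordons any Kubernetes node whose rack exceeds a temperature threshold so workloads drain away before a cooling event causes failures. The endpoint URL, the metric name `dc_inlet_temp_celsius`, the `node` label join, and the threshold are illustrative assumptions, not references to our actual stack.

```python
"""Illustrative sketch only: metric names, labels, URLs, and thresholds are
hypothetical placeholders, not production configuration."""
import subprocess

import requests

PROMETHEUS_URL = "http://prometheus.internal:9090"  # assumed metrics endpoint
TEMP_METRIC = "dc_inlet_temp_celsius"                # hypothetical facility metric
TEMP_LIMIT_C = 32.0                                  # placeholder threshold


def query_prometheus(expr: str) -> list[dict]:
    """Run an instant query against the Prometheus HTTP API and return the result set."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": expr}, timeout=10
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]


def cordon_node(node: str) -> None:
    """Mark a node unschedulable so new workloads land elsewhere."""
    subprocess.run(["kubectl", "cordon", node], check=True)


def remediate_hot_racks() -> None:
    # Find samples above the limit; each is assumed to carry a `node` label
    # mapping the facility sensor to a Kubernetes node.
    for sample in query_prometheus(f"{TEMP_METRIC} > {TEMP_LIMIT_C}"):
        node = sample["metric"].get("node")
        if node:
            print(f"Inlet temp {sample['value'][1]}C on {node}; cordoning.")
            cordon_node(node)


if __name__ == "__main__":
    remediate_hot_racks()
```

In practice, a service like this would also record its actions for postmortems and respect rate limits and error budgets before draining nodes; the sketch only shows the shape of the control loop.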
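As a worked illustration of the error-budget arithmetic referenced above (the SLO target and incident figures are arbitrary examples, not actual service commitments): a 99.9% monthly availability SLO leaves about 43 minutes of allowable downtime in a 30-day window, and the helper below computes how much of that budget remains after observed incidents.

```python
"""Worked example of error-budget math; the SLO and downtime figures are illustrative."""


def remaining_error_budget_minutes(
    slo_target: float, window_minutes: float, downtime_minutes: float
) -> float:
    """Return how many minutes of downtime the SLO still allows in this window."""
    budget = (1.0 - slo_target) * window_minutes  # total allowed downtime
    return budget - downtime_minutes


if __name__ == "__main__":
    # 99.9% availability over a 30-day window allows (1 - 0.999) * 43200 = 43.2 minutes.
    window = 30 * 24 * 60
    print(round(remaining_error_budget_minutes(0.999, window, downtime_minutes=12.0), 1))
    # -> 31.2 minutes of budget remain; a negative value would mean the SLO is blown
    # and rollouts should pause in favor of reliability work.
```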
Required Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or a closely related technical field (or equivalent professional experience).
- 5+ years of hands-on experience in site reliability engineering (SRE), infrastructure engineering, DevOps, or systems engineering, preferably supporting large-scale, distributed, or production environments.
- Strong programming skills with proven production experience in Python (required for automation and tooling); experience with Rust or willingness to work in Rust is a plus, but strong coding fundamentals in at least one systems-level language (e.g., Python, Go, C++) are essential.
- Solid experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.
- Practical knowledge of containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).
- Experience implementing observability solutions, including metrics, logging, tracing, monitoring tools (e.g., Prometheus, Grafana, or alternatives), alerting, and dashboards.
- Familiarity with troubleshooting complex issues in distributed systems, including software bugs, hardware failures, network problems, and environmental factors.
- Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.
- Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.
- Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).
Preferred Qualifications
- 7+ years of experience in SRE or infrastructure roles, ideally in hyperscale, cloud, or AI/ML training infrastructure environments with multi-data center setups.
- Hands-on experience operating or scaling Kubernetes clusters (or equivalent orchestration) at large scale, including automation for provisioning, lifecycle management, and high availability.
- Proficiency in Rust for systems programming and performance-critical components.
- Direct experience integrating software reliability tools with physical data center infrastructure (e.g., power, cooling, environmental monitoring, facility controls) and automating responses to physical events.
- Exposure to advanced or innovative observability stacks beyond traditional tools (e.g., exploring cutting-edge alternatives for metrics, logs, and tracing).
- Experience building automated remediation, fault tolerance, disaster recovery, capacity planning, or predictive failure detection systems.
- Background in optimizing Linux-based systems for AI workloads, GPU clusters, or high-throughput compute environments.
- Demonstrated success reducing downtime, MTTR, or improving resource efficiency (e.g., through automation or observability) in high-stakes production settings.
- Prior work with bare-metal provisioning, data center interconnects, or hybrid/multi-site failover mechanisms.
- Mentoring experience, strong documentation skills, and a track record of fostering knowledge sharing and automation culture.
- Comfort with rapid technology adaptation in fast-evolving domains like AI infrastructure.