About Us:
Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare.
The company believes that a safe LLM can dramatically improve healthcare accessibility and
health outcomes worldwide by bringing deep healthcare expertise to every human. No other
technology has the potential to have this level of global impact on health.
Why Join Our Team:
Innovative mission: We are creating a safe, healthcare-focused LLM that can transform health outcomes on a global scale.
Visionary leadership: Hippocratic AI was co-founded by CEO Munjal Shah alongside physicians, hospital administrators, healthcare professionals, and AI researchers from top institutions, including El Camino Health, Johns Hopkins, Washington University in St. Louis, Stanford, Google, Meta, Microsoft, and NVIDIA.
Strategic investors: Raised $137 million from top investors including General Catalyst, Andreessen Horowitz, Premji Invest, SV Angel, NVentures (Nvidia Venture Capital), and Greycroft.
Team and expertise: We are working with top experts in healthcare and artificial intelligence to ensure the safety and efficacy of our technology.
For more information, visit www.HippocraticAI.com.
We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA, unless explicitly noted otherwise in the job description.
Position Overview:
We are seeking a skilled ML Infrastructure Engineer to help design, build, and maintain a robust orchestration platform for managing a diverse set of Large Language Models (LLMs). The ideal candidate will have hands-on experience with infrastructure orchestration tools such as Kubernetes and Terraform, as well as a strong understanding of multi-cloud environments. This role offers the opportunity to work on cutting-edge technologies and play a key part in scaling our AI infrastructure.
Key Responsibilities:
Infrastructure Development & Maintenance:
• Build and maintain infrastructure for deploying and managing LLMs at scale.
• Implement automated processes using Kubernetes and Infrastructure as Code (IaC) tools like Terraform.
Orchestration Platform Support:
• Contribute to the development and optimization of an orchestration platform for managing a heterogeneous set of LLMs.
• Monitor and troubleshoot issues in the platform to ensure high availability and performance.
Cloud Integration:
• Deploy and manage resources across multiple cloud platforms (e.g., AWS, Azure, Google Cloud).
• Optimize cloud resource usage for cost efficiency and scalability.
Collaboration:
• Work closely with ML engineers and DevOps teams to ensure smooth deployment and operation of AI models.
• Provide feedback on system designs and recommend improvements to infrastructure workflows.
Performance Monitoring:
• Implement tools and processes to monitor system health, identify bottlenecks, and improve model lifecycle management.
• Perform capacity planning to support growing infrastructure needs.
Qualifications:
Technical Skills:
• 3-5 years of experience in infrastructure engineering, DevOps, or a related field.
• Proficiency with Kubernetes, Terraform, and other IaC tools.
• Familiarity with multi-cloud environments and cloud-native services (e.g., AWS Lambda, Google Cloud Run, Azure Functions).
• Programming skills in Python, Bash, or a similar language for automation and scripting.
• Basic understanding of ML workflows and frameworks like TensorFlow, PyTorch, or Hugging Face is a plus.
Soft Skills:
• Strong problem-solving skills and attention to detail.
• Good communication and collaboration abilities to work effectively with cross-functional teams.
• Eagerness to learn new technologies and improve existing systems.
Education & Experience:
• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience).