Hume AI is seeking a talented software engineer with experience in backend web services and ML infrastructure to advance our core mission: using the world’s most advanced technology for emotion understanding to build empathy and goal-alignment into AI. Join us in the heart of New York City, or wherever you are located, and contribute to our endeavor to ensure that AI is guided by human values, the most pivotal challenge (and opportunity) of the 21st century.
About Us
Hume AI is dedicated to building artificial intelligence that is directly optimized for human well-being. We have just raised a Series B funding round and are launching a new flagship product, an empathic AI assistant that can be built into any application.
Where other AI companies see only words, our API can see (and hear) the other half of human communication: subtle tones of voice, word emphasis, facial expression, and more, along with the reactions of listeners. These behaviors reveal our preferences—whether we find things interesting or boring; satisfying or frustrating; funny, eloquent, or dubious. We call learning from these signals “reinforcement learning from human expression” (RLHE). AI models trained with RLHE can serve as better question answerers, copywriters, tutors, call center agents, and more, even in text-only interfaces.
Our goal is to enable a future in which technology draws on an understanding of human emotional expression to better serve human goals. As part of our mission, we also conduct groundbreaking scientific research, publish in leading scientific journals like Nature, and support a non-profit, The Hume Initiative, that has released the first concrete ethical guidelines for empathic AI (www.thehumeinitiative.org). You can learn more about us on our website (https://hume.ai/) and read about us in Forbes, Axios, and The Washington Post.
About the Role
We are looking for a motivated engineer with experience in backend web services and ML infrastructure to help Hume AI empower developers around the world. In this role you will help us integrate cutting-edge AI models into services and toolkits for researchers and developers. You will work closely with research scientists and frontend engineers to build new capabilities into the Hume platform, and you will have the opportunity to take part in a wide range of engineering initiatives across the ML lifecycle, including model training, evaluation, and deployment at scale.
Requirements
- Expertise in the Python ecosystem and popular ML libraries and tools (e.g. PyTorch, JAX, TensorFlow, XGBoost, sklearn, pandas, numpy)
- Understanding of core ML concepts, including model architecture, training, and evaluation
- Expertise working with storage and compute on a cloud platform (e.g. Google Cloud, AWS)
- Deep understanding of modern deployment strategies built on cloud technologies (e.g. blue/green and canary deployments)
- Experience using service deployment tooling (e.g. Kubernetes, Docker, Argo)
- Excellent communication and collaboration skills
Bonus
- Experience writing backend services in multiple languages (e.g. Kotlin, Go, Rust, Java, C++)
- Knowledge of one or more MLOps platforms for experiment tracking or model training (e.g. SageMaker, Vertex AI, Weights & Biases)
- Familiarity with distributed training and compute tools like Dask, Ray, and Spark
- Experience working at the intersection of machine learning research and engineering
Application Note
Please apply only to the position that best aligns with your qualifications. If you submit multiple applications or have applied within the past 6 months, only your initial submission will be considered.