We are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
We are looking for people who combine strong technical ML skills, boundless curiosity, and a constant desire to make better products. You’ll be part of Luma’s applied research team and work directly on mission-critical work streams that use thousands of GPUs.
Responsibilities
- Implement cutting-edge generative AI features for inclusion in our product
- Work with Research and Product on model fine-tuning, from dataset creation through training and evaluation
- Build tools & methods for evaluating our models, and identify & implement solutions to the issues they surface
- Train and run ML classifiers & embedding models to categorize data by quality and other attributes
Experience
- Strong generalist Python skills including significant experience with PyTorch.
- Recent experience in, and understanding of, visual AI, including diffusion models and transformers (note: can be from professional work, academic research experience, or hobby work with open source models like Stable Diffusion)
- Passion for experimenting with multi-modal generative AI, including familiarity with recent research and ideas for turning that research into usable products.
- Nice to have: experience with cluster orchestration tools like SLURM
- Nice to have: experience with high-performance, large-scale ML systems (>100 GPUs)
- Please note that this role is not intended for recent graduates.
The pay range for this position in California is $175,000 to $250,000 per year; however, base pay offered may vary depending on job-related knowledge, skills, candidate location, and experience. We also offer competitive equity packages in the form of stock options and a comprehensive benefits plan.
Your application is reviewed by real people.