About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Mamba, FlexGen, Petals, Mixture of Agents, and RedPajama.
Role Overview
The Inference Research team is dedicated to building the next generation of efficient, scalable, and reliable serving systems for large foundation models, directly contributing to the mission of advancing open and transparent AI. Our work operates at the critical intersection of cutting-edge model architectures, high-performance systems engineering, and deep hardware optimization. We focus on co-designing software, algorithms, and models to significantly lower the cost and latency of modern AI systems.
As a research intern, you will dive into the complexities of distributed inference, compiler-aware optimization, and novel inference-time computation strategies (such as speculative decoding and phase-aware execution). You will be tasked with co-designing and implementing cross-layer optimizations across models, systems, and hardware, with a focus on areas like KV cache design and large-scale serving architectures.
Projects aim to unlock unprecedented performance and scale for foundation models, enabling faster serving, larger model deployment (e.g., Mixture-of-Experts), and robust, reproducible evaluation under realistic serving workloads.
Responsibilities
- Design and conduct rigorous experiments to validate hypotheses
- Communicate the plans, progress, and results of projects to the broader team
- Document findings in scientific publications and blog posts
Requirements
- Currently in the final year of a Bachelor's, Master's, or Ph.D. program in Computer Science, Electrical Engineering, or a related field
- Strong knowledge of Machine Learning and Deep Learning fundamentals
- Experience with deep learning frameworks (PyTorch, JAX, etc.)
- Strong programming skills in Python
- Familiarity with Transformer architectures and recent developments in foundation models
Preferred Qualifications
- Prior research experience in foundation models, efficient machine learning, or ML systems
- Publications at leading machine learning or systems conferences (e.g., MLSys, ICLR)
- Experience with CUDA programming (for kernel development)
- Understanding of model optimization techniques and hardware acceleration approaches
- Contributions to open-source machine learning projects
Internship Details
- Duration: 12 weeks (Summer 2026), either May 18 to August 7 or June 15 to September 4
- Location: San Francisco
During the program you'll have the opportunity to work with industry-leading engineers building a cloud from the ground up, and possibly to contribute to influential open source projects.
Compensation
We offer competitive compensation, housing stipends, and other benefits. The estimated US hourly rate for this role is $58-$63/hr. Our hourly rates are determined by location, level, and role; individual compensation reflects experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, or any other protected characteristic.
Please see our privacy policy at https://www.together.ai/privacy