
The Applied team works across research, engineering, product, and design to bring OpenAI's technology to the world. We seek to learn from deployment and broadly distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. We aim to make our innovative tools globally accessible, transcending geographic, economic, and platform barriers. Our commitment is to facilitate the use of AI to enhance lives, informed by rigorous insights into how people use our products.
About the Role
A systems research internship is for people who love the real-world intersection of systems engineering and research: you'll investigate a hard systems problem, build something meaningful, and measure it carefully. The goal is practical impact—making Applied systems more efficient, more scalable, and more reliable.
OpenAI is currently recruiting for candidates interested in a 13-week, paid, in-person internship based in our San Francisco office during Summer 2026. In some cases, it may be extended for an additional 13 weeks (for a total of up to 26 weeks), based on team needs, candidate interest, and performance.
In this role, you will typically focus on improving real systems in areas like:
Distributed systems & storage (throughput, latency, consistency, durability)
Compute & scheduling (GPU/accelerator utilization, job orchestration, queuing)
Performance engineering (profiling, bottlenecks, scalability, capacity planning)
Reliability & observability (fault tolerance, monitoring, incident learning)
Networking & data pipelines (data movement, caching, streaming efficiency)
Systems for ML (training/inference performance, evaluation infrastructure, tooling)
Most projects involve some of these steps:
Defining a clear hypothesis (“we think X will reduce latency by Y under Z”)
Instrumenting existing production systems, gathering metrics, and performing detailed analysis to validate the hypothesis
Building or modifying a real system (prototype or production-quality improvements when appropriate)
Running experiments/benchmarks and analyzing results
Communicating tradeoffs and recommendations clearly
Publishing the research in technical journals and at conferences
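To make the hypothesis-driven workflow above concrete, here is a minimal sketch of how one might test a claim like "we think caching (X) will reduce lookup latency (Y) under a repeated-key workload (Z)". All names and numbers here are illustrative assumptions, not OpenAI infrastructure:

```python
# Hypothetical micro-benchmark: compare p50/p99 latency of an uncached
# lookup against a cached one under a repeated-key workload.
import random
import statistics
import time
from functools import lru_cache


def slow_lookup(key: int) -> int:
    # Stand-in for an expensive backend call (e.g. a remote fetch).
    time.sleep(0.0001)
    return key * 2


@lru_cache(maxsize=None)
def cached_lookup(key: int) -> int:
    # Candidate change: memoize results for repeated keys.
    return slow_lookup(key)


def measure(fn, keys):
    """Return per-call latencies in milliseconds."""
    latencies = []
    for k in keys:
        start = time.perf_counter()
        fn(k)
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies


def summarize(latencies):
    # statistics.quantiles with n=100 yields the 1st..99th percentiles.
    qs = statistics.quantiles(latencies, n=100)
    return {"p50": qs[49], "p99": qs[98]}


if __name__ == "__main__":
    random.seed(0)
    # Repeated-key workload: 2000 requests drawn from only 50 keys.
    keys = [random.randrange(50) for _ in range(2000)]
    baseline = summarize(measure(slow_lookup, keys))
    candidate = summarize(measure(cached_lookup, keys))
    print(f"baseline p50={baseline['p50']:.3f}ms p99={baseline['p99']:.3f}ms")
    print(f"cached   p50={candidate['p50']:.3f}ms p99={candidate['p99']:.3f}ms")
```

In a real project the instrumentation would target production metrics rather than a toy function, but the shape is the same: state the expected effect up front, measure percentiles (not just means) before and after, and report the tradeoffs.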
Qualifications
Currently pursuing a PhD in Computer Science, Computer Engineering, or a related technical field
Proficiency coding in C++, Java, Python, Rust, etc.
Ongoing research on systems topics such as DL/ML, information retrieval, systems security and cryptography, databases, networking, distributed systems, or compilers
Ability to move fast in an environment where things are sometimes loosely defined and may have competing priorities or deadlines