OpenAI · San Francisco · Hybrid

Researcher (Engineer/Scientist), Training Architecture

May 14, 2025

Description

OpenAI's Training team is responsible for producing the large language models that power our research and our products, and that ultimately bring us closer to AGI. Achieving this goal requires deep research into improving our current architectures, datasets, and optimization techniques, combined with long-term bets aimed at improving the efficiency and capability of future generations of models. We are responsible for integrating these techniques, producing the model artifacts used by the rest of the company, and ensuring that these models are world-class in every respect. Recent examples of artifacts with major contributions from our team include GPT-4 Turbo, GPT-4o, and o1-mini.

About the Role

As a member of the architecture team, you will push the frontier of architecture development for OpenAI's flagship models, enhancing intelligence, efficiency, and adding new capabilities.

Ideal candidates have a deep understanding of LLM architectures, a sophisticated understanding of model inference, and a hands-on empirical approach. A good fit for this role will be equally happy pursuing a creative breakthrough, strengthening a baseline, designing an eval, debugging a thorny regression, or tracking down a bottleneck.

This role is based in San Francisco. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Design, prototype and scale up new architectures to improve model intelligence

  • Execute and analyze experiments autonomously and collaboratively

  • Study, debug, and optimize both model performance and computational performance

  • Contribute to training and inference infrastructure

Qualifications

  • Experience landing contributions to major LLM training runs

  • Ability to thoroughly evaluate and improve deep learning architectures in a self-directed fashion

  • Motivation to safely deploy LLMs in the real world

  • Deep familiarity with state-of-the-art transformer modifications for efficiency

Application

View the listing at its original source and apply!