Data Platform at OpenAI owns the foundational data stack powering critical product, research, and analytics workflows. We operate some of the largest Spark compute fleets in production; design and build data lakes and metadata systems on Iceberg and Delta with a vision toward exabyte-scale architecture; run high-throughput streaming platforms on Kafka and Flink; provide orchestration with Airflow; and support ML feature engineering tooling such as Chronon. Our mission is to deliver reliable, secure, and efficient data access at scale and to accelerate intelligent, AI-assisted data workflows.
Join us to build and operate these core platforms that underpin OpenAI products, research, and analytics.
We’re not just scaling infrastructure – we’re redefining how people interact with data. Our vision includes intelligent interfaces and AI-assisted workflows that make working with data faster, more reliable, and more intuitive.
About the Role
This role focuses on building and operating data infrastructure that supports massive compute fleets and storage systems, designed for high performance and scalability. You’ll help design, build, and operate the next generation of data infrastructure at OpenAI. You will scale and harden big data compute and storage platforms, build and support high-throughput streaming systems, build and operate low-latency data ingestion, enable secure and governed data access for ML and analytics, and design for reliability and performance at extreme scale.
You will take full lifecycle ownership: architecture, implementation, production operations, and on-call participation.
You’ve supported Spark, Kafka, Flink, Airflow, Trino, or Iceberg as platforms. You’re well-versed in infrastructure tooling like Terraform, experienced in debugging large-scale distributed systems, and excited about solving data infrastructure problems in the AI space.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Accelerate company productivity by empowering your fellow engineers and teammates with excellent data tooling and systems
Collaborate with product, research, and analytics teams to build the foundational capabilities that unlock new features and experiences
Own the reliability of the systems you build, including participation in an on-call rotation for critical incidents
You might thrive in this role if you:
Have 4+ years in data infrastructure engineering, or 4+ years in infrastructure engineering with a strong interest in data
Take pride in building and operating scalable, reliable, secure systems
Are comfortable with ambiguity and rapid change
Have an intrinsic desire to learn and fill in missing skills, and an equally strong talent for sharing learnings clearly and concisely with others