OpenAI · San Francisco/New York City · Hybrid

Data Scientist, Integrity Measurement

2/18/2026

Description

The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary, but also secure from a myriad of adversarial threats. We strive to maintain the integrity of our platforms as they scale. Our team is on the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability.

The Integrity pillar within Applied Foundations is responsible for the scaled systems that help identify and respond to bad actors and harm on OpenAI’s platforms. As the systems that address some of our most severe usage harms mature, we’re adding data scientists to help us robustly measure the prevalence of these problems and the quality of our response to them.

About the Role

We are looking for experienced trust and safety data scientists to help us improve, productionise, and monitor measurement for complex, actor- and sometimes network-level harms. A data scientist in this role will own measurement and metrics across several established harm verticals, including estimating prevalence for on-platform (and sometimes off-platform!) harm and conducting analyses to identify gaps and opportunities in our responses.

This role is based out of our San Francisco or New York office and may involve resolving urgent escalations outside of normal work hours. Many harm areas may involve sensitive content, including sexual, violent, or otherwise-disturbing material.

In this role, you will:

  • Own measurement and quantitative analysis for a group of severe, actor- and network-based usage harm verticals.

  • Develop and implement AI-first methods for prevalence measurement and other productionised safety metrics, which may include off-platform indicators or other non-standard datasets (see the sketch after this list for a simplified illustration of prevalence measurement).

  • Build metrics that can be used for goaling or A/B tests when prevalence or other top-line metrics are not suitable.

  • Own dashboards and metrics reporting for harm verticals.

  • Conduct analyses and generate insights that inform improvements to review, detection, or enforcement, and that influence roadmaps.

  • Optimise LLM prompts for the purpose of measurement.

  • Collaborate with other safety teams to understand key safety concerns and create relevant policies that will support safety needs.

  • Provide metrics for leadership and external reporting.

  • Develop automation to scale yourself, leveraging our agentic products.
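
As a rough illustration of the measurement work described above (not a description of OpenAI's actual tooling or methodology), the sketch below estimates harm prevalence from a simple random sample of items and reports a Wilson score interval. The names `estimate_prevalence`, `label_fn`, and `wilson_interval` are hypothetical; in practice the labelling step might be human review, a classifier, or an LLM judge.

```python
import math
import random

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% by default)."""
    if n == 0:
        return (0.0, 0.0)
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    margin = (z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))) / denom
    return (max(0.0, centre - margin), min(1.0, centre + margin))

def estimate_prevalence(population_ids: list[str], label_fn, sample_size: int = 400):
    """Draw a simple random sample, label it, and report prevalence with a CI.

    `label_fn` stands in for whatever labelling step is used in practice
    (human review, a classifier, or an LLM judge returning True for harm).
    """
    sample = random.sample(population_ids, min(sample_size, len(population_ids)))
    positives = sum(1 for item_id in sample if label_fn(item_id))
    low, high = wilson_interval(positives, len(sample))
    return {"prevalence": positives / len(sample), "ci95": (low, high), "n": len(sample)}
```

Real actor- and network-level measurement would typically layer stratified or importance sampling and label-quality corrections on top of a baseline like this.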

Qualifications

  • Are a senior data scientist with trust and safety experience who can drive measurement direction.

  • Have deep statistics skills, specifically around sampling methods and prevalence estimation of complicated problem areas (ideally activity- rather than content-based).

  • Have experience working with severe and sensitive harm areas like child safety or violence.

  • Are an excellent communicator, and have strong cross-functional collaboration skills.

  • Are proficient in data programming languages (R or Python, plus SQL).

  • (Ideally) have experience with AI harms or leveraging AI for measurement.

Application

View listing at origin and apply!
