OpenAI · San Francisco · Hybrid

Data Scientist, Integrity

8/29/2025

Description

The Applied team safely brings OpenAI's technology to the world. We released ChatGPT, Plugins, DALL·E, and the APIs for GPT-4, GPT-3, embeddings, and fine-tuning. We also operate inference infrastructure at scale. There's a lot more on the immediate horizon.

Our customers build fast-growing businesses around our APIs, which power product features that were never before possible. ChatGPT is a prime example of what is currently possible. We simultaneously ensure that our powerful tools are used responsibly. Safe deployment is more important to us than unfettered growth.

The Scaled Abuse team, part of our Applied Engineering organization, identifies and responds to fraudsters on our platform. We are looking for a data scientist with anti-fraud and abuse experience to help architect and build our next-generation anti-abuse systems.

About the Role

The Platform Abuse team protects OpenAI’s products from abuse. In the data scientist role, you will be responsible for discovering and mitigating new types of misuse, and scaling our detection techniques and processes. Platform Abuse is an especially exciting area since we believe most of the ways our technologies will be abused haven’t even been invented yet.

This role is based in our San Francisco HQ. We offer relocation assistance to new employees.

In this role, you will:

  • Design and build systems for fraud detection and remediation while balancing fraud loss, cost of implementation, and customer experience

  • Work closely with finance, security, product, research, and trust & safety operations to holistically combat fraudulent and abusive actors on our platform

  • Stay abreast of the latest techniques and tools to keep several steps ahead of determined, well-resourced adversaries

  • Utilize GPT-5 and future models to more effectively combat fraud and abuse

Qualifications

  • Experience on a highly technical trust and safety team and/or close work with policy, content moderation, or security teams

  • Ability to use coding languages (Python preferred) to programmatically explore large datasets and generate actionable insights

  • Proven ability to propose, design, and run rigorous experiments (A/B tests, quasi-experiments, simulations) that yield clear insights and actionable product recommendations, leveraging SQL and Python

  • Excellent communication skills with a track record of influencing cross-functional partners, including product managers, engineers, policy leads, and executives

  • 5+ years of quantitative experience in ambiguous environments, ideally as a data scientist at a hyper-growth company or research organization, with exposure to fraud, abuse, or security problems

  • Bonus: experience deploying scaled detection solutions using large language models, embeddings, or fine-tuning
