
The Preparedness team is an important part of the Safety Systems org at OpenAI, and is guided by OpenAI’s Preparedness Framework.
Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.
The mission of the Preparedness team is to:
Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society
Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems
Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work that has far-reaching importance for the company and for society.
About the Role
We’re hiring a Data Scientist to help build, evaluate, and continuously improve mitigations that prevent extreme harms from AI systems. This role is for an experienced, highly autonomous individual contributor who can take ambiguous problem statements, structure rigorous analyses, and translate findings into actionable product and policy changes.
This position goes beyond “running evals.” You’ll help create mitigation intelligence and monitoring systems that enable OpenAI to detect issues early, measure effectiveness over time, and reduce both over-blocking (unnecessary friction) and under-blocking (missed harm).
In this role, you will:
Evaluate and improve mitigation systems, including classifiers and detection pipelines across domains (e.g., biosecurity, cybersecurity, and emerging risk areas).
Diagnose false positives and false negatives with deep error analysis, root cause investigation, and clear recommendations for mitigation adjustments.
Build monitoring and measurement frameworks to track mitigation effectiveness over time and across user segments and use cases.
Identify trends in over-blocking vs. under-blocking, quantify customer impact, and propose prioritized interventions (a small illustrative sketch follows this list).
Develop insights from customer feedback, complaints, and usage patterns to detect shifts in adversarial behavior and system failure modes.
Expand risk monitoring into new areas, including cybersecurity threats and model loss-of-control or sabotage scenarios, in partnership with domain experts.
Communicate results to technical and executive stakeholders with crisp narratives, decision-ready metrics, and clear tradeoffs.
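As a rough illustration of the kind of measurement this work involves, here is a minimal Python sketch; the data frame and its "harmful", "blocked", and "user_segment" columns are hypothetical placeholders, not a description of OpenAI's actual systems.

    # Minimal sketch: quantifying over-blocking vs. under-blocking from a labeled
    # evaluation set. Column names ("harmful", "blocked", "user_segment") are hypothetical.
    import pandas as pd

    def blocking_rates(df: pd.DataFrame) -> pd.Series:
        """'harmful' is the boolean ground-truth label; 'blocked' is the mitigation's boolean decision."""
        benign = df[~df["harmful"]]
        harmful = df[df["harmful"]]
        return pd.Series({
            "over_blocking_rate": benign["blocked"].mean(),        # benign requests incorrectly blocked
            "under_blocking_rate": (~harmful["blocked"]).mean(),   # harmful requests that slipped through
        })

    # A per-segment view helps show where friction or missed harm concentrates:
    # rates_by_segment = df.groupby("user_segment").apply(blocking_rates)

Tracking these two rates over time and across segments gives a direct view of the over-blocking vs. under-blocking tradeoff this role is asked to manage.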
We're looking for:
An autonomous operator: you can take a problem statement and independently structure the analysis end-to-end.
Strong at executive-ready communication: concise, clear, and outcome-oriented.
Skilled at turning analysis into product and policy changes: you're comfortable influencing across functions to drive mitigation improvements.
Significant experience in data science or applied analytics in high-stakes domains (e.g., security, trust & safety, abuse prevention, fraud, platform integrity, or reliability).
Strong foundations in experimentation, causal thinking, and/or observational inference; ability to design robust measurement under imperfect data.
Fluency in SQL and Python (or equivalent) for analysis, modeling, and building monitoring workflows.
Experience building metrics, dashboards, and operational monitoring that meaningfully change outcomes (not just reporting).
Track record of driving cross-functional impact with engineering, product, and research partners.
Cybersecurity data science experience (strong preference), including exposure to threat modeling, adversarial dynamics, abuse patterns, or security telemetry.
Experience with classifier evaluation, calibration, thresholding, and error analysis at scale (an illustrative sketch follows this list).
Familiarity with detection systems in adversarial settings (e.g., evasion, distribution shift, feedback loops).
Trust & Safety experience is helpful, but not required.
Genuine interest in AI safety, alignment, and catastrophic risk prevention.
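For illustration only, a minimal sketch of the kind of thresholding and calibration analysis referenced above, using scikit-learn; the scores and labels are made-up placeholders, not real data.

    # Illustrative sketch of threshold selection and a basic calibration check.
    # y_true / y_score are made-up placeholders.
    import numpy as np
    from sklearn.metrics import precision_recall_curve, brier_score_loss

    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                      # hypothetical ground-truth labels
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7])    # hypothetical classifier scores

    # Sweep thresholds; keep the strictest one that still meets a recall target,
    # i.e., maximize precision (less over-blocking) subject to a bound on under-blocking.
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    target_recall = 0.9
    candidates = [t for r, t in zip(recall[:-1], thresholds) if r >= target_recall]
    chosen = max(candidates) if candidates else None

    # Simple calibration check: how closely scores track observed outcome rates.
    print("chosen threshold:", chosen)
    print("Brier score:", brier_score_loss(y_true, y_score))

In practice this kind of sweep sits alongside deep error analysis of the false positives and false negatives that remain at the chosen threshold.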