Description
We are looking for biological scientists to help build safety and oversight mechanisms for our AI systems. As a Safeguards Biological Safety Research Scientist, you will apply your technical skills to design and develop safety systems that detect harmful behaviors and prevent misuse by sophisticated threat actors. You will be at the forefront of defining what responsible AI safety looks like in the biological domain, working across research, policy, and engineering to translate complex biosecurity concepts into concrete technical safeguards. This is a unique opportunity to shape how frontier AI models handle dual-use biological knowledge, balancing the tremendous potential of AI to accelerate legitimate life sciences research against the need to prevent its misuse.
In this role, you will:
- Design and execute capability evaluations ("evals") of new models
- Collaborate closely with internal and external threat modeling experts to develop training data for our safety systems, and with ML engineers to train those systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate researchers
- Analyze safety system performance on production traffic, identifying gaps and proposing improvements
- Develop rigorous stress tests of our safeguards against evolving threats and new product surfaces
- Partner with Research, Product, and Policy teams to ensure biological safety is embedded throughout the model development lifecycle
- Contribute to external communications, including model cards, blog posts, and policy documents related to biological safety
- Monitor emerging technologies for new risks and new mitigation opportunities, and develop strategies to address both