Google DeepMind · Mountain View

Research Scientist, Gemini Safety

March 13, 2026

Description

We’re looking for a versatile Research Scientist who is equally at ease framing new research questions and handling the technical implementation of research ideas. Our team focuses on advancing the safety and fairness behavior of state-of-the-art AI models. We drive the development of foundational technology adopted by numerous product areas, including the Gemini App, the Cloud API, and Search.

Key responsibilities:

  • Post-train / instruction-tune state-of-the-art LLMs, focusing on text-to-text and image/video/audio-to-text modalities and on agentic capabilities
  • Explore data, reasoning, and algorithmic solutions to ensure Gemini models are safe, maximally helpful, and work for everyone
  • Improve Gemini’s adversarial robustness, with a focus on high-stakes abuse risks
  • Design and maintain high-quality evaluation protocols to assess model behavior gaps and headroom related to safety and fairness
  • Develop and execute experimental plans to address known gaps, or construct entirely new capabilities
  • Drive innovation and deepen understanding of Supervised Fine-Tuning and Reinforcement Learning fine-tuning at scale

Qualifications

  • PhD in Computer Science, a related field, or equivalent practical experience
  • Significant LLM post-training experience
  • Experience in reward modeling and reinforcement learning for LLM instruction tuning
  • Experience with long-range reinforcement learning
  • Experience in areas such as safety, fairness, and alignment
  • Track record of publications at NeurIPS, ICLR, ICML, RL/DL, EMNLP, AAAI, or UAI
  • Experience taking research from concept to product
  • Experience collaborating on or leading an applied research project
  • Experience with JAX

Application

View the original listing and apply.
