Google DeepMind · New York City

Research Scientist, Frontier AI Safety and Responsibility

October 18, 2025

Description

This Research Scientist position is a unique opportunity to lead foundational research on frontier AI safety problems. You will be instrumental in developing the core scientific understanding and techniques that allow us to build and deploy increasingly autonomous and capable AI systems responsibly.

Key responsibilities

  • Conceive, design, and lead research projects to develop novel algorithms for the safety, alignment, and interpretability of advanced AI agents.
  • Drive the full research lifecycle, from theoretical ideation and mathematical formulation to implementation and large-scale empirical validation.
  • Design and conduct rigorous evaluations and red-teaming exercises to discover and mitigate potential risks in next-generation models.
  • Write software to implement research ideas and iterate quickly.
  • Collaborate with world-class research and engineering teams to embed research directly into the AI development lifecycle.
  • Stay at the forefront of the field by engaging with the latest theoretical research in machine learning and AI safety.

Qualifications

  • A PhD in Computer Science, Machine Learning, or a related technical field, or equivalent practical experience.
  • Experience in designing and implementing machine learning algorithms in Python.
  • Hands-on experience with at least one major deep learning framework such as JAX, PyTorch, or TensorFlow.
  • A strong mathematical background and the ability to engage with theoretical research in machine learning.
  • Working experience with modern AI architectures (e.g., Transformers, diffusion models) and a comprehensive knowledge of machine learning fundamentals.
  • A strong interest in ensuring AI development benefits humanity.
  • A strong track record of research, including publications in top-tier peer-reviewed conferences or journals (e.g., NeurIPS, ICML, ICLR).
  • Expertise in one or more of these areas: AI safety, model interpretability (explainable AI), or AI alignment.
  • Experience leading research projects and collaborating within teams to meet ambitious research goals.
  • Experience with large-scale training and evaluation of deep learning models.

Benefits

The US base salary range for this full-time position is between $166,000 and $244,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Application

View the original listing and apply!