Google DeepMind · Mountain View/San Francisco

Research Engineer, Media Integrity

1/29/2026

Description

We are seeking a talented and highly motivated Research Engineer to join our team at the forefront of multimodal information quality and media integrity. You will work alongside a multi-faceted, world-class team of researchers and engineers to conduct cutting-edge research and advance the next generation of multimodal AI systems that support information seeking, deliberation, and user understanding. You will focus on developing and implementing algorithms for assessing information quality and reliability, and for detecting anomalous patterns of activity.

Key Responsibilities

  • Drive projects by defining key research questions
  • Design, implement, and evaluate experiments to provide clear answers
  • Collaborate with other researchers on joint projects
  • Contribute to real-world impact by landing your research in Google products and services
  • Publish research findings in top academic conferences and journals
  • Stay up-to-date with the latest advancements in the field

Qualifications

You are a talented researcher with a strong theoretical foundation, a proven ability to conduct impactful research, and a passion for delivering real-world impact via responsible AI. To set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:

  • PhD or Master's degree in Computer Science, Statistics, or a related field.
  • Strong publication record in top machine learning conferences or journals (e.g., NeurIPS, ICML, ICLR, CVPR, ECCV) or demonstrated experience in relevant engineering roles.
  • Expertise in one or more of the following areas: social impact of AI, reinforcement learning, multimodal agents, computer vision, user ranking/assessment algorithms, or anomaly/outlier detection techniques.
  • Experience developing and implementing scalable machine learning pipelines.
  • Passion for promoting high information quality.
  • Track record of publications and relevant experience in user understanding or reinforcement learning.
  • Experience with training, evaluating, or interpreting large language models, and with integrating tool use.
  • Proven ability to design and execute independent research projects.
  • Experience with model evaluation, including fairness and bias considerations.
  • Expertise in developing models that assess user interactions and behavior patterns.
  • Experience with anomaly detection in user activity.

Benefits

The US base salary range for this full-time position is between $215,000 and $250,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Application

View listing at origin and apply!