Google DeepMind · London

Research Scientist, World Models

11/4/2025

Description

Implement core infrastructure and conduct research to build generative models of the physical world. Solve essential problems to train world simulators at massive scale, develop metrics and scaling laws for physical intelligence, curate and annotate training data, enable real-time interactive generation, and explore new possibilities for impact with the next generation of models. Embrace the bitter lesson and seek simple methods that survive the test of scale, with an emphasis on strong systems and infrastructure.

Areas of focus:

  • Infrastructure for large-scale video data pipelines and annotation.
  • Inference optimization and distillation for real-time generation.
  • Scaling law science for video pretraining.
  • Next-generation forms of interactivity.
  • Methods for long-term memory in world models.
  • Model research that unlocks additional scaling and improved capabilities.

Qualifications

  • Experience with large-scale transformer models and/or large-scale data pipelines.
  • PhD in computer science or machine learning, or equivalent industry experience.
  • Track record of releases, publications, and/or open source projects relating to video generation, world models, multimodal language models, or transformer architectures.
  • Strong systems and engineering skills in deep learning frameworks like JAX or PyTorch.
  • Experience building training codebases for large-scale video or multimodal transformers.
  • Expertise in optimizing the efficiency of distributed training and/or inference systems.
  • Experience with distillation of diffusion models.

Application

View the listing at its original source and apply!