Anthropic · San Francisco · Hybrid

Research Engineer, Interpretability

11/8/2025

Description

When you see what modern language models are capable of, do you wonder, "How do these things work? How can we trust them?"

The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe.

Think of us as doing "neuroscience" of neural networks using "microscopes" we build - or reverse-engineering neural networks like binary programs.

Even if you haven't worked on interpretability before, much of the infrastructure expertise we need mirrors what's required across the lifecycle of a production language model:

  • Pretraining: Training dictionary learning models looks a lot like model pretraining - creating stable, performant training jobs for massively parameterized models across thousands of chips
  • Inference: Interp runs a customized inference stack. Day-to-day analysis requires services that can edit a model's internal activations mid-forward-pass - for example, adding a "steering vector" (see the sketch after this list)
  • Performance: Like all LLM work, we push up against the limits of hardware and software. Rather than squeezing out the last 0.1%, we focus on finding bottlenecks, fixing them, and moving on - a necessity given our rapidly evolving research agenda and safety mission
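
As a hedged illustration of the activation-editing pattern mentioned above, here is a minimal PyTorch sketch that adds a fixed vector to one block's residual stream via a forward hook. The model choice (gpt2), layer index, and random steering vector are placeholder assumptions for demonstration, not our actual inference stack.

```python
# Minimal sketch: edit activations mid-forward-pass by adding a "steering
# vector" with a PyTorch forward hook. Model, layer, and vector are
# illustrative placeholders, not Anthropic's production stack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

block = model.transformer.h[6]                  # hypothetical intervention site
steer = torch.randn(model.config.n_embd) * 0.1  # placeholder steering vector

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # returning a new tuple replaces the block's output downstream.
    hidden, *rest = output
    return (hidden + steer.to(hidden.dtype), *rest)

handle = block.register_forward_hook(add_steering)
try:
    ids = tok("The Golden Gate Bridge is", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so later runs are unsteered
```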

The science keeps scaling - and it's now applied directly in safety audits on frontier models, with real deadlines. As our research has matured, engineering and infrastructure have become a bottleneck. Your work will have a direct impact on one of the most important open problems in AI.

Responsibilities

  • Build and maintain the specialized inference and training infrastructure that powers interpretability research - including instrumented forward/backward passes, activation extraction, and steering vector application
  • Resolve scaling and efficiency bottlenecks through profiling, optimization, and close collaboration with peer infrastructure teams (see the profiling sketch after this list)
  • Design tools, abstractions, and platforms that enable researchers to rapidly experiment without hitting engineering barriers
  • Help bring interpretability research into production safety audits - with real deadlines and high reliability expectations
  • Work across the stack - from model internals and accelerator-level optimization to user-facing research tooling
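
Bottleneck-hunting of the kind described above usually starts with a profiler trace. Below is a minimal sketch using torch.profiler on a toy layer; the layer and input shapes are placeholder assumptions, and real jobs would also record CUDA activity across many devices.

```python
# Minimal bottleneck-hunting sketch with torch.profiler. The toy layer and
# input shapes are placeholders; real jobs would also record CUDA activity
# and profile across multiple devices.
import torch
from torch.profiler import profile, ProfilerActivity

layer = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8)
x = torch.randn(64, 32, 512)  # (sequence, batch, d_model)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with torch.no_grad():
        layer(x)

# Rank ops by total time to find the hotspots worth optimizing first.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```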

Qualifications

You may be a good fit if you:

  • Have 5-10+ years of experience building software
  • Are highly proficient in at least one programming language (e.g., Python, Rust, Go, Java) and productive with Python
  • Are extremely curious about unfamiliar domains; can quickly learn and put that knowledge to work, e.g. diving into new layers of the stack to find bottlenecks
  • Have a strong ability to prioritize the most impactful work and are comfortable operating with ambiguity and questioning assumptions
  • Prefer fast-moving collaborative projects to extensive solo efforts
  • Are curious about interpretability research and its role in AI safety (though no research experience is required!)
  • Care about the societal impacts and ethics of your work
  • Are comfortable working closely with researchers, translating research needs into engineering solutions

Strong candidates may also have experience with:

  • Optimizing the performance of large-scale distributed systems
  • Language modeling fundamentals with transformers
  • High Performance LLM optimization: memory management, compute efficiency, parallelism strategies, inference throughput optimization
  • Working hands-on in a mainstream ML stack - PyTorch/CUDA on GPUs or JAX/XLA on TPUs
  • Collaborating closely with researchers and building tooling to support research teams, or directly performing research with complex engineering challenges

Representative projects:

  • Building Garcon, a tool that allows researchers to easily instrument LLMs to extract internal activations (a sketch of this hook-based pattern follows the list)
  • Designing and optimizing a pipeline to efficiently collect petabytes of transformer activations and shuffle them
  • Profiling and optimizing ML training jobs, including multi-GPU parallelism and memory optimization
  • Building a steered inference system that applies targeted interventions to model internals at scale (conceptually similar to Golden Gate Claude but for safety research)
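
To make the Garcon-style project concrete, here is a hedged sketch of hook-based activation extraction: capture residual activations from every block and save a shard to disk. The model, dtype, naming, and file layout are assumptions for illustration; a real pipeline would stream petabytes of shards and globally shuffle them before dictionary-learning training.

```python
# Hedged sketch of hook-based activation extraction (the pattern behind a
# Garcon-like tool). Model, dtype, and file layout are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        # Detach and downcast before moving off-device to bound memory use.
        captured[name] = hidden.detach().to(torch.float16).cpu()
    return hook

handles = [
    block.register_forward_hook(make_hook(f"blocks.{i}"))
    for i, block in enumerate(model.transformer.h)
]

with torch.no_grad():
    ids = tok("Interpretability is neuroscience for networks", return_tensors="pt").input_ids
    model(ids)

for h in handles:
    h.remove()

# A real pipeline would write one shard per (layer, batch) and globally
# shuffle shards before training sparse dictionary-learning models on them.
torch.save(captured, "activations_shard_0000.pt")
```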

This role is based in the San Francisco office; however, we are open to considering exceptional candidates for remote work on a case-by-case basis.

Compensation

Annual salary: $315,000 - $560,000 USD

Application

View the listing at its original source and apply!
