Anthropic · San Francisco/New York City · Hybrid

[Expression of Interest] Research Scientist/Engineer, Honesty

1/11/2025

Description

As a Research Scientist/Engineer focused on honesty within the Finetuning Alignment team, you'll spearhead the development of techniques to minimize hallucinations and enhance truthfulness in language models. You will build robust systems that are accurate, that reflect their true levels of confidence, and that avoid being deceptive or misleading. This work is critical to ensuring our models maintain high standards of accuracy and honesty across diverse domains.

Note: The team is based in New York, so we have a preference for candidates who can be based there. For this role, we conduct all interviews in Python. We have filled our headcount for 2025; however, we are leaving this form open as an expression of interest, since we expect to grow the team in the future and will review your application when we do. As such, you may not hear back on your application to this team until the new year.

Responsibilities

  • Design and implement novel data curation pipelines to identify, verify, and filter training data for accuracy given the model’s knowledge
  • Develop specialized classifiers to detect potential hallucinations or miscalibrated claims made by the model (a minimal illustrative sketch follows this list)
  • Create and maintain comprehensive honesty benchmarks and evaluation frameworks
  • Implement techniques to ground model outputs in verified information, such as search and retrieval-augmented generation (RAG) systems
  • Design and deploy human feedback collection specifically for identifying and correcting miscalibrated responses
  • Design and implement prompting pipelines to generate data that improves model accuracy and honesty
  • Develop and test novel RL environments that reward truthful outputs and penalize fabricated claims
  • Create tools to help human evaluators efficiently assess model outputs for accuracy
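As a minimal illustration of the kind of miscalibration flagging mentioned above, the sketch below filters claims that are asserted with high confidence but fail a verification check. The Claim structure, threshold, and toy data here are hypothetical and for illustration only; a real pipeline would score claims with trained classifiers or retrieval-backed verifiers rather than hand-labeled records.

    """Illustrative sketch only: flagging potentially miscalibrated claims.

    The Claim structure, threshold, and toy data are assumptions for
    illustration, not a description of Anthropic's actual pipeline.
    """

    from dataclasses import dataclass


    @dataclass
    class Claim:
        text: str            # a single factual claim extracted from a model response
        confidence: float    # model-reported confidence in [0, 1]
        supported: bool      # whether a verification check supports the claim


    def flag_overconfident(claims, threshold=0.8):
        """Return claims asserted with high confidence but not supported by
        verification -- candidates for hallucination review or data filtering."""
        return [c for c in claims if c.confidence >= threshold and not c.supported]


    if __name__ == "__main__":
        toy_claims = [
            Claim("The Eiffel Tower is in Paris.", 0.99, True),
            Claim("The Eiffel Tower was completed in 1925.", 0.90, False),  # overconfident and unsupported
            Claim("It may have been repainted in 2021.", 0.40, False),      # hedged, below threshold
        ]
        for claim in flag_overconfident(toy_claims):
            print("Review:", claim.text)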

Qualifications

  • Have an MS/PhD in Computer Science, ML, or related field
  • Possess strong programming skills in Python
  • Have industry experience with language model finetuning and classifier training
  • Show proficiency in experimental design and statistical analysis for measuring improvements in calibration and accuracy
  • Care about AI safety and the accuracy and honesty of both current and future AI systems
  • Have experience in data science or the creation and curation of datasets for finetuning LLMs
  • Have an understanding of various metrics of uncertainty, calibration, and truthfulness in model outputs (a worked calibration-metric example follows this list)
  • Have published work on hallucination prevention, factual grounding, or knowledge integration in language models
  • Have experience with fact-grounding techniques
  • Have a background in developing confidence estimation or calibration methods for ML models
  • Have a track record of creating and maintaining factual knowledge bases
  • Are familiar with RLHF specifically applied to improving model truthfulness
  • Have worked with crowd-sourcing platforms and human feedback collection systems
  • Have experience developing evaluations of model accuracy or hallucinations
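As a concrete example of one such calibration metric, the sketch below computes expected calibration error (ECE) on toy data: predictions are binned by reported confidence, and the gap between average confidence and empirical accuracy is averaged across bins, weighted by bin size. The bin count, data, and function name are assumptions chosen for illustration.

    """Illustrative sketch only: a toy expected calibration error (ECE) computation."""

    def expected_calibration_error(confidences, correctness, n_bins=10):
        """Bin predictions by confidence, then compare average confidence
        to empirical accuracy within each bin, weighted by bin size."""
        assert len(confidences) == len(correctness)
        n = len(confidences)
        ece = 0.0
        for b in range(n_bins):
            lo, hi = b / n_bins, (b + 1) / n_bins
            # The top bin also includes confidence exactly equal to 1.0.
            in_bin = [i for i, c in enumerate(confidences)
                      if lo <= c < hi or (b == n_bins - 1 and c == hi)]
            if not in_bin:
                continue
            avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
            accuracy = sum(correctness[i] for i in in_bin) / len(in_bin)
            ece += (len(in_bin) / n) * abs(accuracy - avg_conf)
        return ece


    if __name__ == "__main__":
        # Toy data: model-reported confidences and whether each answer was correct.
        confs = [0.95, 0.90, 0.80, 0.60, 0.55, 0.30]
        correct = [1, 1, 0, 1, 0, 0]
        print(f"ECE: {expected_calibration_error(confs, correct):.3f}")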

Annual Salary

$315,000 - $340,000 USD

Application

View the listing at its original posting and apply!