
As a Research Scientist/Engineer focused on honesty within the Finetuning Alignment team, you'll spearhead the development of techniques to minimize hallucinations and enhance truthfulness in language models. You will build robust systems that are accurate, reflect their true levels of confidence, and avoid being deceptive or misleading. Your work will be critical to ensuring our models maintain high standards of accuracy and honesty across diverse domains.
Note: The team is based in New York, so we have a preference for candidates who can be based there. For this role, we conduct all interviews in Python. We have filled our headcount for 2025; however, we are leaving this form open as an expression of interest, since we expect to grow the team in the future and will review your application when we do. As such, you may not hear back on your application to this team until the new year.