Description
We are looking for researchers to help mitigate the risks that come with building AI systems. One of these risks is the potential for models to interact with private user data. In this role, you'll design and implement privacy-preserving techniques, audit our existing methods, and set the direction for how Anthropic handles privacy more broadly.
Responsibilities:
- Lead our privacy analysis of frontier models, carefully auditing the use of data and ensuring safety throughout the process
- Develop privacy-first training algorithms and techniques
- Develop evaluation and auditing techniques to measure the privacy properties of training algorithms
- Work with a small, senior team of engineers and researchers to enact a forward-looking privacy policy
- Advocate on behalf of our users to ensure responsible handling of all data