Adversarially robust reasoning, coding, and tool-use capabilities under prompt injection and jailbreak attacks.
Adherence to privacy norms, both with and without adversarial prompting.
Adversarial techniques against generative models through multi-modal inputs.
New model architectures that are secure-by-design against prompt injections.
Detecting attacks in the wild and developing mitigations for them.
Identify unsolved, impactful privacy and security research problems, inspired by the need to protect frontier capabilities. Research novel solutions by studying related work, running offline and online experiments, and building prototypes and demos.
Verify research ideas in the real world by driving and growing collaborations with Gemini teams working on safety, evaluations, and other related areas to land new innovations together.
Amplify the impact by generalizing solutions into reusable libraries and frameworks for protecting Gemini and product models across Google, and by sharing knowledge through publications, open source, and education.
Ph.D. in Computer Science or a related quantitative field, or B.S./M.S. in Computer Science or a related quantitative field with 5+ years of relevant experience.
Self-directed engineer/research scientist who can drive new research ideas from conception through experimentation to productionization in a rapidly shifting landscape.
Strong research experience with LLMs and publications in ML security, privacy, safety, or alignment.
Experience with JAX, PyTorch, or similar machine learning platforms.
A track record of landing research impact in multi-team collaborative environments with senior stakeholders.
The US base salary range for this full-time position is between $166,000 and $244,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.