Apple · Seattle

AIML - ML Engineer, Safety & Red Teaming

(f/m/d) · 1/8/2025

Description

  • Develop models, tools, metrics, and datasets for assessing and evaluating the safety of generative models across the model deployment lifecycle
  • Develop models and tools to interpret and explain failures in language and diffusion models
  • Build and maintain human annotation and red teaming pipelines to assess the quality and risk of various Apple products
  • Prototype, implement, and evaluate new ML models and algorithms for red teaming LLMs

Qualifications

  • Strong engineering skills and experience writing production-quality code in Python, Swift, or other programming languages
  • Background in generative models, natural language processing, LLMs, or diffusion models
  • Experience with failure analysis, quality engineering, or robustness analysis for AI/ML-based features
  • Experience working with crowd-based annotations and human evaluations
  • Experience working on explainability and interpretation of AI/ML models
  • Ability to work with highly sensitive material, including exposure to offensive and controversial content

Preferred Qualifications

  • BS, MS, or PhD in Computer Science, Machine Learning, or a related field, or an equivalent qualification acquired through other avenues
  • Proven track record of contributing to diverse teams in a collaborative environment

Application

View listing at origin and apply!