Apple · Seattle

AIML - Machine Learning Engineer, Siri and Information Intelligence

(f/m/d) · 3/21/2025

Description

As a member of our fast-paced group, you’ll have the unique and rewarding opportunity to shape upcoming products from Apple. We are looking for highly motivated machine learning engineers and researchers with strong machine learning and deep learning fundamentals and hands-on experience in fine-tuning deep learning models and large language models. This role has the following responsibilities:

  • Conduct research and development on state-of-the-art deep learning and large language models for various tasks and applications in Apple’s AI-powered products
  • Develop, fine-tune, and evaluate domain-specific large language models for NLP tasks including summarization, question answering, search relevance/ranking, entity linking, and query understanding
  • Conduct applied research to transfer cutting-edge research in generative AI into production-ready technologies
  • Understand product requirements and translate them into modeling and engineering tasks
  • Stay up to date with the latest advancements and research in deep learning and large language models

Qualifications

  • Master’s in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
  • Experience with deep learning or LLM development for various NLP tasks, including prompt engineering, training data collection and generation, model fine-tuning, and model evaluation
  • Experience working with Python and at least one deep learning framework such as TensorFlow, PyTorch, or JAX.

Preferred Qualifications

  • PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
  • At least 1 year of experience with state-of-the-art LLM fine-tuning techniques in one or more of the following areas:
      • Supervised fine-tuning (SFT) with rejection sampling
      • Preference-based fine-tuning techniques (e.g., RLHF, reward modeling, DPO, PPO, GRPO)
      • Parameter-efficient fine-tuning techniques (e.g., LoRA)
      • Hallucination reduction and factual accuracy improvements
      • Designing and implementing safety guardrails
  • At least 4 years of experience with large-scale model training, optimization, and deployment
  • One or more scientific publications in peer-reviewed conferences or journals
  • Experience with Question Answering, RAG, and Summarization

Application

View listing at origin and apply!