Conduct high-quality research in areas related to AI safety, ethics, and responsibility.
Design, implement, and run experiments to test new ideas and hypotheses.
Analyze and interpret experimental results, and communicate findings to the team.
Collaborate with cross-functional teams (e.g., Research, Engineering, Policy) to identify and address key challenges in responsible AI.
Contribute to the development of robust and scalable ML systems and tools.
Stay abreast of the latest advancements in AI, machine learning, LLMs, and AI ethics and safety.
Contribute to research papers and publications.
Qualifications
PhD in Computer Science, Machine Learning, or a related technical field, or equivalent practical experience.
3+ years of experience in a research or engineering role.
Proven track record of contributing to complex machine learning projects.
Deep understanding and hands-on experience with state-of-the-art Machine Learning techniques, particularly in Deep Learning and Large Language Models (LLMs).
Strong programming skills, preferably in Python, and experience with common ML frameworks (e.g., TensorFlow, PyTorch, JAX).
Excellent communication and interpersonal skills, with the ability to explain technical concepts to both technical and non-technical audiences.
Preferred Qualifications
Master's or PhD in Computer Science, Machine Learning, or a related field.
Experience working in AI ethics, safety, or responsibility research.
Experience with large-scale data processing and distributed systems.
Publications or contributions to the machine learning research community.
Experience working in a fast-paced, research-driven environment.
Familiarity with the sociotechnical aspects of AI systems.
Benefits
The US base salary range for this full-time position is $166,000 - $244,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.