Description
Our team is seeking extraordinary machine learning and GPU programming engineers who are passionate about building robust compute solutions for accelerating machine learning libraries on Apple Silicon. This role offers the opportunity to influence the design of compute and programming models in next-generation GPU architectures.
Responsibilities:
Work on a cutting-edge ML inference framework and optimize code for efficient, scalable ML inference using distributed compute strategies such as data, tensor, pipeline, and expert parallelism.
Develop kernel- and compiler-level optimizations and perform in-depth analysis to ensure the best possible performance across server hardware families.
Apply advanced model optimization techniques, including speculative decoding, quantization, and compression, to maximize throughput and minimize latency.
Collaborate closely with hardware, compiler, and systems teams to align software performance with hardware capabilities.
Analyze and improve performance metrics such as end-to-end latency, time to first token (TTFT), time between output tokens (TBOT), memory footprint, and compute efficiency.
Implement features of the Metal device backend for ML training acceleration technologies.
If this sounds interesting, we would love to hear from you!