- Design and implement comprehensive evaluation frameworks to measure LLM capabilities across diverse customer use cases, including text generation, reasoning, code, and domain-specific applications
- Build scalable evaluation infrastructure and pipelines that enable rapid, reproducible assessment of model performance
- Develop novel evaluation methodologies to assess emerging capabilities and verticalized use cases (cybersecurity, finance, healthcare, etc.), and enable the Solutions teams (Deployment Strategists and Applied AI) on these topics
- Create custom evaluation suites tailored to enterprise customers' specific needs, working closely with them to understand their requirements and success criteria
- Collaborate with research teams to translate evaluation insights into model improvements and training decisions
- Partner with product teams to continuously improve our evaluation tooling based on customer feedback
How We Work in Applied AI
- We care about people and outputs.
- What matters is what you ship, not the time you spend on it.
- Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.
- Always ask why. The best solutions come from deep understanding, not from copying what worked before.
- We say what we mean. Feedback is direct, timely, and given because we care.
- No politics. Low ego, high standards.
- We embrace an unstructured environment and find joy in it.