We are looking for an experienced Research Engineer to join the Data Ingestion team, which owns the problem of acquiring all of the available data on the internet through a large-scale web crawler. Most of Anthropic's research and products build on top of the best pretrained models we can produce, which in turn rely on having the best pretraining data. This role combines hands-on engineering with data research: you'll build and scale our crawler infrastructure while also running experiments to evaluate and improve data quality. Successfully scaling our data corpus is critical to our continued efforts to produce the best pretrained models.
Responsibilities:
Develop and maintain our large-scale web crawler
Design and run experiments to evaluate data quality, extraction methods, and crawling strategies
Analyze crawled data to identify patterns, gaps, and opportunities for improvement
Build pipelines for data ingestion, analysis, and quality improvement
Build specialized crawlers for high-value data sources
Collaborate with the Pretraining and Tokens teams to create feedback loops between crawled data and data evaluation results
Collaborate with team members on improving data acquisition processes
Participate in code reviews and debugging sessions
You may be a good fit if you:
Believe in the transformative potential of advanced AI systems
Are interested in building a large-scale system to acquire all openly accessible information on the internet
Have experience with data research, including designing experiments and analyzing results
Have worked on web crawlers or large-scale data acquisition systems
Are comfortable operating in a hybrid research-engineering role, balancing system building with experimentation