
Our goals:
1) The CFO Org has the data required to be Public Company Ready.
2) The CFO Org has all the data it needs to execute swiftly on our AI-first roadmap.
3) Controllership can close the books without manual spreadsheets, in the shortest timeframe, and with zero material risk.
As a Data Engineer on the Finance Data team, you will set the foundation to scale analytics across our business functions and establish data best practices for a rapidly growing organization. We aspire to build the Finance team of the future.
In addition, you will work collaboratively with key stakeholders in Finance and other business teams to understand their pain points and take the lead in proposing viable, future-proof solutions. You will also independently lead projects that deliver business impact and help cultivate a mature data culture across Finance teams.
We are looking for a seasoned Data Engineer with a proven track record of owning the entire data stack at high-transaction-volume companies and managing business-critical ETL pipelines consumed by non-technical teams. In this role you will support the Head of Supply Chain Systems and lead our manufacturing and OTC data mart strategy, with the goal of enabling end-to-end asset and inventory traceability across our supply chain infrastructure by the end of 2026.
We need someone who excels in dynamic environments, adapts quickly to changing needs, and confidently navigates ambiguous or evolving requirements. If you're energized by solving technical problems without a playbook and comfortable wearing multiple hats, this role is for you! To clarify, you will not be responsible for training ML models, nor would we describe this role as 'product analytics'.
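To give a purely illustrative flavor of the pipelines involved, here is a minimal sketch of a scheduled ETL job, assuming Airflow 2.4+; the DAG id, source feeds, and transform logic are hypothetical placeholders, not a description of our actual stack.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def build_inventory_snapshot(**context):
    # Placeholder: join ERP and warehouse feeds into a daily traceability table.
    # Real transform logic would live in a tested, versioned module.
    pass


with DAG(
    dag_id="inventory_traceability_daily",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # `schedule` requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="build_inventory_snapshot",
        python_callable=build_inventory_snapshot,
    )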
3+ years of experience as a data engineer and 8+ years of total software engineering experience (including data engineering).
Proficiency in at least one programming language commonly used in data engineering, such as Python, Scala, or Java.
Experience with distributed processing technologies and frameworks such as Hadoop or Flink, and with distributed storage systems (e.g., HDFS, S3).
Expertise with ETL schedulers such as Airflow, Dagster, Prefect, or similar frameworks.
Solid understanding of Spark and the ability to write, debug, and optimize Spark code (see the illustrative sketch below).
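For illustration only, a minimal PySpark sketch of the kind of transform and optimization this role would own, assuming PySpark 3.x; the bucket paths, table names, and columns are hypothetical placeholders.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("otc_order_facts").getOrCreate()

# Hypothetical lake locations; substitute real source paths.
orders = spark.read.parquet("s3://finance-lake/raw/erp_orders/")
sku_dim = spark.read.parquet("s3://finance-lake/dims/sku/")

# Broadcasting the small dimension table avoids shuffling the large fact table,
# a routine Spark optimization.
order_facts = (
    orders.join(F.broadcast(sku_dim), "sku_id", "left")
          .groupBy("order_date", "product_family")
          .agg(F.sum("order_quantity").alias("units_ordered"))
)

order_facts.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://finance-lake/marts/otc_order_facts/"
)

Partitioning the mart by order_date is one way to keep downstream finance queries pruned to the close period they care about.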
This role is exclusively based in our San Francisco HQ. We offer relocation assistance to new employees.