and data stores to support organizational requirements.

Skills/Qualifications:
- 4+ years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark
- 3+ years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines
- 3+ years of proficiency working with Snowflake or similar cloud
Data Factory or equivalent cloud ETL tools, with experience building scalable, maintainable pipelines, is essential.
- Extensive experience as a senior data or integrations engineer
- Hands-on experience with Python, PySpark, or Spark in an IDE; Databricks highly preferred
- Proven track record in complex data engineering environments, including data integration and orchestration
- Experience integrating external systems via REST APIs
tooling, and engineering standards. We'd love to talk to you if you have:
- Proven experience delivering scalable, cloud-based data platforms using tools such as Databricks, Spark or PySpark, and services from AWS, Azure, or GCP
- Experience in line management and people development: you've supported engineers with regular 1:1s, development planning, and performance conversations, and are