… Contract role (6 to 12 months)
Skills/Qualifications:
- 4+ years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc.
- 3+ years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines
- 3+ years of proficiency working with Snowflake or a similar cloud …
(A short PySpark sketch follows this listing.)
Binley Woods, Warwickshire, UK Hybrid / WFH Options
Lorien
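As context for the skills listed above, here is a minimal, hedged sketch of the kind of PySpark pipeline step such a role involves. The paths, table, and column names are hypothetical illustrations, not taken from the posting.

```python
# Hypothetical PySpark batch step: clean raw order events and produce a
# warehouse-friendly daily rollup. Paths and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Read raw events (hypothetical path), keep completed orders only.
orders = spark.read.parquet("/data/raw/orders")
daily = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),
    )
)

# Write a partitioned, analytics-ready output.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/orders_daily"
)
```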
…/executing tests
Requirements
- Strong experience as a Data Engineer (migrating legacy systems onto AWS, building data pipelines)
- Strong Python experience
- Tech stack experience required: AWS Glue, Redshift, Lambda, PySpark, Airflow
- SSIS or SAS experience (desirable)
Benefits
- Salary up to £57,500 + up to 20% bonus
- Hybrid working: once a fortnight in the office
- 28 days holiday plus …
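For the stack this posting lists, Airflow typically orchestrates Glue (PySpark) jobs that land data for Redshift. Below is a hedged sketch of such a DAG; the DAG id, job name, and region are hypothetical, and it assumes the apache-airflow-providers-amazon package is installed and AWS credentials are configured.

```python
# Hypothetical Airflow DAG that triggers a pre-deployed AWS Glue job.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="legacy_migration_nightly",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Runs a Glue (PySpark) ETL job that prepares data for Redshift.
    run_glue_etl = GlueJobOperator(
        task_id="run_glue_etl",
        job_name="legacy_to_redshift_etl",  # hypothetical Glue job name
        region_name="eu-west-2",
    )
```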
What You'll Actually Do:
- Design & Build Data Pipelines: Take full ownership of designing, building, and managing the full lifecycle of complex data pipelines using Azure Data Factory, Databricks (Python/PySpark), and advanced SQL.
- Productionise Databricks: Lead the development of robust, scalable solutions on Databricks. This is a role focused on production code, Delta Lake, Structured Streaming, and Spark performance tuning, not …
- … in implementing and managing CI/CD pipelines for data solutions, specifically using Azure DevOps.
- Expert Programming Skills: Expert-level skills for data transformation and automation, especially in Python (PySpark) and advanced SQL.
- Data Warehousing & Modelling: Proven experience in data warehousing principles and designing data models (e.g., dimensional, medallion) to support analytics.
- Exceptional Communication: The ability to translate complex …
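To illustrate the "production code, Delta Lake, Structured Streaming" and medallion-model emphasis above, here is a minimal sketch, assuming a Databricks-style environment with Delta Lake available: a streaming hop from a bronze Delta table to a cleaned silver table. All paths and column names are hypothetical.

```python
# Hypothetical bronze -> silver Structured Streaming job on Delta Lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Stream raw events from the bronze layer (hypothetical path).
bronze = spark.readStream.format("delta").load("/mnt/lake/bronze/events")

# Basic quality gate plus watermarked de-duplication on event time.
silver = (
    bronze
    .filter(F.col("event_id").isNotNull())
    .withWatermark("event_time", "1 hour")
    .dropDuplicates(["event_id", "event_time"])
    .withColumn("ingested_at", F.current_timestamp())
)

# Append into the silver layer with a checkpoint for exactly-once recovery.
(
    silver.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/silver_events")
    .outputMode("append")
    .start("/mnt/lake/silver/events")
)
```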