City of London, London, United Kingdom Hybrid / WFH Options
ECS
Azure services. Requirements: 10+ years in cloud data engineering, with a strong focus on building scalable data pipelines. Expertise in Azure Databricks, including building and managing ETL pipelines using PySpark or Scala. Solid understanding of Apache Spark, Delta Lake, and distributed data processing concepts. Hands-on experience with Azure Data Lake Storage, Azure Data Factory, and Azure Synapse Analytics.
Key Skills: Strong SQL skills and experience with relational databases. Hands-on experience with Azure (ADF, Synapse, Data Lake) or AWS/GCP equivalents. Familiarity with scripting languages (Python, PySpark). Knowledge of data modelling and warehouse design (Kimball, Data Vault). Exposure to Power BI to support optimised data models for reporting. Agile team experience, CI/CD
leadership and upskilling responsibilities. Key Responsibilities: Build and maintain Databricks Delta Live Tables (DLT) pipelines across Bronze → Silver → Gold layers, ensuring quality, scalability, and reliability. Develop and optimise Spark (PySpark) jobs for large-scale distributed processing. Design and implement streaming data pipelines with Kafka/MSK, applying best practices for late event handling and throughput. Use Terraform and CI
Employment Type: Contract
Rate: Up to £0.00 per day + Flexible depending on experience
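The Bronze → Silver → Gold (medallion) layering named in the role above can be sketched in plain Python. A real implementation would declare these stages as Delta Live Tables over Spark DataFrames on Databricks; here plain dicts stand in for rows, and every function and field name is an illustrative assumption, not the employer's actual pipeline:

```python
# Minimal sketch of medallion-style layering (Bronze -> Silver -> Gold).
# In DLT these would be @dlt.table functions over Spark DataFrames; all
# names and fields here are hypothetical.

def bronze_layer(raw_events):
    """Bronze: land raw events as-is, tagging each with its layer."""
    return [{**e, "layer": "bronze"} for e in raw_events]

def silver_layer(bronze_rows):
    """Silver: enforce quality - drop rows missing an amount, cast types."""
    silver = []
    for row in bronze_rows:
        if row.get("amount") is None:
            continue  # a DLT expectation would quarantine this row instead
        silver.append({"user": row["user"], "amount": float(row["amount"])})
    return silver

def gold_layer(silver_rows):
    """Gold: aggregate into a reporting-ready total per user."""
    totals = {}
    for row in silver_rows:
        totals[row["user"]] = totals.get(row["user"], 0.0) + row["amount"]
    return totals

raw = [{"user": "a", "amount": "10"}, {"user": "a", "amount": None},
       {"user": "b", "amount": "5"}]
print(gold_layer(silver_layer(bronze_layer(raw))))  # {'a': 10.0, 'b': 5.0}
```

Each layer only consumes the previous one's output, which is what makes the pattern easy to test and to rebuild from raw data.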
is optimized. YOUR BACKGROUND AND EXPERIENCE: 5 years of commercial experience working as a Data Engineer. 3 years' exposure to the Azure stack - Databricks, Synapse, ADF. Python and PySpark. Airflow for orchestration. Test-Driven Development and Automated Testing. ETL Development.
City of London, London, United Kingdom Hybrid / WFH Options
Syntax Consultancy Limited
modelling techniques + data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design. Implementing complex data transformations using Spark, PySpark or Scala + working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control + Agile development environments. Azure Data Engineer certification preferred - e.g.
London, South East, England, United Kingdom Hybrid / WFH Options
Lorien
CDC. Knowledge of public/enterprise cloud technologies (AWS EC2, S3, GCP, Azure) is advantageous but not required. Some skills/experience with automated testing frameworks (Java, Python, PySpark, Bitbucket, GitLab, Jenkins) is advantageous but not required. Strong environment management skills. Carbon60, Lorien & SRG - The Impellam Group STEM Portfolio are acting as an Employment Business in relation to
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
AWS/Azure - moving towards Azure). Collaborate with stakeholders and technical teams to deliver solutions that support business growth. Skills & Experience Required: Strong hands-on experience in Python, PySpark, SQL, Jupyter. Experience in Machine Learning engineering or data-focused development. Exposure to working in cloud platforms (AWS/Azure). Ability to collaborate effectively with senior engineers
City of London, London, United Kingdom Hybrid / WFH Options
Sanderson Recruitment
Team Leading experience - REQUIRED/Demonstrable on CV (full support from Engineering Manager is also available). Hands-on development/engineering background. Machine Learning or Data background. Technical Experience: PySpark, Python, SQL, Jupyter. Cloud: AWS, Azure (cloud environment) - moving towards Azure. Nice to Have: Astro/Airflow, Notebook. Reasonable Adjustments: Respect and equality are core values to us. We
contract in London. Job description: Role Title: SAS Admin. Must have skills: Primary Skills - SAS Admin, Enterprise Guide, basic SAS coding skills. Secondary Skills - Unix Scripting, GitLab, YAML, Autosys, PySpark, Snowflake, AWS, Agile practice, SQL. Candidate should have strong experience in SAS Administration and expert SAS coding skills. Should have more than 6 years' experience. Should have
Contract duration: 6 months (can be extended). Location: London. Must have skills: Primary Skills - SAS Admin, Enterprise Guide, basic SAS coding skills. Secondary Skills - Unix Scripting, GitLab, YAML, Autosys, PySpark, Snowflake, AWS, Agile practice, SQL. Candidate should have strong experience in SAS Administration and expert SAS coding skills. Should have more than 6 years' experience. Should have very good
Data Developer for an urgent contract assignment. Key Requirements: Proven background in AI and data development. Strong proficiency in Python, including data-focused libraries such as Pandas, NumPy, and PySpark. Hands-on experience with Apache Spark (PySpark preferred). Solid understanding of data management and processing pipelines. Experience in algorithm development and graph data structures is advantageous. Active SC
with a focus on performance, scalability, and reliability. Responsibilities: Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks. Develop scalable ETL processes using PySpark and Python. Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation. Ensure data quality, governance, and performance throughout the migration lifecycle. Document technical processes and support knowledge transfer to internal teams. Required Skills: Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL. Proven track record in delivering data migration projects within Azure environments. Ability to work independently and communicate effectively with technical and non-technical stakeholders. Previous experience in consultancy or client-facing roles is advantageous.
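The legacy-to-target mapping and transformation work described in this migration role can be sketched as follows. In an Azure Databricks job this would typically be a PySpark `select`/`withColumnRenamed` step; plain Python keeps the sketch self-contained, and the legacy column names, target schema, and quality rule are invented for illustration:

```python
# Hypothetical legacy-to-target column mapping for one migration pipeline step.
# A real pipeline would express this as a PySpark job orchestrated by Azure
# Data Factory; the mapping and validation rule below are illustrative only.

LEGACY_TO_TARGET = {"CUST_NM": "customer_name", "ORD_AMT": "order_amount"}

def migrate_row(legacy_row):
    """Rename legacy columns and flag rows that fail a basic quality check."""
    row = {LEGACY_TO_TARGET.get(k, k): v for k, v in legacy_row.items()}
    row["valid"] = row.get("order_amount", 0) >= 0  # simple data-quality rule
    return row

print(migrate_row({"CUST_NM": "Acme", "ORD_AMT": 42}))
# {'customer_name': 'Acme', 'order_amount': 42, 'valid': True}
```

Keeping the mapping in one declarative table makes the transformation easy to review with stakeholders who know only the legacy schema.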
London, South East, England, United Kingdom Hybrid / WFH Options
Oliver James
I'm currently working with a leading insurance broker who is looking to hire a Lead Azure Data Engineer on an initial 12-month fixed-term … an Azure-based data lakehouse. Key requirements: * Proven experience working as a principal or lead data engineer * Strong background working with large datasets, with proficiency in SQL, Python, and PySpark * Experience managing and mentoring engineers with varying levels of experience * Hands-on experience deploying pipelines within Azure Databricks, ideally following the Medallion Architecture framework. Hybrid working: Minimum two days