data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively …
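The listing above asks for data quality checks and validation rules inside pipelines. As an illustrative sketch only (all field names and thresholds here are hypothetical, not taken from any employer's stack), a row-level validation stage that quarantines bad records rather than silently dropping them might look like:

```python
# Illustrative only: a minimal row-level data quality pass of the kind a
# pipeline's validation stage might run. All names are hypothetical.

def validate_row(row):
    """Return a list of rule violations for one record (empty = valid)."""
    errors = []
    if not row.get("id"):
        errors.append("missing id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    return errors

def split_valid_invalid(rows):
    """Partition records so failures can be quarantined for inspection."""
    valid, invalid = [], []
    for row in rows:
        errs = validate_row(row)
        (invalid if errs else valid).append((row, errs))
    return [r for r, _ in valid], invalid

rows = [{"id": 1, "amount": 9.5}, {"id": None, "amount": -2}]
good, bad = split_valid_invalid(rows)
```

The same shape translates directly to PySpark (a filter plus a quarantine write) once the rules are expressed over DataFrame columns.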
City of London, London, United Kingdom Hybrid / WFH Options
Billigence
Data architecture and solution design experience Hands-on experience with modern data tools such as dbt, Fivetran, Matillion, or similar data integration platforms Programming skills in Python, Java, or Scala Relevant cloud certifications (SnowPro, Databricks Certified, AWS/Azure/GCP Data Engineering certifications) Experience with DataOps, CI/CD practices, and infrastructure-as-code Knowledge of data governance, data …
City of London, London, United Kingdom Hybrid / WFH Options
Az-Tec Talent
and collaboratively within client teams. Desirable: Consulting experience or client-facing delivery background. Familiarity with tools such as dbt, Fivetran, Matillion , or similar. Programming skills in Python, Java, or Scala . Cloud certifications (SnowPro, Databricks Certified, AWS/Azure/GCP). Knowledge of DataOps, CI/CD , and infrastructure-as-code concepts. What’s on Offer Hybrid working model …
on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. Snowflake, BigQuery, Redshift). Strong understanding of data governance, security, and compliance (experience within financial services is a plus). Leadership experience …
London, South East, England, United Kingdom Hybrid / WFH Options
CV TECHNICAL LTD
AI Services, Azure OpenAI, AI Search, Data Lake Storage, Data Factory, Databricks, HDInsight, Azure Synapse Analytics, Azure SQL Database, Functions, and Azure DevOps. Strong programming skills in Python, Java, Scala, R, or .NET/C# . Proficiency in SQL and database design . Solid understanding of data models, data mining, analytics, and segmentation techniques . Experience with ETL pipelines and …
Leeds, England, United Kingdom Hybrid / WFH Options
KPMG UK
to having resided in the UK for at least the past 5 years and being a UK national or dual UK national. Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, Mongo DB etc. Experience with the design, build and maintenance of data pipelines and infrastructure …
flow issues, optimize performance, and implement error-handling strategies. Optional scripting skills for creating custom NiFi processors. Programming & Data Technologies: Proficiency in Java and SQL. Experience with C# and Scala is a plus. Experience with ETL tools and big data platforms. Knowledge of data modeling, replication, and query optimization. Hands-on experience with SQL and NoSQL databases is desirable. Familiarity …
grow our collective data engineering capability. What we’re looking for Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong …
in consultancy or client-facing roles is highly desirable. Familiarity with CI/CD pipelines and version control tools (e.g., Git, Azure DevOps). Desirable Exposure to Python or Scala for data engineering tasks. Knowledge of Power BI or other visualisation tools. Experience with data governance frameworks and metadata management. If you’re interested, get in touch ASAP with a …
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
DevOps teams to deliver robust streaming solutions. Required: • Hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.) • Strong proficiency in Java, Python, or Scala • Solid understanding of event-driven architecture and data streaming patterns • Experience deploying Kafka on cloud platforms such as AWS, GCP, or Azure • Familiarity with Docker, Kubernetes, and CI/CD …
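The event-driven architecture this role describes centres on producers publishing to topics that many consumers subscribe to. As a hedged sketch of that pattern only (an in-memory bus stands in for a real Kafka broker; `EventBus` and the topic name are invented for illustration):

```python
# Sketch of the publish/subscribe pattern behind event-driven architecture.
# An in-memory topic -> handlers map stands in for a Kafka broker; in
# production, kafka-python or confluent-kafka clients would play this role.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a consumer callback for a topic."""
        self._handlers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver an event to every subscriber of the topic."""
        for handler in self._handlers[topic]:
            handler(event)

bus = EventBus()
seen = []
bus.subscribe("orders", lambda e: seen.append(e["order_id"]))
bus.publish("orders", {"order_id": 42})
```

The key property, shared with Kafka topics, is that producers and consumers are decoupled: neither side references the other, only the topic.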
Experience Expert in Azure Databricks (Unity Catalog, DLT, cluster management). Strong experience with Azure Data Factory, Synapse Analytics, Data Lake Storage, Stream Analytics, Event Hubs. Proficient in Python, Scala, C#, .NET, and SQL (T-SQL). Skilled in data modelling, quality, and metadata management. Experience with CI/CD and Infrastructure as Code using Azure DevOps and Terraform. Strong …
Burton-On-Trent, Staffordshire, West Midlands, United Kingdom
Crimson
these requirements. In order to secure one of these Senior Data Engineer roles you must be able to demonstrate the following experience: Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, Mongo DB etc. Experience with the design, build and maintenance of data pipelines and infrastructure …
and enhance data performance and infrastructure. Key Skills & Experience: Strong experience with SQL/NoSQL databases, data warehousing, and big data (Hadoop, Spark). Proficient in Python, Java, or Scala with solid OOP and design pattern understanding. Expertise in ETL tools and orchestration frameworks (Airflow, Apache NiFi). Hands-on experience with cloud platforms (AWS, Azure, or GCP) and their …
Greater Manchester, England, United Kingdom Hybrid / WFH Options
Searchability®
Paternity Charity Volunteer Days Cycle to work scheme And More… DATA ENGINEER – ESSENTIAL SKILLS Proven experience building data pipelines using Databricks . Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming . Experience working with Kafka (MSK) and handling real-time data . Good knowledge of Delta Lake/Delta Live Tables and the Medallion architecture . Hands …
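The Medallion architecture named in this listing stages data through bronze (raw), silver (cleaned/deduplicated), and gold (business-level aggregate) layers. As an illustrative sketch under stated assumptions (plain Python lists stand in for Delta tables; the field names are invented):

```python
# Hypothetical Medallion (bronze/silver/gold) flow. In Databricks these
# layers would be Delta tables; here plain lists keep the sketch runnable.

def to_silver(bronze):
    """Silver layer: typed, deduplicated records from raw bronze rows."""
    seen, silver = set(), []
    for rec in bronze:
        key = rec.get("id")
        if key is not None and key not in seen:
            seen.add(key)
            silver.append({"id": key, "value": float(rec["value"])})
    return silver

def to_gold(silver):
    """Gold layer: a business-level aggregate over clean silver data."""
    return {"total_value": sum(r["value"] for r in silver)}

bronze = [{"id": 1, "value": "10"}, {"id": 1, "value": "10"}, {"id": 2, "value": "5"}]
gold = to_gold(to_silver(bronze))
```

The design point is that each layer only reads from the one before it, so reprocessing can restart at any stage without touching raw ingestion.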
for all major data initiatives. DATA MANAGER - ESSENTIAL SKILLS: Proven experience as a Senior or Lead Data Engineer , Data Manager, or similar leadership role. Strong proficiency in Python (or Scala/Java) and SQL . Hands-on experience with data orchestration tools (Airflow, dbt, Dagster, or Prefect). Solid understanding of cloud data platforms (AWS, GCP, or Azure) and data …