on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. Snowflake, BigQuery, Redshift). Strong understanding of data governance, security, and compliance (experience within financial services is a plus). Leadership experience …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
databases, and financial data sources into Azure Databricks. Optimize pipelines for performance, reliability, and cost, incorporating data quality checks. Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle. Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services. Apply best practices for development …
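The cleaning, validation, and enrichment step described in this listing can be sketched in plain Python — a toy stand-in for the PySpark logic such a role involves. The field names and quality rules here are illustrative assumptions, not taken from any real pipeline:

```python
# Toy sketch of a cleaning + enrichment pass, standing in for the
# Spark (PySpark/Scala) transformations described above. Field names
# and rules are illustrative assumptions only.

def clean_and_enrich(records):
    """Drop rows failing basic quality checks, then enrich the rest."""
    cleaned = []
    for row in records:
        # Quality checks: required identifier present, amount numeric.
        if row.get("account_id") is None:
            continue
        try:
            amount = float(row["amount"])
        except (KeyError, TypeError, ValueError):
            continue
        # Enrichment: normalise currency and flag large transactions.
        cleaned.append({
            "account_id": row["account_id"],
            "amount": amount,
            "currency": str(row.get("currency", "GBP")).upper(),
            "is_large": amount >= 10_000,
        })
    return cleaned
```

In a real Databricks pipeline the same checks would typically run as DataFrame filters or Delta Live Tables expectations rather than a Python loop, but the row-level logic is the same shape.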
the estimated effort and technical implications of user stories and user journeys. Coaching and mentoring team members. MINIMUM (ESSENTIAL) REQUIREMENTS: Strong software development experience in one of Java, Scala, or Python. Software development experience with data-processing platforms from vendors such as AWS, Azure, GCP, Databricks. Experience of developing substantial components for large-scale data processing solutions and deploying …
focus on cloud-based data pipelines and architectures Strong expertise in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management Proficiency in Python, SQL, Scala, or Java Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory Strong understanding of data architecture principles, data modelling, and data governance Experience with …
Burton-on-Trent, Staffordshire, England, United Kingdom
Crimson
Experience Expert in Azure Databricks (Unity Catalog, DLT, cluster management). Strong experience with Azure Data Factory, Synapse Analytics, Data Lake Storage, Stream Analytics, Event Hubs. Proficient in Python, Scala, C#, .NET, and SQL (T-SQL). Skilled in data modelling, quality, and metadata management. Experience with CI/CD and Infrastructure as Code using Azure DevOps and Terraform. Strong …
these requirements. In order to secure one of these Senior Data Engineer roles you must be able to demonstrate the following experience: Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB, etc. Experience with the design, build and maintenance of data pipelines and infrastructure …
production-grade systems. Deep expertise in ML frameworks and engineering stacks (TensorFlow, PyTorch, JAX, Ray, MLflow, Kubeflow). Proficiency in Python and at least one backend language (e.g., Java, Scala, Go, C++). Strong understanding of cloud ML infrastructure (AWS SageMaker, GCP Vertex AI, Azure ML) and containerized deployments (Kubernetes, Docker). Hands-on experience with data and model pipelines …
for all major data initiatives. DATA MANAGER - ESSENTIAL SKILLS: Proven experience as a Senior or Lead Data Engineer, Data Manager, or similar leadership role. Strong proficiency in Python (or Scala/Java) and SQL. Hands-on experience with data orchestration tools (Airflow, dbt, Dagster, or Prefect). Solid understanding of cloud data platforms (AWS, GCP, or Azure) and data …
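The orchestration tools this listing names (Airflow, Dagster, Prefect) all revolve around one core idea: running tasks in dependency order. A minimal pure-Python sketch of that idea follows — the three-step pipeline and the tiny scheduler are illustrative, not any real tool's API:

```python
# Toy DAG runner illustrating what orchestrators like Airflow do at
# their core: execute each task only after all its upstream
# dependencies have completed.
from graphlib import TopologicalSorter

def run_dag(tasks, deps):
    """tasks: name -> callable; deps: name -> set of upstream task names."""
    order = TopologicalSorter(deps).static_order()
    results = {}
    for name in order:
        # In a real orchestrator this would dispatch an operator/task run,
        # with retries, scheduling, and logging around it.
        results[name] = tasks[name]()
    return results

# Hypothetical extract -> transform -> load pipeline.
results = run_dag(
    tasks={
        "extract": lambda: [1, 2, 3],
        "transform": lambda: "transformed",
        "load": lambda: "loaded",
    },
    deps={"extract": set(), "transform": {"extract"}, "load": {"transform"}},
)
```

`graphlib.TopologicalSorter` is in the Python standard library (3.9+); real orchestrators add scheduling, retries, and state tracking on top of exactly this dependency-ordering step.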
managers on the estimated effort and technical implications of user stories and user journeys. • Coaching and mentoring team members. MINIMUM (ESSENTIAL) REQUIREMENTS: • Strong software development experience in one of Java, Scala, or Python • Software development experience with data-processing platforms from vendors such as AWS, Azure, GCP, Databricks. • Experience of developing substantial components for large-scale data processing solutions and deploying …
Qualification We are looking for experience in the following skills: Relevant work experience in data science, machine learning, and business analytics Practical experience in a coding language - e.g., Python, R, Scala, etc. (Python preferred) Strong proficiency in database technologies - e.g., SQL, ETL, NoSQL, DW, and Big Data technologies - e.g., PySpark, Hive, etc. Experience working with both structured and unstructured data …
City of London, London, United Kingdom Hybrid/Remote Options
Syntax Consultancy Limited
+ data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design. Implementing complex data transformations using Spark, PySpark or Scala + working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control + Agile development environments. Azure Data Engineer certification preferred, e.g. Azure Data Engineer …
EC4N 6JD, Vintry, United Kingdom Hybrid/Remote Options
Syntax Consultancy Ltd
+ data integration patterns. Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design. Implementing complex data transformations using Spark, PySpark or Scala + working with SQL/MySQL databases. Experience with data quality, data governance processes, Git version control + Agile development environments. Azure Data Engineer certification preferred, e.g. Azure Data Engineer …
Liverpool, Merseyside, England, United Kingdom Hybrid/Remote Options
Lorien
a blend of the following: Proven experience as a Data Engineer, with strong background in designing and implementing data solutions in an Agile environment Proficiency in Python, Java or Scala, plus experience with SQL and NoSQL databases Experience with Microsoft SQL Server and SSRS Familiarity with cloud platforms (AWS, Azure or GCP) and containerisation (Docker, Kubernetes) Strong understanding of CI …
Central London, London, United Kingdom Hybrid/Remote Options
McCabe & Barton
Implement governance and security measures across the platform. Leverage Terraform or similar IaC tools for controlled and reproducible deployments. Databricks Development Develop and optimise data jobs using PySpark or Scala within Databricks. Implement the medallion architecture (bronze, silver, gold layers) and use Delta Lake for reliable data transactions. Manage cluster configurations and CI/CD pipelines for Databricks deployments. Monitoring …
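The medallion layering this listing asks for (bronze = raw, silver = cleaned, gold = aggregated) can be illustrated with a minimal pure-Python sketch. This is a conceptual stand-in only — in Databricks each layer would be a Delta Lake table written by a PySpark job, and the schema here is a made-up example:

```python
# Minimal medallion-architecture sketch: bronze (raw) -> silver
# (cleaned) -> gold (aggregated). Plain lists of dicts stand in for
# what would be Delta tables in a real Databricks pipeline.

def to_silver(bronze):
    """Clean raw rows: drop malformed records, coerce types."""
    silver = []
    for row in bronze:
        if not row.get("customer") or "value" not in row:
            continue  # a real pipeline would quarantine these rows
        try:
            silver.append({"customer": row["customer"], "value": float(row["value"])})
        except (TypeError, ValueError):
            continue
    return silver

def to_gold(silver):
    """Aggregate cleaned rows into per-customer totals for reporting."""
    totals = {}
    for row in silver:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["value"]
    return totals
```

The design point the medallion pattern captures is that raw data is kept intact in bronze, so silver and gold can always be rebuilt when cleaning or aggregation logic changes.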
to demonstrate the following experience: Commercial experience gained in a Data Engineering role on any major cloud platform (Azure, AWS or GCP) Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB, etc. Some experience with the design, build and maintenance of data pipelines and …
Preston, Lancashire, England, United Kingdom Hybrid/Remote Options
Circle Recruitment
processing frameworks and technologies. AWS or Azure cloud experience. Experience with data modelling, data integration, ETL processes and designing efficient data structures. Strong programming skills in Python, Java, or Scala. Data warehousing concepts and dimensional modelling experience. Any data engineering skills in Azure Databricks and Microsoft Fabric would be a bonus. This new role involves leading a data team, fostering …
Edinburgh, City of Edinburgh, United Kingdom Hybrid/Remote Options
Cathcart Technology
technologies like Kafka, Spark, Databricks, dbt, and Airflow. You'll know your way around cloud platforms (AWS, GCP, or Azure) and be confident coding in Python, Java, or Scala. Most importantly, you'll understand what it takes to design data systems that are scalable, reliable and built for the long haul. In return, they are offering a competitive …
Employment Type: Permanent
Salary: £80000 - £100000/annum Bonus, Pension and Shares
Edinburgh, Midlothian, Scotland, United Kingdom Hybrid/Remote Options
Cathcart Technology
technologies like Kafka, Spark, Databricks, dbt, and Airflow. You'll know your way around cloud platforms (AWS, GCP, or Azure) and be confident coding in Python, Java, or Scala. Most importantly, you'll understand what it takes to design data systems that are scalable, reliable and built for the long haul. In return, they are offering a competitive …
Nottingham, Nottinghamshire, England, United Kingdom Hybrid/Remote Options
BUZZ Bingo
re Looking For Essential Skills & Experience: Proven experience as a Data Engineer or similar role, with strong knowledge of data warehousing and modelling. Proficiency in C#, Python, Java, or Scala. Hands-on experience with ETL tools (e.g., SSIS) and orchestration tools (e.g., Azure Data Factory). Strong SQL skills and experience with relational databases (MSSQL, PostgreSQL, MySQL). Familiarity …
Coventry, West Midlands, United Kingdom Hybrid/Remote Options
Coventry Building Society
demonstrate, automate and manage data systems so they run smoothly and can grow easily. Familiarity with Docker, Terraform, GitHub Actions, and Vault Experience in coding SQL, Python, Spark, or Scala to work with data. Experience with databases used in Data Warehousing, Data Lakes, and Lakehouse setups. You know how to work with both structured and unstructured data. Experience in testing …
London, South East, England, United Kingdom Hybrid/Remote Options
Crimson
maintain scalable ETL pipelines to ingest, transform, and load data from diverse sources (APIs, databases, files) into Azure Databricks. Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency. Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment. Program in SQL, Python, R, YAML …
London, South East, England, United Kingdom Hybrid/Remote Options
Randstad Technologies
and monitoring in Databricks. CI/CD: knowledge of DevOps practices for data pipelines. Certifications: Azure Data Engineer or Azure Solutions Architect certifications. Mandatory Skills: Python for Data, Java, Python, Scala, Snowflake, Azure Blob, Azure Data Factory, Azure Functions, Azure SQL, Azure Synapse Analytics, Azure Data Lake, ANSI-SQL, Databricks, HDInsight. If you're excited about this role then we would …
broad range of problems using your technical skills. Demonstrable experience of utilising strong communication and stakeholder management skills when engaging with customers. Significant experience of coding in Python and Scala or Java. Experience with big data processing tools such as Hadoop or Spark. Cloud experience; GCP specifically in this case, including services such as Cloud Run, Cloud Functions, BigQuery, GCS …
Preston, Lancashire, England, United Kingdom Hybrid/Remote Options
Circle Recruitment
data pipelines, big data processing frameworks and technologies. Experience with data modelling, Databricks, data integration, ETL processes and designing efficient data structures. Strong programming skills in Python, Java, or Scala. Data warehousing concepts and dimensional modelling experience. Any data engineering skills in Azure Databricks and Microsoft Fabric would be a bonus. This new role involves building & managing a team of …