…frameworks and practices. Understanding of machine learning workflows and how to support them with robust data pipelines.
DESIRABLE LANGUAGES/TOOLS: Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks …
London (City of London), South East England, United Kingdom
Mastek
…data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines.
Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle.
Azure Databricks Implementation: Work extensively …
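As a rough illustration of the transformation-plus-quality-check step this listing describes, here is a minimal PySpark sketch; the paths, column names, and the 5% rejection threshold are all hypothetical assumptions, not part of the posting:

```python
# Hypothetical sketch: a PySpark cleaning step with simple data quality rules.
# Paths, columns, and the tolerance threshold are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleaning").getOrCreate()

raw = spark.read.parquet("/mnt/raw/orders")  # hypothetical source path

# Cleaning and enrichment: trim keys, normalise amounts, derive a date column.
cleaned = (
    raw.withColumn("customer_id", F.trim(F.col("customer_id")))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Data quality rule: reject rows with missing keys or non-positive amounts.
valid = cleaned.filter(F.col("customer_id").isNotNull() & (F.col("amount") > 0))
rejected = cleaned.subtract(valid)

# Fail fast if the rejection rate exceeds an assumed tolerance.
total, bad = cleaned.count(), rejected.count()
if total and bad / total > 0.05:
    raise ValueError(f"Data quality check failed: {bad}/{total} rows rejected")

valid.write.mode("overwrite").parquet("/mnt/curated/orders")  # hypothetical sink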
…business to deliver value-driven solutions.
What we're looking for: London/Lloyd's Market experience is essential. Strong programming skills in Python and SQL; knowledge of Java or Scala is a plus. Solid experience with relational databases and data modelling (Data Vault, Dimensional). Proficiency with ETL tools and cloud platforms (AWS, Azure or GCP). Experience working in Agile and …
…Sahaj, helping grow our collective data engineering capability.
What we’re looking for: Solid experience as a Senior Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong …
Stevenage, Hertfordshire, South East, United Kingdom Hybrid / WFH Options
Anson Mccade
…your team through problem-solving with strong technical leadership.
What you'll need: Proven track record of leading engineering teams on data-intensive projects. Strong programming skills in Java, Scala, or Python. Proficiency in SQL (including extensions for analytical workloads). Deep knowledge of distributed data stores, frameworks, and ETL/ELT platforms (e.g. Azure Databricks, Informatica). Experience applying …
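For illustration, one common "analytical extension" to SQL is a window function; the sketch below ranks orders per customer with ROW_NUMBER(), run through PySpark's SQL interface. The orders table and its columns are assumptions:

```python
# Illustrative only: a window function ranking rows per customer.
# Assumes "orders" is already registered as a table or temp view.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("analytical-sql").getOrCreate()

ranked = spark.sql("""
    SELECT customer_id,
           order_id,
           amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY amount DESC) AS rank_in_customer
    FROM orders
""")
ranked.show()
```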
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
…Azure Data Lake Storage, Azure SQL Database. Solid understanding of data modeling, ETL/ELT, and warehousing concepts. Proficiency in SQL and one or more programming languages (e.g., Python, Scala). Exposure to Microsoft Fabric, or a strong willingness to learn. Experience using version control tools like Git and knowledge of CI/CD pipelines. Familiarity with software testing methodologies and …
London (City of London), South East England, United Kingdom
Infosys
…DevOps teams to deliver robust streaming solutions.
Required:
• Hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.)
• Strong proficiency in Java, Python, or Scala
• Solid understanding of event-driven architecture and data streaming patterns
• Experience deploying Kafka on cloud platforms such as AWS, GCP, or Azure
• Familiarity with Docker, Kubernetes, and CI/CD …
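As a small illustration of the consumer side of such an event-driven pattern, here is a sketch using the kafka-python client; the topic name, broker address, group id, and JSON payload shape are all assumptions:

```python
# A minimal event-consumer sketch with the kafka-python client.
# Topic, broker, group id, and payload shape are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments.events",                    # hypothetical topic
    bootstrap_servers="localhost:9092",   # works for any Kafka distribution
    group_id="payments-processor",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Event-driven pattern: react to each record as it arrives.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```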
…large-scale data processing systems with data tooling such as Spark, Kafka, Airflow, dbt, Snowflake, Databricks, or similar. Strong programming skills in languages such as SQL, Python, Go or Scala. Demonstrable, effective use of AI tooling in your development process. A growth mindset and eagerness to work in a fast-paced, mission-driven environment. Good …
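To give a flavour of the orchestration side, below is a minimal Airflow DAG sketch wiring an extract step ahead of a transform step; the DAG id, schedule, and task bodies are illustrative placeholders (the schedule parameter shown assumes Airflow 2.4+):

```python
# A minimal Airflow DAG sketch: extract -> transform, once a day.
# DAG id, schedule, and task bodies are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from source")  # placeholder task body


def transform():
    print("run Spark/dbt transformation")  # placeholder task body


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds
```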
Manchester Area, United Kingdom Hybrid / WFH Options
TalkTalk
…and ensure.
Must Have: 8+ years’ experience in data engineering and 1+ years’ experience as a Lead Data Engineer. Proven expertise with Microsoft Azure. Strong proficiency in Databricks with hands-on Scala (Spark) and PySpark. Expertise in Databricks for large-scale data engineering and analytics workloads. Design and implement scalable data architectures using Databricks Unity Catalog, Delta Lake, and Apache Spark …
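As an illustration of the Delta Lake work mentioned above, here is a sketch of an idempotent upsert using the delta-spark MERGE API; it assumes a Databricks-provided spark session, and the Unity Catalog table name, source path, and join key are hypothetical:

```python
# A Delta Lake upsert sketch for Databricks. Assumes `spark` is the
# session provided by the Databricks runtime; names are hypothetical.
from delta.tables import DeltaTable

updates = spark.read.parquet("/mnt/landing/customers")  # hypothetical source

target = DeltaTable.forName(spark, "main.crm.customers")  # Unity Catalog name

# MERGE keeps the table consistent when the same key arrives again.
(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```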
…operations teams to ensure solutions are production-ready. Contribute to technical planning by estimating effort and assessing implications of user stories.
Essential Skills & Experience: Software development experience in Java, Scala, or Python. Hands-on experience with data platforms such as AWS, Azure, GCP, or Databricks. Proficient in SQL and analytical query extensions. Experience working with data formats like JSON and …
…real-time data pipelines and infrastructure. Hands-on experience with distributed data processing using tools like Apache Kafka, Apache Spark Streaming, or Apache Flink. Proficient in Python, Java, or Scala. Deep understanding of SQL, NoSQL, and time-series databases. Proven ability to optimise, troubleshoot, and scale data systems in production. Experience with orchestration and deployment tools like Apache Airflow and …
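For a concrete flavour of this kind of pipeline, below is a Spark Structured Streaming sketch that reads from Kafka and aggregates per-minute counts; the broker address, topic, and checkpoint location are assumptions:

```python
# A Structured Streaming sketch: Kafka source -> windowed count -> console.
# Broker, topic, and checkpoint location are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "sensor.readings")
         .load()
)

# Kafka delivers bytes; cast the payload and count events per minute.
counts = (
    events.select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp"))
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

query = (
    counts.writeStream.outputMode("complete")
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/sensor")
          .start()
)
query.awaitTermination()
```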
Contribute to the technical growth of the team through knowledge sharing, mentoring, and innovation.
Technologies & Tools: AWS Cloud Services: EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. Programming & Scripting: Java, Python, Scala, Spark, SQL. API integrations, data extraction, and transformation workflows. Experience with big data processing, analytics, and scalable architectures.
Qualifications & Skills: Strong problem-solving and analytical skills. Experience in designing and …
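As a small illustration of working with one of the AWS services listed, here is a sketch that writes a record to a Kinesis stream with boto3; the region, stream name, and event payload are hypothetical:

```python
# An illustrative boto3 sketch: push one event to a Kinesis data stream.
# Region, stream name, and payload are assumptions.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-2")

record = {"order_id": "12345", "amount": 42.50}  # hypothetical event

kinesis.put_record(
    StreamName="orders-stream",  # hypothetical stream
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["order_id"],  # keeps one order's events on one shard
)
```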