frameworks and practices. Understanding of machine learning workflows and how to support them with robust data pipelines. DESIRABLE LANGUAGES/TOOLS Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks More ❯
London (City of London), South East England, United Kingdom
Mastek
data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively More ❯
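The transformation and data-quality duties above are typical PySpark work. As a rough illustration only (not this employer's codebase; the table and column names raw_orders, order_id, and amount are hypothetical), a minimal sketch of a Spark transformation with an inline validation gate:

```python
# Illustrative sketch only: a PySpark transformation with a simple
# data-quality gate. All table/column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("quality-gate-example").getOrCreate()

raw = spark.table("raw_orders")  # hypothetical source table

# Cleaning/enrichment: trim keys, normalise the amount column's type.
cleaned = (
    raw
    .withColumn("order_id", F.trim("order_id"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Validation rule: fail the run if any primary keys are null or duplicated.
null_keys = cleaned.filter(F.col("order_id").isNull()).count()
dupes = cleaned.count() - cleaned.dropDuplicates(["order_id"]).count()
if null_keys or dupes:
    raise ValueError(
        f"Data quality check failed: {null_keys} null keys, {dupes} duplicates"
    )

cleaned.write.mode("overwrite").saveAsTable("curated_orders")
```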
business to deliver value-driven solutions. What we're looking for: London/Lloyd's Market experience is essential Strong programming skills in Python and SQL; knowledge of Java or Scala is a plus Solid experience with relational databases and data modelling (Data Vault, Dimensional) Proficiency with ETL tools and cloud platforms (AWS, Azure or GCP) Experience working in Agile and More ❯
Sahaj, helping grow our collective data engineering capability. What we’re looking for Solid experience as a Senior Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
Azure Data Lake Storage Azure SQL Database Solid understanding of data modeling, ETL/ELT, and warehousing concepts Proficiency in SQL and one or more programming languages (e.g., Python, Scala) Exposure to Microsoft Fabric, or a strong willingness to learn Experience using version control tools like Git and knowledge of CI/CD pipelines Familiarity with software testing methodologies and More ❯
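For context on the ELT pattern this role mentions, a minimal sketch of a transform-in-database step against Azure SQL Database using pyodbc; the connection details and the stg/dbo table names are hypothetical placeholders:

```python
# Illustrative sketch only: an ELT step where raw staged rows are
# transformed inside Azure SQL. Connection string and tables are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=analytics;"
    "UID=etl_user;PWD=..."  # placeholder credentials
)
cur = conn.cursor()

# Transform in SQL after loading (the "T" happens in the warehouse, not in Python).
cur.execute("""
    INSERT INTO dbo.fact_sales (sale_date, product_id, revenue)
    SELECT CAST(sale_ts AS date), product_id, SUM(amount)
    FROM stg.sales_raw
    GROUP BY CAST(sale_ts AS date), product_id
""")
conn.commit()
```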
London (City of London), South East England, United Kingdom
Infosys
DevOps teams to deliver robust streaming solutions. Required: • Hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.) • Strong proficiency in Java, Python, or Scala • Solid understanding of event-driven architecture and data streaming patterns • Experience deploying Kafka on cloud platforms such as AWS, GCP, or Azure • Familiarity with Docker, Kubernetes, and CI/CD More ❯
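For readers unfamiliar with the event-driven pattern these Kafka roles describe, a minimal producer/consumer sketch using the open-source kafka-python client; the broker address, topic, and payload shape are hypothetical:

```python
# Illustrative sketch only: publish and consume JSON events with kafka-python.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "status": "created"})
producer.flush()  # block until buffered messages are delivered

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="order-processors",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    # Each partition's events go to one consumer in the group
    # (at-least-once delivery by default).
    print(message.value)
```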
large-scale data processing systems with data tooling such as Spark, Kafka, Airflow, dbt, Snowflake, Databricks, or similar Strong programming skills in languages such as SQL, Python, Go or Scala Demonstrable, effective use of AI tooling in your development process A growth mindset and eagerness to work in a fast-paced, mission-driven environment Good More ❯
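To illustrate the orchestration side (Airflow plus dbt) mentioned above, a minimal Airflow 2.x DAG sketch; the DAG id, schedule, and dbt command are assumptions for illustration:

```python
# Illustrative sketch only: an Airflow DAG chaining an extract step into a
# dbt run. Requires Airflow 2.4+ for the `schedule` argument
# (earlier 2.x versions use `schedule_interval`).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")  # placeholder for a real extract step

with DAG(
    dag_id="daily_analytics",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = BashOperator(task_id="dbt_run", bash_command="dbt run")
    # Extract must finish before dbt transforms downstream models.
    extract_task >> transform_task
```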
City of London, London, United Kingdom Hybrid / WFH Options
Cipher7
Contribute to the technical growth of the team through knowledge sharing, mentoring, and innovation. Technologies & Tools: AWS Cloud Services : EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB Programming & Scripting : Java, Python, Scala, Spark, SQL API integrations, data extraction, and transformation workflows Experience with big data processing, analytics, and scalable architectures Qualifications & Skills: Strong problem-solving and analytical skills. Experience in designing and More ❯
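As a sketch of the Kinesis-plus-Lambda combination listed above (the stream name and payload shape are hypothetical, not this employer's setup):

```python
# Illustrative sketch only: put a record on a Kinesis stream with boto3, and
# a Lambda-style handler that consumes the resulting event batch.
import base64
import json

import boto3

kinesis = boto3.client("kinesis")
kinesis.put_record(
    StreamName="clickstream",  # hypothetical stream
    Data=json.dumps({"user_id": 1, "action": "page_view"}).encode("utf-8"),
    PartitionKey="1",
)

def handler(event, context):
    # Kinesis delivers records to Lambda base64-encoded under kinesis.data.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        print(payload["action"])
```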
London (City of London), South East England, United Kingdom
Retelligence
real-time data pipelines and infrastructure Hands-on experience with distributed data processing using tools like Apache Kafka , Apache Spark Streaming , or Apache Flink Proficient in Python , Java , or Scala Deep understanding of SQL , NoSQL , and time-series databases Proven ability to optimise, troubleshoot, and scale data systems in production Experience with orchestration and deployment tools like Apache Airflow and More ❯
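A minimal Spark Structured Streaming sketch of the real-time processing this role describes, assuming a local Kafka broker and hypothetical topic and checkpoint path:

```python
# Illustrative sketch only: windowed counts over a Kafka topic with
# Spark Structured Streaming.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-example").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "sensor-readings")  # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS raw", "timestamp")
)

# Count events per 1-minute window; checkpointing makes the query restartable.
counts = events.groupBy(F.window("timestamp", "1 minute")).count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/stream-example")
    .start()
)
query.awaitTermination()
```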
web/mobile applications or platforms with either a Java/J2EE or .NET tech stack and database technologies such as Oracle, MySQL, etc. • Exposure to polyglot programming languages like Scala, Python and Golang will be a plus • Ability to read/write code and expertise with various design patterns • Have used NoSQL databases such as MongoDB, Cassandra, etc. • Work on More ❯
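For the NoSQL requirement, a minimal MongoDB sketch using pymongo; the connection string, database, and collection names are hypothetical:

```python
# Illustrative sketch only: basic document storage and retrieval with pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection
users = client["appdb"]["users"]

users.insert_one({"name": "Ada", "roles": ["admin"]})
admin = users.find_one({"roles": "admin"})  # matches any element of the array
print(admin["name"])
```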
incl. deep learning, GenAI, LLMs, etc., as well as hands-on experience with AWS services like SageMaker and Bedrock, and programming skills such as Python, R, SQL, Java, Julia, Scala, Spark/NumPy/Pandas/scikit, JavaScript Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting More ❯
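As a rough sketch of calling Bedrock from Python with boto3 (the model ID and request body follow Anthropic's published Bedrock schema and are assumptions here; consult the target model's documented payload format):

```python
# Illustrative sketch only: invoke a foundation model via Amazon Bedrock.
import json

import boto3

bedrock = boto3.client("bedrock-runtime")
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [
            {"role": "user", "content": "Summarise this dataset schema."}
        ],
    }),
)
# The response body is a stream; read and decode the JSON payload.
print(json.loads(response["body"].read()))
```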
Experience in Cloud Data Pipelines Building cloud data pipelines involves using Azure-native tools such as PySpark or Scala on Databricks. These pipelines are essential for tasks like sourcing, enriching, and maintaining structured and unstructured data sets for analysis and reporting. They are also crucial for secondary tasks such as flow pipelines, streamlining AI model performance, and enhancing interaction More ❯
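Finally, a minimal Databricks-style PySpark sketch of the sourcing-and-enriching pipeline described above; the ADLS paths, storage account, and column names are hypothetical:

```python
# Illustrative sketch only: source raw files from ADLS, enrich, and write a
# Delta table. Assumes storage credentials are already configured on the
# cluster; all paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

raw = spark.read.json(
    "abfss://landing@mystorageacct.dfs.core.windows.net/events/"
)

enriched = (
    raw
    .withColumn("event_date", F.to_date("event_ts"))
    .withColumn("source", F.lit("web"))
)

# Delta format gives ACID writes and time travel for downstream reporting.
(
    enriched.write.format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("abfss://curated@mystorageacct.dfs.core.windows.net/events/")
)
```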