data pipelines to serve the easyJet analyst and data science community. Highly competent hands-on experience with relevant data engineering technologies, such as Databricks, Spark, the Spark API, Python, SQL Server and Scala. Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning … development experience with Terraform or CloudFormation. Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or another distributed data processing framework (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or the … data privacy and handling of sensitive data (e.g. GDPR). Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Understanding of the challenges faced in the design and development of a streaming data pipeline and the different options
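As a minimal sketch of the real-time ingestion pattern this listing describes, the snippet below reads a Kafka topic with Spark Structured Streaming and writes to a Delta sink. It assumes a Spark environment with the Kafka connector and Delta Lake available (as on Databricks); the broker address, topic name and paths are illustrative assumptions, not details from the listing.

```python
# Hypothetical sketch: Kafka -> Spark Structured Streaming -> Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Read raw events from a Kafka topic as an unbounded streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "booking-events")             # hypothetical topic
    .load()
    .select(col("value").cast("string").alias("payload"))
)

# Write the stream to a Delta table, checkpointing so the query can recover.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/booking-events")
    .start("/tmp/delta/booking_events")
)
query.awaitTermination()
```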
Databricks Must Have: Hands-on experience with at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically in big data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis … years' experience in a similar role. Ability to lead and mentor architects. Mandatory Skills [at least 2 hyperscalers]: GCP, AWS, Azure, big data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF
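For a taste of one of the ingestion services named above, here is a minimal sketch of publishing a message to Google Cloud Pub/Sub. It assumes the google-cloud-pubsub package and configured GCP credentials; the project and topic IDs are hypothetical.

```python
# Hypothetical sketch: publish one message to a Pub/Sub topic.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "ingest-events")  # hypothetical IDs

# publish() returns a future that resolves to the server-assigned message ID.
future = publisher.publish(topic_path, b'{"event": "demo"}')
print(future.result())
```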
communication skills and demonstrated ability to engage with business stakeholders and product teams. Experience in data modeling, data warehousing (e.g. Snowflake, AWS Glue, EMR, Apache Spark), and working with data pipelines. Leadership experience, whether technical mentorship, team leadership, or managing critical projects. Familiarity with Infrastructure as Code
Skills: 5+ years' experience with Python programming for data engineering tasks. Strong proficiency in SQL and database management. Hands-on experience with Databricks and Apache Spark. Familiarity with the Azure cloud platform and related services. Knowledge of data security best practices and compliance standards. Excellent problem-solving and communication
and operating large-scale data processing systems. Has successfully led data platform initiatives. A good understanding of data processing technologies and tools such as Apache Spark, data lakes, data warehousing and SQL databases. Proficiency in programming languages such as Python and CI/CD techniques to efficiently deliver change in
high-impact analytics and machine learning solutions. Key Responsibilities: Engineer and maintain modern data platforms with a strong focus on Databricks, including Delta Lake, Apache Spark, and MLflow. Build and optimise CI/CD pipelines, infrastructure-as-code (IaC), and cloud integrations (Azure preferred; AWS/GCP beneficial) … Qualifications: 7+ years of experience in platform engineering, DevOps, or cloud infrastructure, with a focus on data platforms. Advanced hands-on experience with Databricks, Spark, Delta Lake, and Python. Proficient with Azure services (Data Factory, Storage, DevOps) and experienced with IaC tools (Terraform, Bicep, ARM). Experience supporting data pipelines
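The Databricks stack this role names pairs Delta Lake storage with MLflow tracking. Below is a minimal sketch of that combination, assuming a Databricks-style environment where both are preconfigured; the table path, parameter and metric are placeholders, not details from the listing.

```python
# Hypothetical sketch: persist a Delta table and record the step in MLflow.
import mlflow
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Persist a toy DataFrame as a Delta table.
df = spark.range(100).withColumnRenamed("id", "example_id")
df.write.format("delta").mode("overwrite").save("/tmp/delta/example")  # hypothetical path

# Track the step as an MLflow run so it appears in the experiment UI.
with mlflow.start_run(run_name="example-ingest"):
    mlflow.log_param("row_count", df.count())
    mlflow.log_metric("demo_metric", 0.95)  # placeholder value
```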
Cassandra, and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency
data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of cloud security, networking, and cost optimization as it relates to data platforms. Experience in total cost
instruments, and risk assessment methodologies. Good experience in data analytics and interpreting large volumes of data. Programming experience with languages such as Python, Spark, R, SAS and SQL, and related tools such as SQL Server, Toad, PyCharm, Jupyter Notebook, Hue and Beeline. Experience with MS Suite and any presentation and
Warrington, England, United Kingdom Hybrid / WFH Options
UK-based consultancy seeking a skilled professional to shape data strategies, mentor dynamic teams, and deliver cutting-edge solutions. With hands-on expertise in Spark, SQL, and cloud platforms like Azure, you'll lead end-to-end projects, drive innovation, and collaborate with clients across industries. What You'll … in ETL, data modelling, and Azure Data Services. Experience in designing and implementing data pipelines, data lakes, and data warehouses. Hands-on experience with Apache Spark; bonus points for Microsoft Fabric. Any certifications are a bonus. Hybrid work, once a week in their central Manchester office. Learning
internal and external intelligence, fraud, and business data, supporting cybercrime campaign analysis. Practical experience with relational and non-relational databases, Python, Jupyter Notebook, Hadoop, Spark, and REST APIs. Knowledge of descriptive and prescriptive analytics, understanding data distributions, and machine learning algorithms such as regression, clustering, neural networks, and more.
Chester, England, United Kingdom Hybrid / WFH Options
secure or regulated environments. Ingest, process, index, and visualise data using the Elastic Stack (Elasticsearch, Logstash, Kibana). Build and maintain robust data flows with Apache NiFi. Implement best practices for handling sensitive data, including encryption, anonymisation, and access control. Monitor and troubleshoot real-time data pipelines to ensure high … experience as a Data Engineer in secure, regulated, or mission-critical environments. Proven expertise with the Elastic Stack (Elasticsearch, Logstash, Kibana). Solid experience with Apache NiFi. Strong understanding of data security, governance, and compliance requirements. Working knowledge of cloud platforms (AWS, Azure, or GCP), particularly in secure deployments. Experience … with a strong focus on data accuracy, quality, and reliability. Desirable (nice to have): background in defence, government, or highly regulated sectors. Familiarity with Apache Kafka, Spark, or Hadoop. Experience with Docker and Kubernetes. Use of monitoring/alerting tools such as Prometheus, Grafana, or ELK. Understanding of
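To ground the Elastic Stack indexing work this listing describes, here is a minimal sketch using the official Python client (elasticsearch-py 8.x). The endpoint, index name, and document fields are hypothetical assumptions rather than anything specified by the role.

```python
# Hypothetical sketch: index one document into Elasticsearch.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200")  # hypothetical secured endpoint

doc = {
    "source": "nifi-flow-01",  # hypothetical NiFi flow identifier
    "message": "pipeline heartbeat",
    "@timestamp": datetime.now(timezone.utc).isoformat(),
}

resp = es.index(index="pipeline-events", document=doc)
print(resp["result"])  # "created" on first insert, "updated" thereafter
```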
Chester, England, United Kingdom Hybrid / WFH Options
career progression opportunities across the Group, which includes several high-profile household names. What you'll bring: Experience with cloud and big data technologies (e.g. Spark/Databricks/Delta Lake/BigQuery). Familiarity with eventing technologies (e.g. Event Hubs/Kafka) and file formats such as Parquet/
supporting physical data store design; ensuring compliance with client policies; and leading Architecture Committee governance approvals. Experience with Data Products, Data Mesh, ETL, EDW, Spark, Hive, Kafka, Pub-Sub, Jira is required. Knowledge of GCP, Harness, Azure is expected. UK Mortgages background is a plus but not mandatory. Ability