data pipelines to serve the easyJet analyst and data science community. Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, the Spark API, Python, SQL Server and Scala. Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning … development experience with Terraform or CloudFormation. Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or other distributed data programming frameworks (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or the … data privacy, handling of sensitive data (e.g. GDPR). Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Understanding of the challenges faced in the design and development of a streaming data pipeline and the different options …
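For context on what this kind of streaming requirement typically involves, here is a minimal, illustrative sketch (not code from the employer) of a Spark Structured Streaming job ingesting Kafka events into a Delta table. The broker address, topic name, event schema, and paths are all hypothetical placeholders.

```python
# Minimal, hypothetical sketch of Kafka-to-Delta ingestion with Spark
# Structured Streaming. The "kafka" source requires the spark-sql-kafka
# connector (available by default on Databricks), and the Delta sink
# requires Delta Lake.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Hypothetical event schema, for illustration only.
schema = StructType([
    StructField("booking_id", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "bookings")                   # placeholder topic
    .load()
    # Kafka delivers raw bytes; decode the value and parse the JSON payload.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/bookings")  # placeholder
    .outputMode("append")
    .start("/tmp/delta/bookings")  # placeholder table path
)
```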
actively contribute throughout the Agile development lifecycle, participating in planning, refinement, and review ceremonies. Key Responsibilities: Develop and maintain ETL pipelines in Databricks, leveraging Apache Spark and Delta Lake. Design, implement, and optimize data transformations and treatments for structured and unstructured data. Work with Hive Metastore and … Unity Catalog for metadata management and access control. Implement State Store mechanisms for maintaining stateful processing in Spark Structured Streaming. Handle DataFrames efficiently for large-scale data processing and analytics. Schedule, monitor, and troubleshoot Databricks pipelines for automated workflow execution. Enable pause/resume functionality in pipelines based … technical impact assessments and rationales. Work within GitLab repository structures and adhere to project-specific processes. Required Skills and Experience: Strong expertise in Databricks, Apache Spark, and Delta Lake. Experience with Hive Metastore and Unity Catalog for data governance. Proficiency in Python, SQL, Scala, or other relevant …
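To make the State Store and pause/resume responsibilities above concrete, the following is an illustrative sketch only, assuming a streaming DataFrame with an event-time column: a watermarked window count whose intermediate state Spark keeps in its state store, with a checkpoint location that lets the pipeline be stopped and resumed. The built-in rate source stands in for a real stream; paths are placeholders.

```python
# Illustrative sketch (not the employer's code) of stateful processing in
# Spark Structured Streaming. The windowed count's intermediate state lives
# in Spark's state store and survives restarts via the checkpoint.
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("stateful-counts").getOrCreate()

# Built-in "rate" test source as a stand-in for a real event stream.
events = (
    spark.readStream.format("rate").load()
    .withColumnRenamed("timestamp", "event_time")
)

counts = (
    events
    # The watermark bounds how long per-window state is retained in the store.
    .withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "5 minutes"))
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    # Checkpointing persists state store contents, enabling stop/resume.
    .option("checkpointLocation", "/tmp/checkpoints/counts")  # placeholder path
    .start()
)
```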
Databricks Must Have: Hands-on experience on at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub Sub/Kinesis … years' experience in a similar role. Ability to lead and mentor the architects. Mandatory Skills [at least 2 hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF …
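As a reference point for the Big Data processing services named above, a minimal Apache Beam pipeline in Python looks like the following word count. The file paths are placeholders, and the same pipeline can target Dataflow by supplying the appropriate runner options.

```python
# Minimal Apache Beam word-count pipeline, shown purely as an illustration of
# the distributed processing frameworks the listing names. Paths are
# placeholders; by default this runs on the local DirectRunner.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("input.txt")        # placeholder path
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda word, n: f"{word}\t{n}")
        | "Write" >> beam.io.WriteToText("counts")           # placeholder prefix
    )
```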
a team. Preferred Qualifications: Master's degree in Computer Science, Data Science, or a related field. Experience with big data technologies such as Hadoop, Spark, or Kafka. Experience with data visualization tools such as Power BI, Tableau, or Qlik. Certifications in Azure data and AI technologies. Benefits: We offer …
communication skills and demonstrated ability to engage with business stakeholders and product teams. Experience in data modeling and data warehousing (e.g., Snowflake, AWS Glue, EMR, Apache Spark), and working with data pipelines. Leadership experience, whether technical mentorship, team leadership, or managing critical projects. Familiarity with Infrastructure as Code …
Skills: 5+ years' experience with Python programming for data engineering tasks. Strong proficiency in SQL and database management. Hands-on experience with Databricks and Apache Spark. Familiarity with the Azure cloud platform and related services. Knowledge of data security best practices and compliance standards. Excellent problem-solving and communication …
Reading, England, United Kingdom Hybrid / WFH Options
Areti Group | B Corp™
efficient and performant queries. • Skilled in optimizing data ingestion and query performance for MSSQL or other RDBMS. • Familiar with data processing frameworks such as Apache Spark. • Highly analytical and tenacious in solving complex problems. Seniority level: Not Applicable. Employment type: Full-time. Job function: …
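One common technique behind optimizing data ingestion for MSSQL in a Spark context is partitioning a JDBC read so it runs in parallel. The sketch below assumes a SQL Server table with a numeric key; the connection details, table, column, and bounds are hypothetical, and the Microsoft JDBC driver must be on the Spark classpath.

```python
# Hedged sketch of a partitioned Spark JDBC read from SQL Server. All
# connection details are placeholders, not a real deployment.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mssql-ingest").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://host:1433;databaseName=sales")  # placeholder
    .option("dbtable", "dbo.orders")                                 # placeholder
    .option("user", "etl_user")                                      # placeholder
    .option("password", "***")
    # Split the read into 8 range queries over the key, so ingestion is not
    # bottlenecked on a single connection.
    .option("partitionColumn", "order_id")
    .option("lowerBound", "1")
    .option("upperBound", "10000000")
    .option("numPartitions", "8")
    .load()
)
```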
MLOps, RAG, APIs, and real-time data integration. Strong background in working with cloud platforms (GCP, AWS, Azure) and big data technologies (e.g., Kafka, Spark, Snowflake, Databricks). Demonstrated ability to work across matrixed organizations and partner effectively with IT, security, and business stakeholders. Experience collaborating with third-party tech …
data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of cloud security, networking, and cost optimization as they relate to data platforms. Experience in total cost …
GPT), Retrieval-Augmented Generation (RAG). Technical Skills: Deep knowledge of Python (including libraries such as scikit-learn, Pandas, NumPy). Experience with Jupyter, Spark/Scala or R, PostgreSQL, and the ELK stack. Proficiency in Java, Kubernetes, Docker, and microservices-oriented architecture. Familiarity with MLOps practices and collaborative tools …
SQL. Performance optimisation of data ingestion and query performance for MSSQL (or transferable skills from another RDBMS). Familiar with data processing frameworks such as Apache Spark. Experience of working with terabyte data sets and managing rapid data growth. The benefits at APF: At AllPoints Fibre, we're all about …
Reading, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
Statistics, Computer Science or a related field. Deep technical knowledge of ML frameworks (TensorFlow, PyTorch), cloud platforms (AWS, GCP, Azure), and big data tools (Spark, Hadoop). Demonstrated success in building business-critical, real-time algorithmic solutions. Strong communication and stakeholder engagement skills – translating complexity into business value. A …
Reading, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
secure or regulated environments. Ingest, process, index, and visualise data using the Elastic Stack (Elasticsearch, Logstash, Kibana). Build and maintain robust data flows with Apache NiFi. Implement best practices for handling sensitive data, including encryption, anonymisation, and access control. Monitor and troubleshoot real-time data pipelines to ensure high … experience as a Data Engineer in secure, regulated, or mission-critical environments. Proven expertise with the Elastic Stack (Elasticsearch, Logstash, Kibana). Solid experience with Apache NiFi. Strong understanding of data security, governance, and compliance requirements. Working knowledge of cloud platforms (AWS, Azure, or GCP), particularly in secure deployments. Experience … with a strong focus on data accuracy, quality, and reliability. Desirable (Nice to Have): Background in defence, government, or highly regulated sectors. Familiarity with Apache Kafka, Spark, or Hadoop. Experience with Docker and Kubernetes. Use of monitoring/alerting tools such as Prometheus, Grafana, or the ELK stack. Understanding of …
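As an illustration of the Elastic Stack ingestion work described above, here is a hedged sketch using the official Python client's bulk helper. The endpoint, index name, and documents are placeholders, and authentication/TLS configuration is omitted for brevity.

```python
# Illustrative only: bulk-indexing records into Elasticsearch with the
# official Python client. All values below are placeholders.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("https://localhost:9200")  # placeholder endpoint

docs = [
    {"event": "login",  "user": "u1", "ts": "2024-01-01T00:00:00Z"},
    {"event": "logout", "user": "u1", "ts": "2024-01-01T00:05:00Z"},
]

# helpers.bulk wraps the _bulk API, which is far faster than per-document
# index calls when loading data at volume.
actions = ({"_index": "audit-events", "_source": d} for d in docs)
helpers.bulk(es, actions)
```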
career progression opportunities within the Group, including several high-profile household names. What you'll bring: Experience with cloud and big data technologies (e.g., Spark, Databricks, Delta Lake, BigQuery). Familiarity with eventing technologies (e.g., Event Hubs, Kafka) and file formats such as Parquet, Delta, Iceberg. Interested in learning more …
data transformation program. The successful candidate will be responsible for defining and migrating a host of application suites to a Kubernetes infrastructure environment. Requirements: Spark S3 Engine; Terraform; Ansible; CI/CD; Hadoop; Linux/RHEL – on-prem background/container management; Grafana or Elasticsearch – for observability. Desirable: …
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring … Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS: Lead Data Engineer, Senior Lead Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, On-Prem, Cloud, ETL, Azure Data Fabric, ADF, Databricks, Azure Data, Delta Lake, Data Lake. Please …
Employment Type: Permanent
Salary: £75,000 - £80,000/annum + Pension, Good Holiday, Healthcare