Engineer with an Azure focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Skills/Experience Design and … build high-performance data pipelines: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Develop and maintain secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Build and deploy AI/ML models: Integrate Machine … and best practices with a focus on how AI can support you in your delivery work Solid experience as a Data Engineer or similar role. Proven expertise in Databricks, Apache Spark, and data pipeline development, and a strong understanding of data warehousing concepts and practices. Experience with the Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks and Azure More ❯
an Azure and Databricks focus, you will be an integral part of our team dedicated to building scalable and secure data platforms. You will leverage your expertise in Databricks, Apache Spark, and Azure to design, develop, and implement data warehouses, data lakehouses, and AI/ML models that fuel our data-driven operations. Duties Design and build high-performance data platforms: Utilize Databricks and Apache Spark to extract, transform, and load data into Azure Data Lake Storage and other Azure services. Design and oversee the delivery of secure data warehouses and data lakehouses: Implement data models, data quality checks, and governance practices to ensure reliable and accurate data. Ability to design, build and deploy AI/… to ensure successful data platform implementations. Your Skills and Experience Solid experience as a Data Architect designing, developing and implementing Databricks solutions Proven expertise in Databricks, Apache Spark, and data platforms with a strong understanding of data warehousing concepts and practices. Experience with the Microsoft Azure cloud platform, including Azure Data Lake Storage, Databricks, and Azure More ❯
Greater London, England, United Kingdom Hybrid/Remote Options
CommuniTech Recruitment Group
Data Developer. C# + (either Clickhouse, SingleStore, Rockset, TimescaleDB) + open-standard data lake (e.g. Iceberg or Delta tables, Apache Spark, column store). £700/Day. 6 month rolling. Hybrid. My client is a top tier commodities trading firm that is looking for a strong C# Data Engineer. The key things in summary are: Strong experience of .NET … Have you worked with an analytical database such as Clickhouse, SingleStore, Rockset, TimescaleDB? Have you got any experience working with an open-standard data lake (e.g. Iceberg or Delta tables, Apache Spark, column store)? Have you got any experience processing (e.g. ingesting into a database) a large amount of data (in batches would be fine)? Required Skills and Experience More ❯
Luton, England, United Kingdom Hybrid/Remote Options
easyJet
Job Accountabilities Develop robust, scalable data pipelines to serve the easyJet analyst and data science community. Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, Spark API, Python, SQL Server, Scala. Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning models and algorithms aimed at … indexing, partitioning. Hands-on IaC development experience with Terraform or CloudFormation. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam) Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data … e.g. access management, data privacy, handling of sensitive data (e.g. GDPR) Desirable Skills Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Understanding of the challenges faced in the design and development of a streaming data pipeline and the different options for processing unbounded data (pubsub More ❯
City of London, London, United Kingdom Hybrid/Remote Options
N Consulting Global
generation data platform at FTSE Russell — and we want you to shape it with us. Your role will involve: • Designing and developing scalable, testable data pipelines using Python and Apache Spark • Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 • Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing • Contributing to the development of a lakehouse architecture using Apache Iceberg • Collaborating with business teams to translate requirements into data-driven solutions • Building observability into data flows and implementing basic quality checks • Participating in code reviews, pair programming, and architecture discussions • Continuously learning about the financial indices domain and sharing insights with the team WHAT YOU'LL BRING … ideally with type hints, linters, and tests like pytest) Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines Has experience with or is eager to learn Apache Spark for large-scale data processing Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR) Enjoys learning the business context and working closely with stakeholders More ❯
experience in a leadership or technical lead role, with official line management responsibility. Strong experience with modern data stack technologies, including Python, Snowflake, AWS (S3, EC2, Terraform), Airflow, dbt, Apache Spark, Apache Iceberg, and Postgres. Skilled in balancing technical excellence with business priorities in a fast-paced environment. Strong communication and stakeholder management skills, able to translate More ❯
Luton, England, United Kingdom Hybrid/Remote Options
easyJet
field. Technical Skills Required • Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). • Experience with Apache Spark or any other distributed data programming frameworks. • Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. • Experience with cloud infrastructure like AWS … Skills • Hands-on development experience in an airline, e-commerce or retail industry • Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. • Experience implementing end-to-end monitoring, quality checks, lineage tracking and automated alerts to ensure reliable and trustworthy data across the platform. • Experience of More ❯
City of London, London, United Kingdom Hybrid/Remote Options
Tata Consultancy Services
with AWS Cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, CloudWatch AWS SDKs, Boto3, and serverless architecture patterns Strong programming skills in Python and Apache Spark Proven experience in Snowflake data engineering, including: Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization Integration with AWS services and orchestration tools Expertise in data integration patterns More ❯
Coventry, West Midlands, United Kingdom Hybrid/Remote Options
Coventry Building Society
Experience with tools like AWS (S3, Glue, Redshift, SageMaker) or other cloud platforms. Familiarity with Docker, Terraform, GitHub Actions, and Vault for managing secrets. Experience coding in SQL, Python, Spark, or Scala to work with data. Experience with databases used in Data Warehousing, Data Lakes, and Lakehouse setups. You know how to work with both structured and unstructured data. More ❯
Manchester, Lancashire, United Kingdom Hybrid/Remote Options
CHEP UK Ltd
plus work experience BS & 5+ years of work experience MS & 4+ years of work experience Proficient with machine learning and statistics Proficient with Python, deep learning frameworks, Computer Vision, Spark Have produced production level algorithms Proficient in researching, developing, synthesizing new algorithms and techniques Excellent communication skills Desirable Qualifications Master's or PhD level degree 5+ years of work More ❯
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
driven decision-making. Responsibilities for the Senior Data Engineer: Design, build, and maintain scalable data pipelines and architectures, ensuring reliability, performance, and best-in-class engineering standards Leverage Databricks, Spark, and modern cloud platforms (Azure/AWS) to deliver clean, high-quality data for analytics and operational insights Lead by example on engineering excellence, mentoring junior engineers and driving … customer data Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming Strong programming skills in Python with experience in software engineering principles, version control, unit testing More ❯
London, South East, England, United Kingdom Hybrid/Remote Options
CV TECHNICAL LTD
platform. Candidate Profile: Proven experience as a Data Engineer, with strong expertise in designing and managing large-scale data systems. Hands-on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. More ❯
Greater Manchester, England, United Kingdom Hybrid/Remote Options
Searchability®
Development Opportunities Enhanced Maternity & Paternity Charity Volunteer Days Cycle to work scheme And More… DATA ENGINEER – ESSENTIAL SKILLS Proven experience building data pipelines using Databricks. Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming. Experience working with Kafka (MSK) and handling real-time data. Good knowledge of Delta Lake/Delta Live Tables More ❯
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
skills, and the ability to think critically and analytically Strong experience in documentation and data dictionaries Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark Excellent communication skills to effectively collaborate with cross-functional teams and present insights to business stakeholders Please can you send me a copy of your CV if you're More ❯
City of London, London, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
Skills Strong experience with Azure data services (Data Factory, Synapse, Blob Storage, etc.). Proficiency in SQL for data manipulation, transformation, and performance optimisation. Hands-on experience with Databricks (Spark, Delta Lake, notebooks). Solid understanding of data architecture principles and cloud-native design. Experience working in consultancy or client-facing roles is highly desirable. Familiarity with CI/ More ❯
Leeds, England, United Kingdom Hybrid/Remote Options
KPMG UK
having resided in the UK for at least the past 5 years and being a UK national or dual UK national. Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective - Oracle, MySQL, MongoDB etc. Experience with the design, build and maintenance of data pipelines and infrastructure More ❯
Austin, Texas, United States Hybrid/Remote Options
OSI Engineering
Big Data & Cloud Infrastructure Knowledge of building and deploying cloud-based applications Hands-on experience with cloud data platforms (AWS/GCP/Azure) Proficiency with big data technologies (Spark, Kafka, or similar streaming platforms) Experience with data warehouses (Snowflake, BigQuery, Redshift) and data lakes Knowledge of containerization (Docker/Kubernetes) and infrastructure as code Preferred Experience Experience building More ❯
Sheffield, South Yorkshire, England, United Kingdom Hybrid/Remote Options
Vivedia Ltd
/ELT pipelines, data modeling, and data warehousing. Experience with cloud platforms (AWS, Azure, GCP) and tools like Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (Kafka, Spark Streaming, Flink) is a plus. Tools & Frameworks: Airflow, dbt, Prefect, CI/CD pipelines, Terraform. Mindset: Curious, data-obsessed, and driven to create meaningful business impact. Soft Skills: Excellent More ❯
Reigate, England, United Kingdom Hybrid/Remote Options
esure Group
exposure to cloud-native data infrastructures (Databricks, Snowflake) especially in AWS environments is a plus Experience in building and maintaining batch and streaming data pipelines using Kafka, Airflow, or Spark Familiarity with governance frameworks, access controls (RBAC), and implementation of pseudonymisation and retention policies Exposure to enabling GenAI and ML workloads by preparing model-ready and vector-optimised datasets More ❯