in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management. Proficiency in Python, SQL, Scala, or Java. Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory. Strong understanding of data architecture principles, data modelling, and data governance. Experience with cloud-based data platforms, including Azure and …
while staying close to the code. Perfect if you want scope for growth without going “post-technical.” What you’ll do: Design and build modern data platforms using Databricks, Apache Spark, Snowflake, and cloud-native services (AWS, Azure, or GCP). Develop robust pipelines for real-time and batch data ingestion from diverse and complex sources. Model and … for: Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong understanding of data modelling, orchestration, and automation. Hands …
across sectors such as financial services, pharmaceuticals, energy, retail, healthcare, and manufacturing. The Role: Data Engineer (Databricks). We are seeking an experienced Data Engineer with strong expertise in Databricks, Apache Spark, Delta Lake, Python, and SQL to take a lead role in delivering innovative data projects. You will design and build scalable, cloud-based data pipelines on platforms … Apply modern engineering practices including CI/CD and automated testing. What You Bring: Proven experience as a Data Engineer working in cloud environments. Expert-level knowledge of Databricks, Apache Spark, and Delta Lake. Advanced Python and SQL programming skills. Strong understanding of CI/CD pipelines, automated testing, and data governance. Excellent communication and stakeholder engagement skills.
City of London, London, United Kingdom Hybrid/Remote Options
Omnis Partners
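As an aside on the stack this listing names (Databricks, Apache Spark, Delta Lake, Python): a minimal sketch of the kind of batch ingestion step it implies, assuming a Databricks-style environment where the Delta Lake libraries are available. All paths and column names are hypothetical placeholders, not taken from the listing.

```python
# Minimal sketch: batch ingestion into a Delta table with PySpark.
# Assumes a Databricks-style environment with Delta Lake available;
# paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw landing-zone files (schema inferred here for brevity;
# a production pipeline would pin an explicit schema).
raw = spark.read.json("/mnt/landing/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Append into a Delta table; an upsert would use MERGE instead.
cleaned.write.format("delta").mode("append").save("/mnt/lakehouse/orders")
```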
modelling tools, data warehousing, ETL processes, and data integration techniques. · Experience with at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g. Hadoop, Spark). · Strong knowledge of data workflow solutions like Azure Data Factory, Apache NiFi, Apache Airflow, etc. · Good knowledge of stream and batch processing solutions like Apache Flink and Apache Kafka. · Good knowledge of log management, monitoring, and analytics solutions like Splunk, Elastic Stack, New Relic, etc. Given that this is just a short snapshot of the role, we encourage you to apply even if you don't meet all the requirements listed above. We are looking for individuals who strive to make an impact …
two of the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD …
have framework experience within either Flask, Tornado, or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow, or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience, such as …
City of London, London, United Kingdom Hybrid/Remote Options
Solirius Reply
platform. Candidate Profile: Proven experience as a Data Engineer, with strong expertise in designing and managing large-scale data systems. Hands-on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. …
City of London, London, United Kingdom Hybrid/Remote Options
Tata Consultancy Services
with AWS Cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, and CloudWatch; AWS SDKs, Boto3, and serverless architecture patterns. Strong programming skills in Python and Apache Spark. Proven experience in Snowflake data engineering, including: Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization; integration with AWS services and orchestration tools. Expertise in data integration patterns …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
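To illustrate the Boto3 and Glue skills the listing above names: a hedged sketch of triggering an AWS Glue job from Python and polling until it completes. The job name and region are hypothetical placeholders; start_job_run and get_job_run are standard Glue client methods.

```python
# Sketch: start an AWS Glue job run via Boto3 and wait for a terminal
# state. Job name and region are hypothetical; error handling is
# reduced to the essentials.
import time
import boto3

glue = boto3.client("glue", region_name="eu-west-2")

run = glue.start_job_run(JobName="orders-to-redshift")  # hypothetical job name
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="orders-to-redshift", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # poll at a modest interval

print(f"Glue run {run_id} finished with state {state}")
```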
pipelines. Understanding of data modelling, data warehousing concepts, and distributed computing. Familiarity with CI/CD, version control, and DevOps practices. Nice-to-Have: Experience with streaming technologies (e.g., Spark Structured Streaming, Event Hub, Kafka). Knowledge of MLflow, Unity Catalog, or advanced Databricks features. Exposure to Terraform or other IaC tools. Experience working in Agile/Scrum environments.
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
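The nice-to-have streaming experience above might look like the following minimal Spark Structured Streaming sketch, which reads a Kafka topic and appends it to a Delta table. Broker address, topic, and paths are hypothetical, and the Kafka and Delta connectors are assumed to be available on the cluster.

```python
# Sketch: Structured Streaming from Kafka into a Delta table.
# Assumes the Kafka and Delta connectors are on the cluster
# (e.g. Databricks); broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers key/value as binary; cast for downstream use.
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")  # exactly-once bookkeeping
    .outputMode("append")
    .start("/mnt/lakehouse/events")
)

query.awaitTermination()
```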
driven decision-making. Responsibilities for the Senior Data Engineer: Design, build, and maintain scalable data pipelines and architectures, ensuring reliability, performance, and best-in-class engineering standards. Leverage Databricks, Spark, and modern cloud platforms (Azure/AWS) to deliver clean, high-quality data for analytics and operational insights. Lead by example on engineering excellence, mentoring junior engineers and driving … customer data. Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation. Essential Skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming. Strong programming skills in Python with experience in software engineering principles, version control, unit testing …
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
skills, and the ability to think critically and analytically. Strong experience with documentation and data dictionaries. Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark. Excellent communication skills to effectively collaborate with cross-functional teams and present insights to business stakeholders. Please send me a copy of your CV if you're …
City of London, London, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
data modeling (star schema, snowflake schema). Version Control: Practical experience with Git (branching, merging, pull requests). Preferred Qualifications (A Plus): Experience with a distributed computing framework like Apache Spark (using PySpark). Familiarity with cloud data services (AWS S3/Redshift, Azure Data Lake/Synapse, or Google BigQuery/Cloud Storage). Exposure to workflow orchestration tools (Apache Airflow, Prefect, or Dagster). Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
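To illustrate the workflow-orchestration exposure this listing asks for: a small sketch of an Airflow DAG using the TaskFlow API (Airflow 2.x), expressing an extract, transform, load dependency chain. The DAG name, schedule, and task bodies are hypothetical placeholders.

```python
# Sketch of a minimal Airflow DAG (TaskFlow API, Airflow 2.x).
# Names, schedule, and task bodies are illustrative placeholders.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_pipeline():
    @task
    def extract() -> list[dict]:
        # In practice: pull from an API, object store, or database.
        return [{"sale_id": 1, "amount": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Keep only valid rows; real logic would be richer.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # In practice: write to a warehouse table.
        print(f"loaded {len(rows)} rows")

    # Chaining the calls establishes the task dependencies.
    load(transform(extract()))

daily_sales_pipeline()
```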
… Experience with orchestration tools (Airflow/Prefect) and cloud platforms (AWS preferred). Proven experience handling large-scale biological or multi-omics datasets. Bonus: exposure to distributed computing (Spark, Databricks, Kubernetes) or data cataloguing systems. You Are: Curious and scientifically minded, with a strong understanding of biological data workflows. Collaborative and able to communicate effectively across computational and …
London (Westminster), South East England, United Kingdom Hybrid/Remote Options
Lloyds Bank
through automation, CI/CD, and modern cloud engineering practices. Wherever you land, you'll be working with some of the biggest datasets in the UK — using everything from Spark and statistical methods to domain knowledge and emerging GenAI applications. The work you could be doing: • Design and deploy machine learning models for fraud detection, credit risk, customer segmentation … and behavioural analytics using scalable frameworks like TensorFlow, PyTorch, and XGBoost. • Engineer robust data pipelines and ML workflows using Apache Spark, Vertex AI, and CI/CD tooling to ensure seamless model delivery and monitoring. • Apply advanced techniques in deep learning, natural language processing (NLP), and statistical modelling to extract insights and drive decision-making. • Explore and evaluate …
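As a sketch of the fraud-detection modelling described above: training a gradient-boosted classifier with XGBoost on synthetic data. Everything here is illustrative; a real pipeline would add feature engineering, proper imbalance handling, and model monitoring.

```python
# Illustrative only: gradient-boosted classifier of the kind used for
# fraud detection, trained on synthetic data with a rare positive class.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                          # synthetic features
y = (X[:, 0] + rng.normal(size=5000) > 2).astype(int)    # ~8% positives

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    # Reweight the rare class; common first step for imbalanced fraud data.
    scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1),
    eval_metric="auc",
)
model.fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```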
data modelling, data warehousing, and ETL development. Hands-on experience with Azure Data Factory, Azure Data Lake, and Azure SQL Database. Exposure to big data technologies such as Hadoop, Spark, and Databricks. Experience with Azure Synapse Analytics or Cosmos DB. Familiarity with data governance frameworks (e.g., GDPR, HIPAA). Experience implementing CI/CD pipelines using Azure DevOps or …
City of London, London, United Kingdom Hybrid/Remote Options
Higher - AI recruitment
Python and SQL. Hands-on experience with Data Architecture, including: cloud platforms and orchestration tools (e.g. Dagster, Airflow); AI/MLOps: model deployment, monitoring, lifecycle management; Big Data Processing: Spark, Databricks, or similar. Bonus: Knowledge Graph engineering, graph databases, ontologies. Located in London. And ideally you... Are a zero-to-one builder who thrives on autonomy and ambiguity. Are …