while staying close to the code. Perfect if you want scope for growth without going “post-technical.” What you’ll do: Design and build modern data platforms using Databricks, Apache Spark, Snowflake, and cloud-native services (AWS, Azure, or GCP). Develop robust pipelines for real-time and batch data ingestion from diverse and complex sources. Model and … What we’re looking for: Solid experience as a Senior/Lead Data Engineer in complex enterprise environments. Strong coding skills in Python (Scala or functional languages a plus). Expertise with Databricks, Apache Spark, and Snowflake (HDFS/HBase also useful). Experience integrating large, messy datasets into reliable, scalable data products. Strong understanding of data modelling, orchestration, and automation. Hands …
in Microsoft Fabric and Databricks, including data pipeline development, data warehousing, and data lake management. Proficiency in Python, SQL, Scala, or Java. Experience with data processing frameworks such as Apache Spark, Apache Beam, or Azure Data Factory. Strong understanding of data architecture principles, data modelling, and data governance. Experience with cloud-based data platforms, including Azure and …
generation data platform at FTSE Russell — and we want you to shape it with us. Your role will involve:
• Designing and developing scalable, testable data pipelines using Python and Apache Spark (a minimal sketch follows this listing)
• Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3
• Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing
• Contributing to the development of a lakehouse architecture using Apache Iceberg
• Collaborating with business teams to translate requirements into data-driven solutions
• Building observability into data flows and implementing basic quality checks
• Participating in code reviews, pair programming, and architecture discussions
• Continuously learning about the financial indices domain and sharing insights with the team
WHAT YOU'LL BRING: … (ideally with type hints, linters, and tests like pytest). Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines. Has experience with or is eager to learn Apache Spark for large-scale data processing. Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR). Enjoys learning the business context and working closely with stakeholders …
City of London, London, United Kingdom Hybrid/Remote Options
N Consulting Global
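For context on what “scalable, testable data pipelines using Python and Apache Spark” tends to look like in practice, here is a minimal, hedged sketch. The function and path names (add_settlement_date, the bucket paths) are invented for illustration, and a standard PySpark 3.x environment is assumed; keeping the transform a pure DataFrame-to-DataFrame function is what makes it easy to unit-test with pytest, as the listing asks for.

```python
# A minimal sketch only; all names and paths are hypothetical.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def add_settlement_date(trades: DataFrame) -> DataFrame:
    """Pure transform: testable with a tiny in-memory DataFrame, no S3 needed."""
    return trades.withColumn("settlement_date", F.date_add(F.col("trade_date"), 2))

def run(spark: SparkSession, source: str, target: str) -> None:
    trades = spark.read.parquet(source)  # e.g. s3://bucket/raw/trades/ (placeholder)
    add_settlement_date(trades).write.mode("overwrite").parquet(target)

# In a pytest test, the transform is exercised without any cloud access:
# df = spark.createDataFrame([("2024-01-02",)], ["trade_date"]) \
#           .withColumn("trade_date", F.to_date("trade_date"))
# assert "settlement_date" in add_settlement_date(df).columns
```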
across sectors such as financial services, pharmaceuticals, energy, retail, healthcare, and manufacturing. The Role: Data Engineer (Databricks). We are seeking an experienced Data Engineer with strong expertise in Databricks, Apache Spark, Delta Lake, Python, and SQL to take a lead role in delivering innovative data projects. You will design and build scalable, cloud-based data pipelines on platforms … Apply modern engineering practices including CI/CD and automated testing. What You Bring: Proven experience as a Data Engineer working in cloud environments. Expert-level knowledge of Databricks, Apache Spark, and Delta Lake. Advanced Python and SQL programming skills. Strong understanding of CI/CD pipelines, automated testing, and data governance. Excellent communication and stakeholder engagement skills. …
City of London, London, United Kingdom Hybrid/Remote Options
Omnis Partners
experience in a leadership or technical lead role, with official line management responsibility. Strong experience with modern data stack technologies, including Python, Snowflake, AWS (S3, EC2, Terraform), Airflow, dbt, Apache Spark, Apache Iceberg, and Postgres. Skilled in balancing technical excellence with business priorities in a fast-paced environment. Strong communication and stakeholder management skills, able to translate …
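As an illustration of how the Airflow and dbt pieces of a stack like the one above usually fit together, a minimal sketch follows. The DAG id, schedule, and shell commands are placeholders, and it assumes Airflow 2.4+ (which accepts the schedule argument):

```python
# Hedged sketch of a daily extract-then-transform workflow; names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_warehouse_build",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_to_s3",
        bash_command="python extract.py",               # placeholder extract step
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt",  # path is illustrative
    )
    extract >> transform  # run the dbt models only after extraction succeeds
```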
Greetings! Adroit People is currently hiring. Title: Senior AWS Data Engineer. Location: London, UK. Work mode: hybrid, 3 days/week. Duration: 12-month FTC. Keywords: AWS, Python, Apache Spark, ETL. Job spec: We are building the next-generation data platform at FTSE Russell, and we want you to shape it with us. Your role will involve: Designing and developing scalable, testable data pipelines using Python and Apache Spark; orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3; applying modern software engineering practices: version control, CI/CD, modular design, and automated testing; contributing to the development of a lakehouse architecture using Apache Iceberg; collaborating with business teams to translate requirements …
City of London, London, United Kingdom Hybrid/Remote Options
Solirius Reply
have framework experience with Flask, Tornado, or Django, plus Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow, or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience, such as …
two of the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices: code reviews, testing frameworks, CI/CD …
City of London, London, United Kingdom Hybrid/Remote Options
Tata Consultancy Services
with AWS cloud-native data platforms, including: AWS Glue, Lambda, Step Functions, Athena, Redshift, S3, CloudWatch; AWS SDKs, Boto3, and serverless architecture patterns. Strong programming skills in Python and Apache Spark. Proven experience in Snowflake data engineering, including Snowflake SQL, Snowpipe, Streams & Tasks, and performance optimization; integration with AWS services and orchestration tools. Expertise in data integration patterns …
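For readers unfamiliar with the Snowflake features named above: Streams capture row-level changes on a table, and Tasks run SQL on a schedule. A hedged sketch of wiring the two together from Python follows; account details and object names are placeholders, and it assumes the snowflake-connector-python package:

```python
# Illustrative only; credentials and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account identifier
    user="etl_user",        # placeholder user
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
# A stream records inserts/updates/deletes on the source table...
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE orders")
# ...and a task periodically moves those changes downstream.
cur.execute("""
    CREATE TASK IF NOT EXISTS merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO orders_clean SELECT * FROM orders_stream
""")
cur.execute("ALTER TASK merge_orders RESUME")  # tasks start suspended
conn.close()
```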
… Experience with orchestration tools (Airflow/Prefect) and cloud platforms (AWS preferred). Proven experience handling large-scale biological or multi-omics datasets. Bonus: exposure to distributed computing (Spark, Databricks, Kubernetes) or data cataloguing systems. You are: curious and scientifically minded, with a strong understanding of biological data workflows; collaborative and able to communicate effectively across computational and …
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
driven decision-making. Responsibilities for the Senior Data Engineer: Design, build, and maintain scalable data pipelines and architectures, ensuring reliability, performance, and best-in-class engineering standards. Leverage Databricks, Spark, and modern cloud platforms (Azure/AWS) to deliver clean, high-quality data for analytics and operational insights. Lead by example on engineering excellence, mentoring junior engineers and driving … customer data. Continuously improve existing systems, introducing new technologies and methodologies that enhance efficiency, scalability, and cost optimisation. Essential skills for the Senior Data Engineer: Proficient with Databricks and Apache Spark, including performance tuning and advanced concepts such as Delta Lake and streaming. Strong programming skills in Python with experience in software engineering principles, version control, unit testing …
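The “Delta Lake and streaming” combination mentioned above commonly appears as a Structured Streaming job writing to a Delta table. A minimal sketch under stated assumptions: the paths and schema are placeholders, and a Spark session with the delta-spark extensions already configured (as it is on Databricks) is assumed.

```python
# Hedged sketch of a streaming ingest into Delta; paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events_to_delta").getOrCreate()

events = (
    spark.readStream.format("json")
    # streaming file sources require an explicit schema; this one is invented
    .schema("event_id STRING, ts TIMESTAMP, payload STRING")
    .load("/mnt/raw/events/")  # placeholder landing zone
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events/")  # exactly-once bookkeeping
    .outputMode("append")
    .start("/mnt/delta/events/")  # placeholder Delta table path
)
query.awaitTermination()
```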
Redshift). Strong knowledge of ETL/ELT processes and tools. Solid experience of utilising Power BI or similar visualisation tools. Experience working with big data technologies and frameworks (e.g., Spark). Excellent problem-solving skills and a proactive approach to data engineering challenges. Strong communication skills with the ability to articulate complex technical concepts to non-technical stakeholders. Desirable skills: …
platform. Candidate Profile: Proven experience as a Data Engineer, with strong expertise in designing and managing large-scale data systems. Hands-on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. …
London, South East, England, United Kingdom Hybrid/Remote Options
CV TECHNICAL LTD
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
skills, and the ability to think critically and analytically. Extensive experience with documentation and data dictionaries. Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark. Excellent communication skills to effectively collaborate with cross-functional teams and present insights to business stakeholders. Please send a copy of your CV if you're …
City of London, London, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
Docker/Kubernetes), and distributed systems. Experience leading cross-functional teams across backend, frontend, DevOps, and data. Strong background in data pipelines, APIs, and real-time data processing (Kafka, Spark, GraphDBs, etc.). Familiarity with AI/ML integration, large-scale data architecture, and analytics platforms is a strong plus. Strong communication and stakeholder management skills, able to work …
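As a small illustration of the “real-time data processing (Kafka …)” requirement above, here is a hedged consumer sketch. Broker address, topic, and group id are placeholders, and it assumes the confluent-kafka Python client:

```python
# Minimal Kafka consume loop; all connection details are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "analytics-consumers",      # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])              # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)    # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # A real pipeline would deserialize and hand off to Spark or another sink.
        print(msg.key(), msg.value())
finally:
    consumer.close()
```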