London, South East England, United Kingdom Hybrid / WFH Options
eTeam
Design and implementation of data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Skillset Required: Experience in Python, PySpark and SQL; experience with AWS is a plus. Strong proficiency in Core Java, including Collections, Concurrency, and Memory Management. Proficient in version control systems such as Git, GitLab, or …
business analytics Practical experience in coding languages, e.g. Python, R, Scala, etc. (Python preferred). Proficiency in database technologies, e.g. SQL, ETL, NoSQL, DW, and big data technologies, e.g. PySpark, Hive, etc. Experienced working with structured and also unstructured data, e.g. text, PDFs, JPGs, call recordings, video, etc. Knowledge of machine learning modelling techniques and how to fine-tune …
exchange platforms. Knowledge of dynamic pricing models. Experience with Databricks and using it for scalable data processing and machine learning workflows. Experience working with big data technologies (e.g., Spark, PySpark). Experience with online market research methods/products. Additional Information Our Values: Collaboration is our superpower. We uncover rich perspectives across the world. Success happens together. We deliver …
You should be experienced in a data engineering role, demonstrating a strong track record of designing, building, and maintaining data pipelines and data architectures.
Required Skills
- Proficiency in Python, PySpark, SQL for data manipulation and querying.
- Experience with containerisation technologies, specifically Kubernetes and Docker.
- Proven experience in designing and implementing data pipelines, working with big data technologies and architectures. …
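The core skills in the listing above (Python and SQL for data manipulation and querying) can be illustrated with a minimal, self-contained sketch. This uses Python's built-in sqlite3 module purely as a stand-in for PySpark or a warehouse engine, and the table and column names are invented for illustration:

```python
import sqlite3

# In-memory database as a stand-in for a real warehouse (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("u1", 10.0, "ok"), ("u1", 5.5, "ok"), ("u2", 3.0, "failed")],
)

# A typical manipulation/querying task: aggregate successful events per user.
rows = conn.execute(
    """
    SELECT user_id, SUM(amount) AS total
    FROM events
    WHERE status = 'ok'
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()
print(rows)  # [('u1', 15.5)]
```

In a PySpark pipeline the same aggregation would typically be a `groupBy`/`agg` over a DataFrame, but the SQL itself carries over largely unchanged.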
Have: Experience or exposure in any one of these technologies: Power BI, Confluent Cloud. Nice to Have: Exposure or experience in either of these technologies is beneficial: Python/PySpark/FiveTran. Nice to Have: Exposure or experience is a plus: strong SQL skills, including complex query writing (MS SQL Server); good experience in data modeling; in-depth knowledge …
London (City of London), South East England, United Kingdom
HCLTech
and experienced AWS Lead Data Engineer, who will build and lead the development of scalable data pipelines and platforms on AWS. The ideal candidate will have deep expertise in PySpark, Glue, Athena, AWS Lake Formation, data modelling, DBT, Airflow, Docker and will be responsible for driving best practices in data engineering, governance, and DevOps.
Key Responsibilities:
• Lead the design and implementation of scalable, secure, and high-performance data pipelines using PySpark and AWS Glue.
• Architect and manage data lakes using AWS Lake Formation, ensuring proper access control and data governance.
• Develop and optimize data models (dimensional and normalized) to support analytics and reporting.
• Collaborate with analysts and business stakeholders to understand data requirements and deliver robust solutions.
• Implement and …
• … Engineering, or related field.
• 10+ years of experience in data engineering.
• Strong hands-on experience with AWS services: S3, Glue, Lake Formation, Athena, Redshift, Lambda, IAM, CloudWatch.
• Proficiency in PySpark, Python, DBT, Airflow, Docker and SQL.
• Deep understanding of data modelling techniques and best practices.
• Experience with CI/CD tools and version control systems like Git.
• Familiarity with …
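The dimensional-modelling responsibility described above can be sketched without any AWS dependency: splitting raw denormalized records into a dimension table and a fact table keyed by surrogate IDs. All record shapes and names here are invented for illustration; a real pipeline would perform this in PySpark on Glue rather than plain Python:

```python
# Minimal illustration of dimensional modelling: split denormalized order
# records into a customer dimension and an orders fact table.
# All record shapes and names are invented for illustration.
raw = [
    {"order_id": 1, "customer": "Acme", "country": "UK", "amount": 120.0},
    {"order_id": 2, "customer": "Acme", "country": "UK", "amount": 80.0},
    {"order_id": 3, "customer": "Globex", "country": "DE", "amount": 50.0},
]

# Build the dimension: one row per unique customer, with a surrogate key.
dim_customer = {}
for row in raw:
    key = (row["customer"], row["country"])
    if key not in dim_customer:
        dim_customer[key] = {
            "customer_sk": len(dim_customer) + 1,
            "customer": row["customer"],
            "country": row["country"],
        }

# Build the fact table: measures plus a foreign key into the dimension.
fact_orders = [
    {
        "order_id": row["order_id"],
        "customer_sk": dim_customer[(row["customer"], row["country"])]["customer_sk"],
        "amount": row["amount"],
    }
    for row in raw
]

print(len(dim_customer), len(fact_orders))  # 2 3
```

The same split is what a star schema in Redshift or Athena encodes: repeated descriptive attributes live once in the dimension, and the fact rows stay narrow.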
London (City of London), South East England, United Kingdom
Morela Solutions
on experience or strong interest in working with Foundry as a core platform
Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders
Broader Skillsets of Interest:
Python & PySpark – for data engineering and workflow automation
Platform Engineering – building and maintaining scalable, resilient infrastructure
Cloud (AWS preferred) – deploying and managing services in secure environments
Security Engineering & Access Control – designing …
data architecture, data modelling, and big data platforms. Proven expertise in Lakehouse Architecture, particularly with Databricks. Hands-on experience with tools such as Azure Data Factory, Unity Catalog, Synapse, PySpark, Power BI, SQL Server, Cosmos DB, and Python. In-depth knowledge of data governance frameworks and best practices. Solid understanding of cloud-native architectures and microservices in data environments. …
Central London, London, United Kingdom Hybrid / WFH Options
Gerrard White
predictive modelling techniques: Logistic Regression, GBMs, Elastic Net GLMs, GAMs, Decision Trees, Random Forests, Neural Nets and Clustering. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL). A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science). Experience of WTW's Radar and Emblem software is preferred. Proficient at communicating results in …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
for a hands-on Senior Data Engineer who thrives in technically complex environments and enjoys solving large-scale data pipeline challenges. You'll work with tools like AWS Glue, PySpark, Iceberg, Databricks, and Snowflake, collaborating with data scientists and stakeholders across multiple business units.
Key Responsibilities:
Design, build, and maintain scalable data pipelines and architectures.
Implement secure and efficient … initiatives.
Act as a subject matter expert, guiding technical direction and mentoring junior engineers.
What We're Looking For:
Strong hands-on experience with AWS data engineering tools: Glue, PySpark, Athena, Iceberg, Lake Formation, etc.
Proficiency in Python and SQL for data processing and analysis.
Deep understanding of data governance, quality, and security best practices.
Experience working with market …
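The data-quality emphasis in the listing above can be sketched as a simple validation gate that a pipeline might run before loading: drop rows failing basic rules and count rejects per rule. The rules and field names here are invented for illustration, not taken from any specific pipeline:

```python
# Illustrative data-quality gate: drop rows that fail basic rules and
# count rejects per rule, as a pipeline might do before loading.
# Field names and rules are invented for illustration.
def validate(rows):
    good, rejects = [], {"missing_id": 0, "bad_amount": 0}
    for row in rows:
        if not row.get("id"):
            rejects["missing_id"] += 1
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            rejects["bad_amount"] += 1
        else:
            good.append(row)
    return good, rejects

rows = [
    {"id": "a", "amount": 10},
    {"id": None, "amount": 5},
    {"id": "b", "amount": -1},
]
good, rejects = validate(rows)
print(len(good), rejects)  # 1 {'missing_id': 1, 'bad_amount': 1}
```

At scale the same checks would typically be PySpark filters with reject counts captured as metrics, so quality failures are observable rather than silently dropped.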
Data Engineer Manager
Department: Tech Hub
Employment Type: Permanent - Full Time
Location: London
Description
Contract type: Permanent, full-time
Hours: 37.5
Salary: circa £78,000 depending on experience
Location: London
WFH policy: Employees are required to attend the office 2 …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
with a focus on performance, scalability, and reliability.
Responsibilities
Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks
Develop scalable ETL processes using PySpark and Python
Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation
Ensure data quality, governance, and performance throughout the migration lifecycle
Document technical processes and support knowledge transfer to internal teams
Required Skills
Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL
Proven track record in delivering data migration projects within Azure environments
Ability to work independently and communicate effectively with technical and non-technical stakeholders
Previous experience in consultancy or client-facing roles is advantageous
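The legacy-mapping step described above (understanding legacy data structures and ensuring accurate mapping and transformation) can be sketched as a declarative column mapping plus type casts. All column names here are invented for illustration; in a real migration this transform would run as PySpark logic inside Data Factory or Databricks:

```python
# Illustrative legacy-to-target column mapping, the kind of transform a
# migration pipeline performs; all column names are invented.
MAPPING = {"CUST_NM": "customer_name", "ORD_DT": "order_date", "AMT": "amount"}

def migrate_row(legacy_row):
    # Rename legacy columns to target names, skipping columns we don't map.
    row = {new: legacy_row[old] for old, new in MAPPING.items() if old in legacy_row}
    # Cast the legacy text amount to a numeric type for the target schema.
    row["amount"] = float(row.get("amount", 0))
    return row

migrated = migrate_row({"CUST_NM": "Acme", "ORD_DT": "2024-01-31", "AMT": "19.99"})
print(migrated)
# {'customer_name': 'Acme', 'order_date': '2024-01-31', 'amount': 19.99}
```

Keeping the mapping as data rather than code makes it easy to review with stakeholders and to document, which matters when the legacy schema is the only source of truth.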