MySQL, NoSQL) to the AWS cloud. Strong knowledge of data extraction, transformation, and loading (ETL) processes, leveraging tools such as Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi, or scripting languages (Python, PySpark, SQL). Understanding of data warehousing and data modelling techniques (Star Schema, Snowflake Schema). Familiarity with security and compliance frameworks (GDPR, HIPAA, ISO 27001, NIST, SOX), PII handling, and AWS security More ❯
exchange platforms. Knowledge of dynamic pricing models. Experience using Databricks for scalable data processing and machine learning workflows. Experience working with big data technologies (e.g., Spark, PySpark). Experience with online market research methods/products. Additional Information: Our Values. Collaboration is our superpower. We uncover rich perspectives across the world. Success happens together. We deliver More ❯
You should be experienced in a data engineering role, with a strong track record of designing, building, and maintaining data pipelines and data architectures. Required Skills - Proficiency in Python, PySpark, and SQL for data manipulation and querying. Experience with containerisation technologies, specifically Kubernetes and Docker. Proven experience in designing and implementing data pipelines, working with big data technologies and architectures. More ❯
Have: Experience or exposure to any one of these technologies: Power BI, Confluent Cloud. Nice to Have: Exposure to or experience with any of these technologies is beneficial: Python/PySpark/FiveTran. Nice to Have: Exposure or experience is a plus. Strong SQL skills, including complex query writing (MS SQL Server). Good experience in data modelling. In-depth knowledge More ❯
and experienced AWS Lead Data Engineer, who will build and lead the development of scalable data pipelines and platforms on AWS. The ideal candidate will have deep expertise in PySpark, Glue, Athena, AWS Lake Formation, data modelling, DBT, Airflow, and Docker, and will be responsible for driving best practices in data engineering, governance, and DevOps. Key Responsibilities: • Lead the design and … implementation of scalable, secure, and high-performance data pipelines using PySpark and AWS Glue. • Architect and manage data lakes using AWS Lake Formation, ensuring proper access control and data governance. • Develop and optimise data models (dimensional and normalised) to support analytics and reporting. • Collaborate with analysts and business stakeholders to understand data requirements and deliver robust solutions. • Implement and … Engineering, or related field. • 10+ years of experience in data engineering. • Strong hands-on experience with AWS services: S3, Glue, Lake Formation, Athena, Redshift, Lambda, IAM, CloudWatch. • Proficiency in PySpark, Python, DBT, Airflow, Docker, and SQL. • Deep understanding of data modelling techniques and best practices. • Experience with CI/CD tools and version control systems like Git. • Familiarity with More ❯
London (City of London), South East England, United Kingdom
HCLTech
understanding of AI/ML/DL and Statistics, as well as coding proficiency using related open-source libraries and frameworks. Significant proficiency in SQL and languages like Python, PySpark, and/or Scala. Able to lead and work independently, as well as play a key role in a team. Good communication and interpersonal skills for working in a multicultural work More ❯
on experience or strong interest in working with Foundry as a core platform Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders Broader Skillsets of Interest: Python & PySpark – for data engineering and workflow automation Platform Engineering – building and maintaining scalable, resilient infrastructure Cloud (AWS preferred) – deploying and managing services in secure environments Security Engineering & Access Control – designing More ❯
London (City of London), South East England, United Kingdom
Morela Solutions
data architecture, data modelling, and big data platforms. Proven expertise in Lakehouse Architecture, particularly with Databricks. Hands-on experience with tools such as Azure Data Factory, Unity Catalog, Synapse, PySpark, Power BI, SQL Server, Cosmos DB, and Python. In-depth knowledge of data governance frameworks and best practices. Solid understanding of cloud-native architectures and microservices in data environments. More ❯
Trafford Park, Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Ikhoi Recruitment
Health & Safety. About You: Here’s what we’re looking for: · Proficiency in Python and SQL for analysis, model development, and data interrogation. · Experience in handling large datasets with PySpark and managing distributed data processing. · Comfortable deploying statistical or ML models into production environments. · Strong understanding of cloud infrastructure, preferably AWS. · A methodical, problem-solving mindset with high attention More ❯
Central London, London, United Kingdom Hybrid / WFH Options
Gerrard White
predictive modelling techniques: Logistic Regression, GBMs, Elastic Net GLMs, GAMs, Decision Trees, Random Forests, Neural Nets, and Clustering Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL) A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science) Experience of WTW's Radar and Emblem software is preferred Proficient at communicating results in More ❯
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Vermelo RPO
Knowledge of the technical differences between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL) A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science) Experience of WTW’s Radar software is preferred Proficient at communicating results in a concise More ❯
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
Knowledge of the technical differences between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL) A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science) Experience of WTW's Radar software is preferred Proficient at communicating results in a concise More ❯
for a hands-on Senior Data Engineer who thrives in technically complex environments and enjoys solving large-scale data pipeline challenges. You'll work with tools like AWS Glue, PySpark, Iceberg, Databricks, and Snowflake, collaborating with data scientists and stakeholders across multiple business units. Key Responsibilities: Design, build, and maintain scalable data pipelines and architectures. Implement secure and efficient … initiatives. Act as a subject matter expert, guiding technical direction and mentoring junior engineers. What We're Looking For: Strong hands-on experience with AWS data engineering tools: Glue, PySpark, Athena, Iceberg, Lake Formation, etc. Proficiency in Python and SQL for data processing and analysis. Deep understanding of data governance, quality, and security best practices. Experience working with market More ❯