engagement. Drive innovation through advanced analytics and research-based problem solving. To be successful you should have: 10 years' hands-on experience with AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Previous experience implementing best practices for data engineering, including data governance, data quality, and data security. Proficiency …
Key Skills: Strong SQL skills and experience with relational databases. Hands-on experience with Azure (ADF, Synapse, Data Lake) or AWS/GCP equivalents. Familiarity with scripting languages (Python, PySpark). Knowledge of data modelling and warehouse design (Kimball, Data Vault). Exposure to Power BI to support optimised data models for reporting. Agile team experience, CI/CD …
For further details or to enquire about other roles, please contact Nick Mandella at Harnham. KEYWORDS Python, SQL, AWS, GCP, Azure, Cloud, Databricks, Docker, Kubernetes, CI/CD, Terraform, PySpark, Spark, Kafka, machine learning, statistics, Data Science, Data Scientist, Big Data, Artificial Intelligence, private equity, finance.
is optimized. YOUR BACKGROUND AND EXPERIENCE: 5 years of commercial experience working as a Data Engineer. 3 years' exposure to the Azure stack: Databricks, Synapse, ADF. Python and PySpark. Airflow for orchestration. Test-Driven Development and automated testing. ETL development …
complex data sets. Collaborate with data scientists to deploy machine learning models. Contribute to strategy, planning, and continuous improvement. Required Experience: Hands-on experience with AWS data tools: Glue, PySpark, Athena, Iceberg, Lake Formation. Strong Python and SQL skills for data processing and analysis. Deep understanding of data governance, quality, and security. Knowledge of market data and its business …
London, South East, England, United Kingdom Hybrid / WFH Options
Salt Search
EMEA to drive productivity and efficiency. Own sales operations functions including pipeline management, incentive compensation, deal desk, lead management, and contact centre operations. Use SQL and Python (Pandas, PySpark) to analyse data, automate workflows, and generate insights. Design and manage ETL/ELT processes, data models, and reporting automation. Leverage Databricks, Snowflake, and GCP to enable scalable …
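The pipeline-reporting automation this listing describes can be sketched minimally as follows. The record shape and field names here are hypothetical illustrations only; in practice this aggregation would be done with Pandas or PySpark against Databricks or Snowflake, as the listing notes.

```python
# Minimal sketch of sales-pipeline aggregation (hypothetical field names).
# A Pandas equivalent would be df.groupby("stage")["value"].sum().
from collections import defaultdict

opportunities = [
    {"stage": "qualified", "value": 5000},
    {"stage": "closed_won", "value": 12000},
    {"stage": "qualified", "value": 3000},
]

# Sum opportunity value per pipeline stage.
totals = defaultdict(int)
for opp in opportunities:
    totals[opp["stage"]] += opp["value"]

print(dict(totals))  # {'qualified': 8000, 'closed_won': 12000}
```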
business-critical programme. Key Requirements: Proven experience as a Data Engineer within healthcare. Proficiency in Azure Data Factory, Azure Synapse, Snowflake, and SQL. Strong Python skills, including experience with PySpark and metadata-driven frameworks. Familiarity with cloud platforms (Azure preferred), pipelines, and production code. Solid understanding of relational databases and data modelling (3NF and dimensional). Strong communication skills and …
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
and influence decisions. Strong understanding of data models and analytics; exposure to predictive modelling and machine learning is a plus. Proficient in SQL and Python, with bonus points for PySpark, SparkSQL, and Git. Skilled in data visualisation with tools such as Tableau or Power BI. Confident writing efficient code and troubleshooting sophisticated queries. Clear and adaptable communicator, able to …
South East London, London, United Kingdom Hybrid / WFH Options
Certain Advantage
C#, C++, Rust, Java, etc.). Strong background in Azure cloud application development, including security, observability, storage, and database resources. Solid understanding of data engineering tools and technologies (Databricks, PySpark, Lakehouses, Kafka). Advanced mathematics and quantitative analysis skills, ideally with hands-on experience in probabilistic modeling and the valuation of financial derivatives. Domain expertise in derivatives within energy …
and stakeholder engagement abilities. Strategic mindset with a focus on risk, governance, and transformation. Proven ability to lead projects and coach others. Technical skills to include: Python or R; PySpark; experience of deploying models; AWS cloud; experience of GenAI; experience of working in a large, complex organisation. Finance sector would be desirable, but not essential. Locations: London, Northampton, Manchester …
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
AWS/Azure - moving towards Azure). Collaborate with stakeholders and technical teams to deliver solutions that support business growth. Skills & Experience Required: Strong hands-on experience in Python, PySpark, SQL, Jupyter. Experience in Machine Learning engineering or data-focused development. Exposure to working in cloud platforms (AWS/Azure). Ability to collaborate effectively with senior engineers …
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
Senior Engineering Level Mentoring/Team Leading experience - nice to have (full support from Engineering Manager). Hands-on development/engineering background. Machine Learning or Data background. Technical Experience: PySpark, Python, SQL, Jupyter. Cloud: AWS, Azure (cloud environment) - moving towards Azure. Nice to Have: Astro/Airflow, Notebook. Reasonable Adjustments: Respect and equality are core values to us. We …
Sunbury-On-Thames, London, United Kingdom Hybrid / WFH Options
BP Energy
systems, and wants to have a direct impact on data-driven decision-making. Key Responsibilities: Design, build, and maintain scalable and reliable ETL/ELT data pipelines using Python, PySpark, and SQL. Develop and manage data workflows and orchestration using tools such as Airflow or similar. Optimize data processes for performance, scalability, and cost-efficiency, particularly in cloud environments. … security and compliance best practices are followed across data systems and processes. Required Qualifications: 5+ years of experience in data engineering or a related field. Proficiency in Python and PySpark for data processing and automation. Strong command of SQL for data querying, transformation, and performance tuning. Deep experience with cloud platforms, preferably AWS (e.g., S3, Glue, Redshift, Athena, EMR …
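The extract-transform-load responsibility described above follows a standard three-stage shape. The sketch below illustrates it with stdlib Python for portability; file names, table names, and the filter threshold are hypothetical. A PySpark version would follow the same shape with `spark.read`, DataFrame transformations, and a `write` step, typically scheduled from an Airflow DAG.

```python
# Minimal ETL sketch: extract rows from CSV text, transform them,
# load them into SQLite. All names and values are illustrative.
import csv
import io
import sqlite3

RAW = """id,amount
1,10.5
2,3.2
3,99.0
"""

def extract(text):
    # Parse CSV text into a list of dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Cast types and keep rows under a (hypothetical) threshold.
    return [(int(r["id"]), float(r["amount"]))
            for r in rows if float(r["amount"]) < 50]

def load(rows, conn):
    # Write the cleaned rows to a target table; return the row count.
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)
print(loaded)  # 2
```

The separation into three small functions mirrors how such pipelines are usually tested and orchestrated: each stage can be unit-tested in isolation and wired up as a separate task in the scheduler.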
WE NEED THE PYTHON/DATA ENGINEER TO HAVE.... Current DV Security Clearance (Standard or Enhanced). Experience with big data tools such as Hadoop, Cloudera or Elasticsearch. Python/PySpark experience. Experience with Palantir Foundry is nice to have. Experience working in an Agile Scrum environment with tools such as Confluence/Jira. Experience in design, development, test and …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming languages such as Spark, Java, Python, PySpark, Scala (minimum of 2). Extensive Data Engineering hands-on experience (coding, configuration, automation, delivery, monitoring, security). ETL tools such as Azure Data Factory (ADF) and Databricks or … UK, and you MUST have the Right to Work in the UK long-term without the need for company sponsorship. KEYWORDS Senior Data Engineer, Coding Skills, Spark, Java, Python, PySpark, Scala, ETL Tools, Azure Data Factory (ADF), Databricks, HDFS, Hadoop, Big Data, Cloudera, Data Lakes, Azure Data, Delta Lake, Data Lake, Databricks Lakehouse, Data Analytics, SQL, Geospatial Data, FME …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant AWS or Azure … the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Factory, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake. Please note that due to a high level …
role for you. Key Responsibilities: Adapt and deploy a cutting-edge platform to meet customer needs. Design scalable generative AI workflows (e.g., using Palantir). Execute complex data integrations using PySpark and similar tools. Collaborate directly with clients to understand their priorities and deliver impact. Why Join? Be part of a mission-driven startup redefining how industrial companies operate. Work …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
with a focus on performance, scalability, and reliability. Responsibilities: Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks. Develop scalable ETL processes using PySpark and Python. Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation. Ensure data quality, governance, and performance throughout the migration lifecycle. Document technical processes … and support knowledge transfer to internal teams. Required Skills: Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL. Proven track record in delivering data migration projects within Azure environments. Ability to work independently and communicate effectively with technical and non-technical stakeholders. Previous experience in consultancy or client-facing roles is advantageous.
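One concrete form the "ensure data quality throughout the migration lifecycle" responsibility often takes is a post-migration reconciliation check. The sketch below shows the idea with stdlib SQLite; the table names, columns, and fingerprint choice are hypothetical, and a Databricks/PySpark version would compute the same aggregates over the legacy and migrated DataFrames.

```python
# Sketch of a post-migration reconciliation check: compare row counts and
# a cheap per-column checksum between a legacy table and its migrated copy.
# All schema and data here are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE legacy (id INTEGER, amount REAL);
CREATE TABLE migrated (id INTEGER, amount REAL);
INSERT INTO legacy VALUES (1, 10.0), (2, 20.0);
INSERT INTO migrated VALUES (1, 10.0), (2, 20.0);
""")

def profile(conn, table):
    # (row count, sum of amount) acts as a cheap reconciliation fingerprint;
    # real checks would add per-column hashes or sampled row comparisons.
    return conn.execute(f"SELECT COUNT(*), SUM(amount) FROM {table}").fetchone()

match = profile(conn, "legacy") == profile(conn, "migrated")
print(match)  # True
```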
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
role: Adapt and deploy a powerful data platform to solve complex business problems. Design scalable generative AI workflows using modern platforms like Palantir AIP. Execute advanced data integration using PySpark and distributed technologies. Collaborate directly with clients to understand priorities and deliver outcomes. What We're Looking For: Strong skills in PySpark, Python, and SQL. Ability to translate …