with multiple data analytics tools (e.g. Power BI). Deep understanding of data warehousing concepts, ETL/ELT pipelines and dimensional modelling. Proficiency in advanced programming languages (Python/PySpark, SQL). Experience in data pipeline orchestration (e.g. Airflow, Data Factory). Familiarity with DevOps and CI/CD practices (Git, Azure DevOps, etc.). Ability to communicate technical concepts to both …
logistics, utilities, airlines, etc.). Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL (advanced query optimisation). Experience building scalable ETL pipelines and data transformations. Knowledge of data quality frameworks and monitoring. Experience with Git, CI/CD pipelines, and …
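A minimal sketch of the medallion promotion this listing refers to, assuming a Spark environment with Delta Lake available; paths, table and column names are hypothetical:

```python
# Promote raw "bronze" events to a cleansed "silver" Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/lake/bronze/orders")

silver = (
    bronze
    .dropDuplicates(["order_id"])                     # de-duplicate on the business key
    .filter(F.col("order_ts").isNotNull())            # drop malformed rows
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
)

(silver.write
       .format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .save("/lake/silver/orders"))
```

Gold-layer aggregates would then read from the silver path rather than from the raw landing zone.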
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
integration. Proficient in SQL for complex analytical transformations and optimisations. Comfortable working in agile teams and using Azure DevOps for CI/CD workflows. Nice to have: Python or PySpark for automation and data quality testing. Knowledge of data governance and security frameworks (RBAC, masking, encryption). Prior experience in financial services or insurance environments. All candidates must complete …
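A simple illustration of the kind of data quality testing this listing mentions, in PySpark; the table and the rules are invented for the example:

```python
# Fail the run if key integrity rules are violated.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
policies = spark.read.parquet("/lake/silver/policies")

null_ids = policies.filter(F.col("policy_id").isNull()).count()
dupes = (policies.groupBy("policy_id").count()
                 .filter(F.col("count") > 1).count())

assert null_ids == 0, f"{null_ids} rows with a null policy_id"
assert dupes == 0, f"{dupes} duplicated policy_id values"
```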
with distributed computing frameworks knowledge: Hive/Hadoop, Apache Spark, Kafka, Airflow. Working with programming languages: Python, Java, SQL. Working on building ETL (Extraction, Transformation and Loading) solutions using PySpark. Experience in SQL/NoSQL database design. Deep understanding of software architecture, object-oriented design principles, and data structures. Extensive experience in developing microservices using Java and Python. Good experience …
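For the orchestration side, a skeletal Airflow DAG (assuming Airflow 2.4+; the task bodies are stubs) might look like:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): ...    # pull from source systems
def transform(): ...  # e.g. submit a PySpark job
def load(): ...       # publish to the warehouse

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```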
Greater Manchester, England, United Kingdom Hybrid / WFH Options
Searchability®
Enhanced Maternity & Paternity, Charity Volunteer Days, Cycle to Work scheme, and more… DATA ENGINEER – ESSENTIAL SKILLS: Proven experience building data pipelines using Databricks. Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming. Experience working with Kafka (MSK) and handling real-time data. Good knowledge of Delta Lake/Delta Live Tables and the Medallion …
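As a sketch of what Structured Streaming from Kafka (e.g. Amazon MSK) into Delta Lake looks like; the broker address, topic, and paths are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")
         .option("subscribe", "orders")
         .load()
         # Kafka delivers bytes; cast the payload for downstream parsing
         .select(F.col("value").cast("string").alias("payload"), "timestamp")
)

(events.writeStream.format("delta")
       .option("checkpointLocation", "/lake/_checkpoints/orders")
       .outputMode("append")
       .start("/lake/bronze/orders")
       .awaitTermination())
```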
Birmingham, West Midlands, England, United Kingdom
SF Recruitment
end (Data Factory, Synapse, Fabric, or Databricks). Strong SQL development and data modelling capability. Experience integrating ERP or legacy systems into cloud data platforms. Proficiency in Python or PySpark for transformation and automation. Understanding of data governance, access control, and security within Azure. Hands-on experience preparing data for Power BI or other analytics tools. Excellent communication skills …
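One common shape for the ERP-to-cloud integration named here is a watermark-based JDBC extract in PySpark; the connection details, table, and paths are invented:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("erp-extract").getOrCreate()

# In practice the watermark would come from a control table, not a literal.
last_watermark = "2024-01-01 00:00:00"

orders = (
    spark.read.format("jdbc")
         .option("url", "jdbc:sqlserver://erp-host:1433;databaseName=ERP")
         .option("dbtable",
                 f"(SELECT * FROM dbo.Orders WHERE ModifiedAt > '{last_watermark}') src")
         .option("user", "etl_user")
         .option("password", "<secret>")
         .load()
)

orders.write.mode("append").parquet("/lake/raw/erp/orders")
```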
lake and Azure Monitor providing added flexibility for diverse migration and integration projects. Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting languages (Python, PySpark, SQL) for data extraction and transformation. Prior experience with data warehousing and data modelling (star schema or snowflake schema). Skilled in security and compliance frameworks such as GDPR, HIPAA, ISO …
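To make the star-schema point concrete, a typical fact-to-dimension query in PySpark; table and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

fact_sales = spark.read.table("dw.fact_sales")
dim_date = spark.read.table("dw.dim_date")
dim_product = spark.read.table("dw.dim_product")

# Fact rows carry surrogate keys; dimensions carry the descriptive attributes.
revenue_by_month = (
    fact_sales.join(dim_date, "date_key")
              .join(dim_product, "product_key")
              .groupBy("year", "month", "category")
              .agg(F.sum("net_amount").alias("revenue"))
)
revenue_by_month.show()
```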
to adapt quickly to changing environments and priorities, maintaining effectiveness in dynamic situations. Proficiency using SQL Server in a highly transactional environment. Experience in either C# or Python/PySpark for data engineering or development tasks. Strong understanding of DevOps principles and experience with relevant tools (e.g. Azure DevOps, Git, Terraform) for CI/CD, automation, and infrastructure management. …
and compliance throughout. Key Requirements: Active SC Clearance (used within the last 12 months). Proven experience with Databricks (including notebooks, clusters, and job orchestration). Strong knowledge of Apache Spark, PySpark, and distributed data processing. Experience building and optimising ETL pipelines and data workflows. Familiarity with Delta Lake, SQL, and data modelling best practices. Ability to work with large, complex …
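On the optimisation point, routine Delta Lake maintenance on Databricks often reduces to two statements; the table and column names are illustrative, and `spark` is the session Databricks provides:

```python
# Compact small files and co-locate rows by the columns queries filter on.
spark.sql("OPTIMIZE lake.events ZORDER BY (event_date, customer_id)")

# Remove data files no longer referenced by the table (7-day retention here).
spark.sql("VACUUM lake.events RETAIN 168 HOURS")
```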
and processes to support innovation at scale. What We’re Looking For: Strong hands-on experience with Azure Databricks, Data Factory, Blob Storage, and Delta Lake. Proficiency in Python, PySpark, and SQL. Deep understanding of ETL/ELT, CDC, streaming data, and lakehouse architecture. Proven ability to optimise data systems for performance, scalability, and cost-efficiency. A proactive problem …
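The CDC requirement typically lands as a Delta Lake MERGE upsert; this sketch assumes a `changes` DataFrame carrying an `op` flag from the CDC feed and an existing `spark` session, and all names are invented:

```python
from delta.tables import DeltaTable

target = DeltaTable.forPath(spark, "/lake/silver/customers")

(target.alias("t")
       .merge(changes.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedDelete(condition="s.op = 'D'")    # tombstones from the source
       .whenMatchedUpdateAll(condition="s.op = 'U'")
       .whenNotMatchedInsertAll()                    # brand-new records
       .execute())
```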
schema, snowflake schema). Version control: practical experience with Git (branching, merging, pull requests). Preferred Qualifications (A Plus): Experience with a distributed computing framework like Apache Spark (using PySpark). Familiarity with cloud data services (AWS S3/Redshift, Azure Data Lake/Synapse, or Google BigQuery/Cloud Storage). Exposure to workflow orchestration tools (Apache Airflow …
with various businesses and gaining an overview of many different sectors. What We’re Looking For: 5+ years’ hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Strong experience engineering in a front-office/capital markets environment. Previous experience in implementing best practices for …
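A hedged sketch of driving Athena from Python with boto3, as the AWS stack here implies; the region, database, query, and results bucket are placeholders:

```python
import time

import boto3

athena = boto3.client("athena", region_name="eu-west-2")

run = athena.start_query_execution(
    QueryString="SELECT trade_date, SUM(notional) AS total FROM trades GROUP BY trade_date",
    QueryExecutionContext={"Database": "markets"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(
        QueryExecutionId=run["QueryExecutionId"]
    )["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)
```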
with various businesses and gaining an overview of many different sectors. What We’re Looking For: 10+ years’ hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Strong experience engineering in a front-office/capital markets environment. Previous experience in implementing best practices for …
CPG, Consumer Products, Retail, Telecom or Financial Services industries. Applied knowledge of supply chain and associated data, e.g. procurement, manufacturing, logistics. Good experience in working with data (Python/PySpark/Databricks) in a cloud-based data systems environment (ideally Azure). Experience developing using agile software development methodologies and principles such as DevOps, CI/CD, and unit …
London, England, United Kingdom Hybrid / WFH Options
Client Server
are an experienced Data Engineer within financial services environments. You have expertise with GCP including BigQuery, Pub/Sub, Cloud Composer and IAM. You have strong Python, SQL and PySpark skills. You have experience with real-time data streaming using Kafka or Spark. You have a good knowledge of Data Lakes, Data Warehousing, Data Modelling. You're familiar with …
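For the GCP side, a minimal google-cloud-bigquery sketch; the project, dataset, and schema are made up:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

query = """
    SELECT account_id, SUM(amount) AS total
    FROM `my-project.payments.transactions`
    WHERE DATE(created_at) = CURRENT_DATE()
    GROUP BY account_id
"""

# result() blocks until the job finishes, then iterates the rows.
for row in client.query(query).result():
    print(row.account_id, row.total)
```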
City of London, London, United Kingdom Hybrid / WFH Options
Recann
we’re looking for 4+ years’ experience in Azure data engineering. Strong skills with Azure Data Factory, Azure Data Fabric, Azure Synapse Analytics, Azure SQL Database. Proficiency in Python, PySpark, SQL, or Scala. Data modelling and relational database expertise. Azure certifications highly desirable. Power BI experience a bonus (but not essential). Why join? Join a forward-thinking organisation …
junior engineers and contribute to engineering best practices. Required Skills & Experience: 5+ years of experience building and maintaining data pipelines in production environments. Strong Python and SQL skills (Pandas, PySpark, query optimisation). Cloud experience (AWS preferred) including S3, Redshift, Glue, Lambda. Familiarity with data warehousing (Redshift, Snowflake, BigQuery). Experience with workflow orchestration tools (Airflow, Dagster, Prefect). Understanding of distributed …
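Of the orchestration tools named here, Dagster's asset model is the most compact to sketch; the extract step is stubbed with an in-memory frame, and all names are hypothetical:

```python
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_orders() -> pd.DataFrame:
    # Stand-in for an extract from S3, an API, or a database.
    return pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, 5.0, 5.0]})

@asset
def clean_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Depends on raw_orders via the matching parameter name.
    return raw_orders.drop_duplicates("order_id")

defs = Definitions(assets=[raw_orders, clean_orders])
```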
Atherstone, Warwickshire, England, United Kingdom Hybrid / WFH Options
Aldi
end-to-end ownership of demand delivery. Provide technical guidance for team members. Providing 2nd or 3rd level technical support. About You: Experience using SQL, SQL Server DB, Python & PySpark. Experience using Azure Data Factory. Experience using Databricks and Cloudsmith. Data warehousing experience. Project management experience. The ability to interact with the operational business and other departments, translating …