with multiple data analytics tools (e.g. Power BI) Deep understanding of data warehousing concepts, ETL/ELT pipelines and dimensional modelling Proficiency in advanced programming languages (Python/PySpark, SQL) Experience in data pipeline orchestration (e.g. Airflow, Data Factory) Familiarity with DevOps and CI/CD practices (Git, Azure DevOps etc.) Ability to communicate technical concepts to both …
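The dimensional modelling named above centres on fact tables joined to dimension tables (a star schema). A minimal sketch using Python's stdlib `sqlite3`, with hypothetical table and column names chosen for illustration only:

```python
import sqlite3

# Illustrative star schema: one fact table (fact_sales) joined to one
# dimension table (dim_product). Names and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales VALUES (10, 1, 9.5), (11, 1, 4.5), (12, 2, 7.0);
""")

# A typical analytical query: aggregate the fact table, grouped by a
# dimension attribute.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 7.0), ('widget', 14.0)]
```

In a real warehouse the same shape appears at scale; surrogate keys on the dimensions keep fact rows narrow and joins cheap.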
London, South East, England, United Kingdom Hybrid/Remote Options
Robert Half
integration. Proficient in SQL for complex analytical transformations and optimisations. Comfortable working in agile teams and using Azure DevOps for CI/CD workflows. Nice to Have Python or PySpark for automation and data quality testing. Knowledge of data governance and security frameworks (RBAC, masking, encryption). Prior experience in financial services or insurance environments. All candidates must complete …
lake and Azure Monitor providing added flexibility for diverse migration and integration projects. Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting languages (Python, PySpark, SQL) for data extraction and transformation. Prior experience with Data warehousing and Data modelling (Star Schema or Snowflake Schema). Skilled in security frameworks such as GDPR, HIPAA, ISO …
with Distributed computing frameworks knowledge: Hive/Hadoop, Apache Spark, Kafka, Airflow Working with programming languages Python, Java, SQL. Working on building ETL (Extraction, Transformation and Loading) solutions using PySpark Experience in SQL/NoSQL database design Deep understanding of software architecture, object-oriented design principles, and data structures Extensive experience in developing microservices using Java, Python Good experience …
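The ETL pattern this listing describes breaks into three steps: extract raw records, transform (type, clean, filter), load into a target. A minimal plain-Python sketch with hypothetical data; a PySpark version would express the same steps as DataFrame transformations over distributed partitions:

```python
# Minimal ETL sketch. Source data, field names, and the in-memory
# "warehouse" target are illustrative assumptions, not a real system.

def extract():
    # Stand-in for reading from a source system (file, API, database).
    return [
        {"id": 1, "country": " uk ", "amount": "10.0"},
        {"id": 2, "country": "US", "amount": "bad"},   # malformed row
        {"id": 3, "country": "uk", "amount": "2.5"},
    ]

def transform(rows):
    out = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # drop rows that fail type coercion
        out.append({"id": row["id"],
                    "country": row["country"].strip().upper(),
                    "amount": amount})
    return out

def load(rows, target):
    target.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)
```

The malformed row is dropped during transform and the surviving rows arrive typed and normalised, which is the property the "data quality" requirements in these listings are getting at.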
Greater Manchester, England, United Kingdom Hybrid/Remote Options
Searchability®
Enhanced Maternity & Paternity Charity Volunteer Days Cycle to work scheme And More… DATA ENGINEER – ESSENTIAL SKILLS Proven experience building data pipelines using Databricks. Strong understanding of Apache Spark (PySpark or Scala) and Structured Streaming. Experience working with Kafka (MSK) and handling real-time data. Good knowledge of Delta Lake/Delta Live Tables and the Medallion …
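The Medallion architecture the listing cuts off at layers data as bronze (raw, as received), silver (validated and typed), and gold (business-level aggregates). A toy sketch using plain Python lists in place of Delta tables; the layer contents and field names are illustrative assumptions:

```python
from collections import defaultdict

# Bronze: raw ingested events, kept as received (including bad records).
bronze = [
    {"sensor": "a", "temp": "21.5"},
    {"sensor": "a", "temp": None},      # bad reading
    {"sensor": "b", "temp": "19.0"},
]

# Silver: validated, typed records only.
silver = [
    {"sensor": r["sensor"], "temp": float(r["temp"])}
    for r in bronze if r["temp"] is not None
]

# Gold: business-level aggregate (mean temperature per sensor).
readings = defaultdict(list)
for r in silver:
    readings[r["sensor"]].append(r["temp"])
gold = {s: sum(v) / len(v) for s, v in readings.items()}
print(gold)  # {'a': 21.5, 'b': 19.0}
```

In Databricks each layer would be a Delta table and the silver and gold steps would be streaming or batch jobs between them; the point of the layering is that raw data is never lost and each downstream table has a single, auditable derivation.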
a software engineer or a data engineer and a strong passion to learn. BS/MS in Computer Science or equivalent experience in related fields. Experience in Python, Pandas, PySpark, and Notebooks. SQL knowledge and experience working with relational databases including schema design, access patterns, query performance optimization, etc. Experience with data pipeline technologies like AWS Glue, Airflow, Kafka …
and processes to support innovation at scale What We’re Looking For Strong hands-on experience with Azure Databricks, Data Factory, Blob Storage, and Delta Lake Proficiency in Python, PySpark, and SQL Deep understanding of ETL/ELT, CDC, streaming data, and lakehouse architecture Proven ability to optimise data systems for performance, scalability, and cost-efficiency A proactive problem …
to adapt quickly to changing environments and priorities, maintaining effectiveness in dynamic situations Proficiency using SQL Server in a highly transactional environment. Experience in either C# or Python/PySpark for data engineering or development tasks. Strong understanding of DevOps principles and experience with relevant tools, e.g. Azure DevOps, Git, Terraform for CI/CD, automation, and infrastructure management. …
s in Bioinformatics, Computer Science, Data Engineering, or related field. 4+ years of experience in data engineering or bioinformatics data management. Strong Python and SQL skills; experience with Pandas, PySpark, Dask, or similar frameworks. Familiar with Linux, Docker, and modern data architectures (relational, object, non-relational). Experience with orchestration tools (Airflow/Prefect) and cloud platforms (AWS preferred …
and compliance throughout. Key Requirements Active SC Clearance (used within the last 12 months) Proven experience with Databricks (including notebooks, clusters, and job orchestration) Strong knowledge of Apache Spark, PySpark, and distributed data processing Experience building and optimising ETL pipelines and data workflows Familiarity with Delta Lake, SQL, and data modelling best practices Ability to work with large, complex …
with various businesses and gaining an overview of many different sectors. What We’re Looking For 5+ years' hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Strong experience engineering in a front-office/capital markets environment. Previous experience in implementing best practices for …
with various businesses and gaining an overview of many different sectors. What We’re Looking For 10+ years' hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Strong experience engineering in a front-office/capital markets environment. Previous experience in implementing best practices for …
and prompt engineering. Mandatory Skills: Cloud Platforms: Deep experience with AWS (S3, Lambda, Glue, Redshift) and/or Azure (Data Lake, Synapse). Programming & Scripting: Proficiency in Python, SQL, PySpark etc. ETL/ELT & Streaming: Expertise in technologies like Apache Airflow, Glue, Kafka, Informatica, EventBridge etc. Industrial Data Integration: Familiarity with OT data schemas originating from OSIsoft PI, SCADA …
CPG, Consumer Products, Retail, Telecom or Financial Services industries. Applied knowledge of supply chain and associated data, e.g. procurement, manufacturing, logistics Good experience in working with data (Python/PySpark/Databricks) in a cloud-based data systems environment (ideally Azure). Experience in developing using agile software development methodologies, principles such as DevOps, CI/CD, and unit …
and client teams on solution design and delivery. Mentor junior engineers and contribute to reusable frameworks and accelerators. Skills & Experience Strong hands-on experience with Databricks, Python, SQL, and PySpark. Solid understanding of Delta Lake, Unity Catalog, and MLflow. Experience with DevOps tools (Git, CI/CD, IaC). Excellent communication and stakeholder skills. Databricks certification and …
Atherstone, Warwickshire, England, United Kingdom Hybrid/Remote Options
Aldi
end-to-end ownership of demand delivery Provide technical guidance for team members Providing 2nd- or 3rd-level technical support About You Experience using SQL, SQL Server DB, Python & PySpark Experience using Azure Data Factory Experience using Databricks and Cloudsmith Data Warehousing Experience Project Management Experience The ability to interact with the operational business and other departments, translating …
S3 Data Lake, and CloudWatch. Strong knowledge of data extraction, transformation, and loading (ETL) processes, leveraging tools such as Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi, or scripting languages (Python, PySpark, SQL). Solid understanding of data warehousing and data modelling techniques (Star Schema, Snowflake Schema). Familiarity with security frameworks (GDPR, HIPAA, ISO 27001, NIST, SOX, PII) and AWS …
City of London, London, United Kingdom Hybrid/Remote Options
Recann
we’re looking for 4+ years’ experience in Azure data engineering. Strong skills with Azure Data Factory, Azure Data Fabric, Azure Synapse Analytics, Azure SQL Database. Proficiency in Python, PySpark, SQL, or Scala. Data modelling and relational database expertise. Azure certifications highly desirable. Power BI experience a bonus (but not essential). Why join? Join a forward-thinking organisation …
junior engineers and contribute to engineering best practices Required Skills & Experience: 5+ years of experience building and maintaining data pipelines in production environments Strong Python and SQL skills (Pandas, PySpark, query optimisation) Cloud experience (AWS preferred) including S3, Redshift, Glue, Lambda Familiarity with data warehousing (Redshift, Snowflake, BigQuery) Experience with workflow orchestration tools (Airflow, Dagster, Prefect) Understanding of distributed …