…critical business functions. What you’ll be doing: Building and maintaining scalable data pipelines using Azure Data Factory, Microsoft Fabric, and Azure Synapse Analytics. Developing robust ELT/ETL processes to integrate data from multiple business systems. Ensuring data consistency, security, and compliance (including GDPR). Supporting analytics/reporting teams with clean, structured datasets. Collaborating with IT, Finance…
City of London, London, United Kingdom Hybrid/Remote Options
Recann
…portfolio managers, quants, and analysts to design and deliver scalable, cloud-based data solutions that power trading and investment strategies. Key Responsibilities: Design, build, and optimise data pipelines and ETL workflows using AWS, Python, and SQL. Develop and maintain data models, ensuring accuracy and reliability of trading and market data. Deliver Power BI dashboards and reports to provide real-time…
…years of experience in data engineering or a similar role. Strong SQL skills and proficiency in at least one programming language (ideally Python). Understanding of data warehousing concepts and ETL/ELT patterns. Experience with version control (Git), testing, and code review practices. Familiarity with cloud-based data environments (e.g. AWS, GCP, or Azure). Exposure to modern data tools such as…
…to detail. Strong interpersonal and communication skills, with experience liaising between technical and business teams. Understanding of data quality principles and governance frameworks. Desirable Skills and Experience: Familiarity with ETL or data pipeline concepts and tools (e.g. Pentaho, MuleSoft, dbt Labs). Knowledge of data warehousing and reporting best practices. Understanding of data modelling and metadata management. Awareness of GDPR…
Effective communicator with experience engaging both technical and non-technical stakeholders. Understanding of data quality principles and data governance frameworks within an education setting. Desirable Skills & Experience: Familiarity with ETL or data pipeline tools (e.g., Pentaho, MuleSoft, dbt Labs). Knowledge of data warehousing and reporting best practices. Understanding of data modelling and metadata management. Awareness of GDPR and data…
JavaScript is a plus but not required. Experience implementing development best practices, including writing automated tests and CI/CD deployment. Responsibilities: Build and maintain reliable data pipelines and ETL processes for data ingestion and transformation. Support the development and maintenance of data models and data warehouses used for reporting and analytics. Collaborate with senior engineers, analysts, and product teams…
…metadata management, and data governance. Experience with modern data platforms such as Azure Data Lake, Databricks, Power BI, and SAP BTP. Solid grasp of enterprise integration patterns (APIs, streaming, ETL/ELT, event-driven architectures). Ability to translate complex data concepts into clear, value-focused business outcomes. Excellent stakeholder management and communication skills across technical and non-technical audiences.
…with relational databases, including schema design, access patterns, query performance optimization, etc. Experience with data pipeline technologies like AWS Glue, Airflow, Kafka, or cloud-based equivalents. Experience with ETL and data warehousing platforms like Databricks, Snowflake, or equivalent. Container-based deployment experience using Docker and Kubernetes. Strong verbal and written communication skills. NICE TO HAVE: Experience working with or data…
…your chance to work on exciting projects, design robust data architectures, and help organisations unlock the full potential of their data. Key Responsibilities: Build and optimise data pipelines and ETL workflows using Microsoft Fabric and Azure Synapse or Databricks. Implement scalable solutions for data ingestion, storage, and transformation. Develop clean, reusable Python code for data engineering tasks. Research and integrate…
…the business build a new Azure platform, so exposure to Azure, ADF, and Azure DevOps would also be highly beneficial. Principal Responsibilities: Perform the day-to-day running of the ETL processes that feed into the central data repository. Work with key stakeholders and other teams to gather requirements, identify where the data is located, and then implement the required…
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
…office. What will you be doing? Deploy and manage ML models from dev to production at scale. Build and maintain cloud-based data science environments. Automate pipelines and services (ETL, storage, databases). Collaborate with data scientists and engineers. Explore new tools to boost ML performance and reliability. What are we looking for? Solid MLOps or ML Engineering experience. Strong Python…
London, South East, England, United Kingdom Hybrid/Remote Options
eTech Partners
…ecosystem. Writing advanced SQL and optimising complex queries. Building and orchestrating data pipelines across modern cloud platforms. Applying expert knowledge of data integration tools and methods. Designing ELT/ETL workflows using medallion architecture and lakehouse principles. Developing and maintaining data warehouses and data marts. Skills Needed: 2+ years’ experience in data engineering, with at least 1 year on Microsoft…
Ability to write Spark code for large-scale data processing, including RDDs, DataFrames, and Spark SQL. Hands-on experience with lakehouses, dataflows, pipelines, and semantic models. Ability to build ETL workflows. Familiarity with time-series data, market feeds, transactional records, and risk metrics. Familiarity with Git, DevOps pipelines, and automated deployment. Strong communication skills with a collaborative mindset to work…
…datasets. You’ll collaborate closely with scientists, bioinformaticians, and ML engineers to deliver robust, compliant, and reusable data solutions that drive research and discovery. Key Responsibilities: Develop and maintain ETL pipelines for bioinformatics and omics datasets across cloud and on-prem environments. Standardize and harmonize diverse data sources, ensuring metadata quality and FAIR compliance. Integrate multi-modal datasets (genomic, transcriptomic…