implementing data engineering best practices (e.g., source-to-target mappings, coding standards, data quality, etc.), working closely with the external party who set up the environment. Create and maintain ETL processes, data mappings & transformations to orchestrate data integrations. Ensure data integrity, quality, privacy, and security across systems, in line with client and regulatory requirements. Optimize data solutions for performance and … up monitoring and data quality exception handling. Strong data modelling experience. Experience managing and developing CI/CD pipelines. Experience with Microsoft Azure products and services, and proficiency in ETL processes. Experience of working with APIs to integrate data flows between disparate cloud systems. Strong analytical and problem-solving skills, with the ability to work independently and collaboratively. The aptitude …
appropriate architecture design, opting for modern architectures where possible. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop, optimize and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data … teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions. Best Practices: Stay current with emerging technologies and best practices in data engineering, cloud architecture, and DevOps. Mentoring … and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker). Experience with ETL tools and technologies such as Apache Airflow, Informatica, or Talend. Strong understanding of data governance and best practices in data management. Experience with cloud platforms and services such as AWS …
closely collaborate with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions. RESPONSIBILITIES Data Pipeline Development: Design, implement, and maintain scalable data pipelines for … sources using tools such as Databricks, Python and PySpark. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies … teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions. Best Practices: Continuously learn and apply best practices in data engineering and cloud computing. QUALIFICATIONS Proven experience as …
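The extract-transform-load pattern this listing describes can be illustrated with a minimal PySpark sketch. The paths, table name, and columns below are hypothetical placeholders, not taken from the listing.

```python
# A minimal PySpark sketch of the extract-transform-load pattern described
# above. Paths, table name, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV files landed by an upstream system
raw = spark.read.option("header", True).csv("/mnt/landing/orders/")

# Transform: cast types, deduplicate, and standardise fields
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# Load: append into a lakehouse table (assumes a Delta-enabled metastore)
clean.write.format("delta").mode("append").saveAsTable("lakehouse.orders")
```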
associated roadmaps towards unlocking these in alignment with the analytics product development strategy and priorities. The role bridges strategy, operations, and governance, focusing on enabling the organization to extract value from data while maintaining compliance and quality standards. What will be your Key Responsibilities? Strategic Data Management Execute the vision, strategy, and roadmap for the assigned data domain in … Qualifications Strong expertise in data management, data integration, and data engineering best practices. Proficiency in SQL, Python, or other data-focused programming languages. Experience with big data technologies and ETL processes. Project management skills with the ability to operate in an Agile framework. Good communication skills, with a track record of aligning technical solutions to business needs. Knowledge of data governance …
Technical Business Analysis experience. A proactive awareness of industry standards, regulations, and developments. Ideally, you'll also have: Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross and multi-platform experience. Team building …
and Databricks. Proficiency in working with cloud environments and various platforms, including Azure and SQL Server; NoSQL database experience is good to have. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Experience with DataOps practices and tools, including CI/CD for data pipelines. Experience in medallion data architecture and other …
Azure Machine Learning Studio. Data Storage & Databases: SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big Data Ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI & Machine Learning: Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras …
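As an illustration of the orchestration tools named above, here is a minimal Apache Airflow DAG using the TaskFlow API, assuming Airflow 2.4+; the DAG name, schedule, and stubbed data are hypothetical.

```python
# A minimal Apache Airflow DAG using the TaskFlow API (assumes Airflow 2.4+).
# The DAG name, schedule, and data are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_etl():
    @task
    def extract() -> list[dict]:
        # In practice this would call a source API or query a database
        return [{"order_id": 1, "amount": "19.99"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Cast string amounts to floats
        return [{**r, "amount": float(r["amount"])} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # In practice this would write to a warehouse via a provider hook
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))

orders_etl()
```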
with senior stakeholders. Architect & Build Scalable Data Solutions Collaborate closely with senior product stakeholders to understand data needs and architect end-to-end ingestion pipelines. Design and build robust ETL/ELT processes and data architectures using modern tools and techniques. Lead database design, data modelling, and integration strategies to support analytics at scale. Drive Data Integration & Management Design and … of software engineering best practices - code reviews, testing frameworks, CI/CD, and code maintainability. Experience deploying applications into production environments, including packaging, monitoring, and release management. Ability to extract insights from complex and disparate data sets and communicate clearly with stakeholders. Hands-on experience with cloud platforms such as AWS, Azure, or GCP. Familiarity with traditional ETL tools (e.g. …
Senior Data Engineer We are seeking a highly skilled Data Engineer to focus on maintaining data streams and ETL pipelines within a cloud-based environment. The ideal candidate will have experience in building, monitoring, and optimizing data pipelines, ensuring data consistency, and proactively collaborating with upstream and downstream teams to enable seamless data flow across the organization. In this role … between 25%-50% of the time per month at the client's office, which is located in London. Key Responsibilities: Data Pipeline Development & Maintenance Build, maintain, and optimize scalable ETL/ELT pipelines using tools such as Dagster or similar. Ensure high data availability, reliability, and consistency through rigorous data validation and monitoring practices. Collaborate with cross-functional teams to …
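Since the listing names Dagster as an example orchestrator, here is a minimal, hedged sketch of an ETL flow expressed as Dagster software-defined assets; the asset names and stubbed data are hypothetical.

```python
# A minimal Dagster sketch of an ETL flow expressed as software-defined
# assets. Asset names and the stubbed data are hypothetical.
import dagster as dg

@dg.asset
def raw_orders() -> list[dict]:
    # Extract: pull rows from an upstream source (stubbed here)
    return [{"order_id": 1, "amount": "19.99"}]

@dg.asset
def clean_orders(raw_orders: list[dict]) -> list[dict]:
    # Transform: cast types; a further asset would load to the warehouse
    return [{**row, "amount": float(row["amount"])} for row in raw_orders]

defs = dg.Definitions(assets=[raw_orders, clean_orders])
```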
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
data and analytics needs. Design and deploy end-to-end data solutions using Microsoft Fabric, encompassing data ingestion, transformation, and visualisation workflows. Construct and refine data models, pipelines, and ETL frameworks within the Fabric ecosystem. Leverage Fabric's suite of tools to build dynamic reports, dashboards, and analytical applications. Maintain high standards of data integrity, consistency, and system performance across …
intelligence and reporting tools like Tableau, PowerBI or similar. Experience with version control systems (e.g. Git). Ability to work in an Agile environment. Experience with Microsoft SQL. Experience with ETL Tools and Data Migration. Experience with Data Analysis, Data Mapping and UML. Experience with programming languages (Python, Ruby, C++, PHP, etc.). The ability to work with large datasets across …
data modeling, DAX, report design). Experience with Azure Data Factory and/or Microsoft Fabric for pipeline development (or Python pipeline development). Understanding of data warehouse design and ETL/ELT best practices. Strong communication and stakeholder engagement skills. Customer service mindset with integrity, professionalism and confidentiality. Self-motivated, diligent, and results oriented. Willingness to learn and grow in …
quality Excellent written and verbal communication skills in English Preferred qualifications, capabilities and skills: Experience in working in a highly regulated environment/industry Understanding of data warehousing concepts and ETL processes Experience in data analysis using the Python programming language Understanding of data governance frameworks Understanding of AWS cloud technologies
on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift). Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines. Experience with cloud platforms (AWS, Azure, GCP) and infrastructure-as-code practices. Familiarity with data visualisation tools (Tableau, PowerBI, Looker) and analytics frameworks. Leadership & Communication Proven experience leading technical …
optimise end-to-end data pipelines (batch & streaming) using Azure Databricks, Spark, and Delta Lake. Implement Medallion Architecture to structure raw, enriched, and curated data layers efficiently. Build scalable ETL/ELT processes with Azure Data Factory and PySpark. Support data governance initiatives using tools like Azure Purview and Unity Catalog for metadata management, lineage, and access control. Ensure consistency … Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL, with advanced query optimisation skills. Proven experience building scalable ETL pipelines and managing data transformations. Familiarity with data quality frameworks and monitoring tools. Experience working with Git, CI/CD pipelines, and in Agile environments. Ability to write clean, maintainable …
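To illustrate the Medallion Architecture this listing mentions, here is a minimal PySpark and Delta Lake sketch promoting data from the bronze (raw) layer to the silver (enriched) layer; the paths and columns are hypothetical, and a Delta-enabled Spark session is assumed.

```python
# A minimal PySpark + Delta Lake sketch of a medallion-style promotion from
# the bronze (raw) layer to the silver (enriched) layer. Paths and columns
# are hypothetical, and a Delta-enabled Spark session is assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw events ingested as-is from the landing zone
bronze = spark.read.format("delta").load("/mnt/bronze/events")

# Silver: cleaned, typed, and deduplicated records
silver = (
    bronze.filter(F.col("event_id").isNotNull())
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .dropDuplicates(["event_id"])
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")
```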
Essential Skills & Experience): Proven Data Engineering Expertise: Demonstrable experience designing, building, and maintaining complex data pipelines in a production environment. Strong Technical Foundation: Expert-level SQL and proficiency in ETL principles. We currently use SQL Server/SSIS, but are on a transformation journey of our data platform (AWS). Cloud Proficiency: Hands-on experience with at least one major cloud platform … AWS, Azure, or GCP) and its core data services (e.g., S3, Redshift, Lambda/Functions, Glue). Data Modelling: Deep understanding of ELT/ETL patterns and data modelling techniques. CRM/Customer Data Focus: Experience working directly with data from CRM systems (e.g., Salesforce, Dynamics 365, HubSpot) and understanding customer data structures. Leadership Potential: Experience leading projects or mentoring …
Gold layers), working with modern tools such as Databricks, dbt, Azure Data Factory, and Python/SQL to support critical business analytics and AI/ML initiatives. Key Responsibilities ETL Development: Design and build robust and reusable ETL/ELT pipelines through the Medallion architecture in Databricks. Data Transformation: Create and manage data models and transformations using dbt, ensuring …
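As a sketch of how dbt transformations like these can be driven from Python (for example inside a Databricks or orchestrator job), the snippet below assumes dbt-core 1.5+ (which exposes dbtRunner) and an existing dbt project with a hypothetical model named stg_orders.

```python
# A sketch of invoking dbt from Python, assuming dbt-core >= 1.5 (which
# exposes dbtRunner) and an existing dbt project with a hypothetical model
# named stg_orders. Equivalent to `dbt run --select stg_orders` on the CLI.
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()
result: dbtRunnerResult = runner.invoke(["run", "--select", "stg_orders"])

if not result.success:
    raise RuntimeError(f"dbt run failed: {result.exception}")
```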
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
with relational SQL databases either on premises or in the cloud. Power platform experience is desirable. Experience delivering multiple solutions using key techniques such as Governance, Architecture, Data Modelling, ETL/ELT, Data Lakes, Data Warehousing, Master Data, and BI. A solid understanding of key processes in the engineering delivery cycle including Agile and DevOps, Git, APIs, Containers, Microservices and …
with data visualization tools such as Looker, ThoughtSpot, or Tableau. Possess highly proficient SQL skills for querying and manipulating large, complex datasets. Exhibit strong experience with data warehousing concepts, ETL/ELT processes, and various database technologies. Show knowledge in a relevant data engineering programming language like Python, Bash, or Django. Hold practical experience working with major cloud platforms, including …
Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based …
environments. Database Design & Optimisation: Design and optimise complex SQL queries and relational databases (e.g., Amazon Redshift, PostgreSQL, MySQL) to enable fast, efficient data retrieval and analytics. Data Transformation: Apply ETL/ELT processes to transform raw financial data into usable insights for business intelligence, reporting, and predictive analytics. Collaboration with Teams: Work closely with the platform team, data analysts, and business … flow and operation. Requirements Several years of experience in data engineering, preferably in financial services or similarly regulated industries. Strong understanding of data engineering concepts, including data modelling, ETL/ELT processes, and data warehousing. Proven experience with AWS services (e.g., S3, Redshift, Lambda, ECS, ECR, SNS, EventBridge, CloudWatch, Athena, etc.) for building and maintaining scalable data solutions in the cloud. Technical Skills (must have): Python: Proficient in Python for developing custom ETL solutions, data processing, and integration with cloud platforms. Terraform: Experience with Terraform to manage infrastructure as code, ensuring scalable and repeatable cloud environment provisioning. SQL: Advanced proficiency in SQL for querying and optimising relational databases. Version Control: Experience with GitHub for managing code, reviewing pull requests …
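A minimal sketch of the kind of custom Python ETL step on AWS this listing describes: read a raw CSV from S3, transform it, and write the curated result back for downstream use. The bucket names, keys, and columns are hypothetical.

```python
# A minimal custom Python ETL step on AWS: read a raw CSV from S3,
# transform it, and write the curated result back. Bucket names, keys,
# and columns are hypothetical.
import csv
import io

import boto3

s3 = boto3.client("s3")

# Extract: fetch the raw object from the landing bucket
obj = s3.get_object(Bucket="raw-landing-bucket", Key="trades/2024-01-01.csv")
body = obj["Body"].read().decode("utf-8")
rows = list(csv.DictReader(io.StringIO(body)))

# Transform: cast amounts and drop incomplete records
clean = [
    {**r, "amount": float(r["amount"])}
    for r in rows
    if r.get("trade_id") and r.get("amount")
]

# Load: write the curated output to the processed bucket
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["trade_id", "amount"],
                        extrasaction="ignore")
writer.writeheader()
writer.writerows(clean)
s3.put_object(
    Bucket="processed-bucket",
    Key="trades/2024-01-01.csv",
    Body=out.getvalue().encode("utf-8"),
)
```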