London, South East England, United Kingdom (Hybrid / WFH Options)
Axis Capital
our data solutions are secure, efficient, and optimized. Key Responsibilities: Design and implement data solutions using Azure services, including Azure Databricks, ADF, and Data Lake Storage. Develop and maintain ETL/ELT pipelines to process structured and unstructured data from multiple sources. Automate loads using Databricks Workflows and Jobs. Develop, test, and build CI/CD pipelines using Azure DevOps.
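For illustration, a minimal sketch of automating such a load by triggering a Databricks Workflows job through the Jobs 2.1 REST API; the host/token environment variables and the job ID are hypothetical placeholders, not details from the posting:

```python
# Hedged sketch: start a Databricks Workflows job run via the Jobs 2.1 REST API.
# DATABRICKS_HOST, DATABRICKS_TOKEN, and the job ID are placeholder assumptions.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-xxxx.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": 123456789},          # hypothetical Workflows job ID
    timeout=30,
)
resp.raise_for_status()
print("Started run:", resp.json()["run_id"])
```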
implementing data engineering best practices (e.g., source-to-target mappings, coding standards, data quality, etc.), working closely with the external party who set up the environment. Create and maintain ETL processes, data mappings & transformations to orchestrate data integrations. Ensure data integrity, quality, privacy, and security across systems, in line with client and regulatory requirements. Optimize data solutions for performance and … up monitoring and data quality exception handling. Strong data modelling experience. Experience managing and developing CI/CD pipelines. Experience with Microsoft Azure products and services, and proficiency in ETL processes. Experience of working with APIs to integrate data flows between disparate cloud systems. Strong analytical and problem-solving skills, with the ability to work independently and collaboratively. The aptitude …
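As a sketch of the API-based integration work described above; the endpoint, token handling, pagination scheme, and field mapping are all invented for illustration:

```python
# Illustrative API-to-dataframe extract with a simple source-to-target mapping.
import requests
import pandas as pd

def extract_policies(api_url: str, token: str) -> pd.DataFrame:
    rows, page = [], 1
    while True:
        resp = requests.get(
            api_url,
            headers={"Authorization": f"Bearer {token}"},
            params={"page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:                    # empty page signals the end
            break
        rows.extend(batch)
        page += 1
    # Source-to-target mapping: rename source fields to warehouse column names
    return pd.DataFrame(rows).rename(columns={"polRef": "policy_reference"})
```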
appropriate architecture design, opting for modern architectures where possible. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop, optimize, and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses. Big Data … teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions. Best Practices: Stay current with emerging technologies and best practices in data engineering, cloud architecture, and DevOps. Mentoring … and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker). Experience with ETL tools and technologies such as Apache Airflow, Informatica, or Talend. Strong understanding of data governance and best practices in data management. Experience with cloud platforms and services such as AWS …
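A minimal sketch of an automated ETL workflow in Apache Airflow, one of the tools named above; the DAG ID, schedule, and task bodies are placeholders:

```python
# Skeleton Airflow DAG wiring extract -> transform -> load; all names illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull data from the source system

def transform():
    ...  # convert it into a usable format

def load():
    ...  # write it to the warehouse / lakehouse

with DAG(
    dag_id="nightly_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t
```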
closely collaborate with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions. RESPONSIBILITIES Data Pipeline Development: Design, implement, and maintain scalable data pipelines for … sources using tools such as Databricks, Python, and PySpark. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses. Big Data Technologies … teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions. Best Practices: Continuously learn and apply best practices in data engineering and cloud computing. QUALIFICATIONS Proven experience as …
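To make the data-modelling point concrete, a hedged PySpark sketch assuming a Databricks notebook where `spark` is provided by the runtime; the schema, path, and table name are invented:

```python
# Declaring an explicit schema avoids inference costs and catches drift early.
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

claim_schema = StructType([
    StructField("claim_id", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
    StructField("opened_at", TimestampType(), nullable=True),
])

# `spark` is the session Databricks provides; the landing path is hypothetical
claims = spark.read.schema(claim_schema).json("/mnt/raw/claims/")
claims.write.format("delta").mode("append").saveAsTable("bronze.claims")
```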
Technical Business Analysis experience. A proactive awareness of industry standards, regulations, and developments. Ideally, you'll also have: Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage, or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building …
Serve as a subject matter expert in cloud data engineering, providing technical guidance and mentorship to the team. Drive the design, development, and implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g., AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). Develop and maintain data quality checks, data validation rules, and data …
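A sketch of the cloud-native pattern described, using an S3-triggered Lambda to start a Glue job; the job name, argument, and event wiring are hypothetical:

```python
# S3-event Lambda that starts one Glue job run per newly landed object.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="curate-raw-events",       # hypothetical Glue job
            Arguments={"--input_key": key},    # passed to the job script
        )
```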
Azure Machine Learning Studio. Data Storage & Databases: SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big Data Ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI & Machine Learning: Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras …
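For the pipelining tools listed, a minimal streaming-ingestion sketch using the kafka-python package; the topic, broker address, and event fields are placeholders:

```python
# Streaming-ingestion skeleton: consume JSON events from a Kafka topic.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                  # hypothetical topic
    bootstrap_servers=["broker:9092"],     # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would validate and land these events in the lake
    print(event.get("event_type"), event.get("user_id"))
```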
with senior stakeholders. Architect & Build Scalable Data Solutions: Collaborate closely with senior product stakeholders to understand data needs and architect end-to-end ingestion pipelines. Design and build robust ETL/ELT processes and data architectures using modern tools and techniques. Lead database design, data modelling, and integration strategies to support analytics at scale. Drive Data Integration & Management: Design and … of software engineering best practices - code reviews, testing frameworks, CI/CD, and code maintainability. Experience deploying applications into production environments, including packaging, monitoring, and release management. Ability to extract insights from complex and disparate data sets and communicate clearly with stakeholders. Hands-on experience with cloud platforms such as AWS, Azure, or GCP. Familiarity with traditional ETL tools (e.g. …
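As an illustration of the database design and data modelling work mentioned, a small SQLAlchemy sketch; the tables and columns are invented:

```python
# Declarative schema design: two related tables with an enforced foreign key.
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    email = Column(String(255), unique=True, nullable=False)

class Order(Base):
    __tablename__ = "orders"   # plural avoids the SQL reserved word ORDER
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customer.id"), nullable=False)
    placed_at = Column(DateTime, nullable=False)
```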
London, South East England, United Kingdom (Hybrid / WFH Options)
EXL
CI/CD pipelines. Data Modelling: Apply deep hands-on expertise in enterprise data modelling using tools like ERwin, ER/Studio, or PowerDesigner, ensuring scalability, performance, and maintainability. ETL/ELT Frameworks: Design and build robust data pipelines with Cloud Composer, Dataproc, Dataflow, Informatica, or IBM DataStage, supporting both batch and streaming data ingestion. Data Governance & Quality: Implement data …
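A minimal Apache Beam sketch of the kind of pipeline these frameworks target; it runs locally by default and on Dataflow with the right runner options, and the bucket paths are placeholders:

```python
# Skeleton Beam pipeline: read, clean, and write newline-delimited records.
import apache_beam as beam

def run():
    with beam.Pipeline() as pipeline:   # DirectRunner unless configured otherwise
        (
            pipeline
            | "Read" >> beam.io.ReadFromText("gs://raw-zone/events/*.jsonl")
            | "Strip" >> beam.Map(str.strip)
            | "DropEmpty" >> beam.Filter(bool)
            | "Write" >> beam.io.WriteToText("gs://curated-zone/events/part")
        )

if __name__ == "__main__":
    run()
```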
Senior Data Engineer. We are seeking a highly skilled Data Engineer to focus on maintaining data streams and ETL pipelines within a cloud-based environment. The ideal candidate will have experience in building, monitoring, and optimizing data pipelines, ensuring data consistency, and proactively collaborating with upstream and downstream teams to enable seamless data flow across the organization. In this role … between 25%-50% of the time per month at the client's office, which is located in London. Key Responsibilities: Data Pipeline Development & Maintenance: Build, maintain, and optimize scalable ETL/ELT pipelines using tools such as Dagster or similar. Ensure high data availability, reliability, and consistency through rigorous data validation and monitoring practices. Collaborate with cross-functional teams to …
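Since Dagster is named explicitly, a hedged sketch of two software-defined assets; the source path and column names are invented, and reading from s3:// assumes s3fs is installed:

```python
# Dagster wires dependencies from parameter names, giving lineage and monitoring.
import pandas as pd
from dagster import asset

@asset
def raw_trades() -> pd.DataFrame:
    # Extract: in practice this would read from the upstream source system
    return pd.read_csv("s3://landing/trades.csv")

@asset
def clean_trades(raw_trades: pd.DataFrame) -> pd.DataFrame:
    # Transform: enforce consistency before downstream teams consume the data
    return raw_trades.dropna(subset=["trade_id"]).drop_duplicates("trade_id")
```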
intelligence and reporting tools like Tableau, PowerBI, or similar. Experience with version control systems (e.g., Git). Ability to work in an Agile environment. Experience with Microsoft SQL. Experience with ETL tools and data migration. Experience with data analysis, data mapping, and UML. Experience with programming languages (Python, Ruby, C++, PHP, etc.). The ability to work with large datasets across …
data modeling, DAX, report design). Experience with Azure Data Factory and/or Microsoft Fabric for pipeline development (or Python pipeline development). Understanding of data warehouse design and ETL/ELT best practices. Strong communication and stakeholder engagement skills. Customer service mindset with integrity, professionalism, and confidentiality. Self-motivated, diligent, and results-oriented. Willingness to learn and grow in …
quality. Excellent written and verbal communication skills in English. Preferred qualifications, capabilities, and skills: Experience working in a highly regulated environment/industry. Understanding of data warehousing concepts and ETL processes. Experience in data analysis using the Python programming language. Understanding of data governance frameworks. Understanding of AWS cloud technologies.
on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift). Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines. Experience with cloud platforms (AWS, Azure, GCP) and infrastructure-as-code practices. Familiarity with data visualisation tools (Tableau, PowerBI, Looker) and analytics frameworks. Leadership & Communication: Proven experience leading technical …
optimise end-to-end data pipelines (batch & streaming) using Azure Databricks, Spark, and Delta Lake. Implement Medallion Architecture to structure raw, enriched, and curated data layers efficiently. Build scalable ETL/ELT processes with Azure Data Factory and PySpark. Support data governance initiatives using tools like Azure Purview and Unity Catalog for metadata management, lineage, and access control. Ensure consistency … Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL, with advanced query optimisation skills. Proven experience building scalable ETL pipelines and managing data transformations. Familiarity with data quality frameworks and monitoring tools. Experience working with Git, CI/CD pipelines, and in Agile environments. Ability to write clean, maintainable …
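A sketch of one Medallion hop (bronze to silver) in PySpark on Databricks, where `spark` is provided by the runtime; the table and column names are placeholders:

```python
# Bronze -> silver: deduplicate, type, and quality-gate raw sensor readings.
from pyspark.sql import functions as F

bronze = spark.read.table("bronze.sensor_readings")

silver = (
    bronze
    .dropDuplicates(["reading_id"])
    .withColumn("reading_ts", F.to_timestamp("reading_ts"))
    .filter(F.col("device_id").isNotNull())   # simple data quality gate
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.sensor_readings")
```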
Essential Skills & Experience): Proven Data Engineering Expertise: Demonstrable experience designing, building, and maintaining complex data pipelines in a production environment. Strong Technical Foundation: Expert-level SQL and proficiency in ETL principles. We currently use SQL Server/SSIS, but are on a transformation journey of our data platform (AWS). Cloud Proficiency: Hands-on experience with at least one major cloud platform … AWS, Azure, or GCP) and its core data services (e.g., S3, Redshift, Lambda/Functions, Glue). Data Modelling: Deep understanding of ELT/ETL patterns and data modelling techniques. CRM/Customer Data Focus: Experience working directly with data from CRM systems (e.g., Salesforce, Dynamics 365, HubSpot) and understanding customer data structures. Leadership Potential: Experience leading projects or mentoring …
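As one AWS-flavoured sketch of the S3/Redshift stack mentioned, loading landed files into Redshift through the Data API; the workgroup, database, schema, bucket, and IAM role are all hypothetical:

```python
# Submit a COPY from S3 into Redshift via the Redshift Data API.
import boto3

client = boto3.client("redshift-data")

client.execute_statement(
    WorkgroupName="analytics",   # or ClusterIdentifier for a provisioned cluster
    Database="warehouse",
    Sql="""
        COPY crm.contacts
        FROM 's3://landing-bucket/crm/contacts/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS PARQUET;
    """,
)
```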
Gold layers), working with modern tools such as Databricks, dbt, Azure Data Factory, and Python/SQL to support critical business analytics and AI/ML initiatives. Key Responsibilities: ETL Development: Design and build robust and reusable ETL/ELT pipelines through the Medallion architecture in Databricks. Data Transformation: Create and manage data models and transformations using dbt, ensuring …
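A minimal sketch of driving the dbt transformation step from Python by shelling out to the dbt CLI; the project path and model selector are placeholders:

```python
# Run a selected slice of dbt models as one pipeline step.
import subprocess

def run_dbt(select: str = "silver+") -> None:
    # Equivalent to `dbt run --select silver+` inside the project directory
    subprocess.run(
        ["dbt", "run", "--select", select],
        cwd="/opt/pipelines/analytics_dbt",   # hypothetical dbt project path
        check=True,                           # fail the pipeline if dbt fails
    )

if __name__ == "__main__":
    run_dbt()
```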
Wandsworth, Greater London, UK (Hybrid / WFH Options)
Datatech
following product/solution development lifecycles using frameworks/methodologies such as Agile, SAFe, and DevOps, and use of associated tooling (e.g., version control, task tracking). Demonstrable experience writing ETL scripts and code to ensure ETL processes perform optimally. Experience in other programming languages for data manipulation (e.g., Python, Scala). Extensive experience of data engineering and the …
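To ground the point about ETL scripts performing optimally, a hedged example of keeping memory usage flat with chunked reads; the connection string, file, and table names are invented:

```python
# Stream a large extract in chunks rather than loading it into memory at once.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@host/dwh")   # placeholder DSN

for chunk in pd.read_csv("/data/exports/transactions.csv", chunksize=100_000):
    chunk = chunk.dropna(subset=["txn_id"])                 # cheap row-level cleanup
    chunk.to_sql("stg_transactions", engine, if_exists="append", index=False)
```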