Senior Data Engineer - (Azure/Databricks). Location: London - Scalpel. Full time. Posted 15 days ago. Requisition ID: REQ05851. This is your opportunity to join AXIS Capital - a trusted global provider of specialty lines insurance and reinsurance. We stand apart for our outstanding client service, intelligent … family or parental status, or any other characteristic protected by law. Accommodation is available upon request for candidates taking part in the selection process. Senior Data Engineer (Azure/Databricks) Job Family Grouping: Chief Underwriting Officer Job Family: Data & Analytics Location: London How does this role contribute to our collective success? The Data & Analytics department transforms raw data into actionable … MDW) technologies, big data, and Lakehouse architectures to ensure our data solutions are secure, efficient, and optimized. Key Responsibilities: Design and implement data solutions using Azure services, including Azure Databricks, ADF, and Data Lake Storage. Develop and maintain ETL/ELT pipelines to process structured and unstructured data from multiple sources. Automate loads using Databricks workflows and Jobs. Develop, test …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
new, innovative and sustainable products and solutions, both for today and for a new era of building. To support our progress, we are currently recruiting for a Data Engineer (Databricks & BI) to come and join our team at Ibstock Head Office, LE67 6HS. (Applicants must have the Right to Work in the UK - we are unable to provide sponsorship for this … role) Job Purpose: The Data Engineer/BI Developer will play a critical role in developing and refining our modern data lakehouse platform, with a primary focus on Databricks for data engineering, transformation, and analytics. The role will involve designing, developing, and maintaining scalable data solutions to support business intelligence and reporting needs. This platform integrates Databricks, Power BI, and … JDE systems to provide near real-time insights for decision-making. Key Accountabilities: Ensure the continued design, development, and expansion of a near real-time data platform integrating AWS, Databricks and on-prem JDE systems. Develop ETL processes to integrate and transform data from multiple sources into a centralised data platform. Develop and optimise complex queries and data pipelines. Optimise …
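The ETL responsibilities described above follow the usual extract-transform-load shape: pull raw records from source systems, normalise them into a consistent schema, and write them to a central store. A minimal pure-Python sketch of that shape (the records, field names, and target list are invented for illustration; the role itself would implement this on Databricks and the platforms the ad names):

```python
# Minimal ETL sketch: extract raw records, transform them into a
# consistent shape, and load them into a centralised target store.

def extract():
    # Stand-in for pulling rows from a source system (e.g. an ERP).
    return [
        {"order_id": "1001", "qty": "5", "site": "leicester "},
        {"order_id": "1002", "qty": "3", "site": "COALVILLE"},
    ]

def transform(rows):
    # Normalise types and standardise text fields.
    return [
        {
            "order_id": int(r["order_id"]),
            "qty": int(r["qty"]),
            "site": r["site"].strip().title(),
        }
        for r in rows
    ]

def load(rows, target):
    # Stand-in for writing to the centralised data platform.
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0]["site"])  # → 2 Leicester
```

The same three-stage separation is what tools like Azure Data Factory and Databricks jobs orchestrate at scale; keeping each stage a pure function makes the pipeline testable in isolation.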
engineers and scientists working across Africa, Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault and data warehouses. Our data … RESPONSIBILITIES Data Pipeline Development: Lead the design, implementation, and maintenance of scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources using tools such as Databricks, Python and PySpark. Data Architecture: Architect scalable and efficient data solutions using the appropriate architecture design, opting for modern architectures where possible. Data Modeling: Design and optimize data models and … or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker). Experience …
Techyard is supporting a growing Databricks-partnered consultancy in securing a Databricks Champion Solution Architect to lead the design and delivery of advanced data and AI solutions on the Databricks Lakehouse Platform. This strategic, high-impact role is ideal for a seasoned professional who can operate as both a hands-on architect and a trusted advisor - bridging business vision with … the consultancy’s data engineering capabilities as they scale their team and client base. Key Responsibilities: Architect and implement end-to-end, scalable data and AI solutions using the Databricks Lakehouse (Delta Lake, Unity Catalog, MLflow). Design and lead the development of modular, high-performance data pipelines using Apache Spark and PySpark. Champion the adoption of Lakehouse architecture (bronze … Collaborate with stakeholders, analysts, and data scientists to translate business needs into clean, production-ready datasets. Promote CI/CD, DevOps, and data reliability engineering (DRE) best practices across Databricks environments. Integrate with cloud-native services and orchestrate workflows using tools such as dbt, Airflow, and Databricks Workflows. Drive performance tuning, cost optimisation, and monitoring across data workloads. Mentor engineering …
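The truncated "(bronze …" above refers to the medallion pattern conventionally used on the Databricks Lakehouse: bronze holds raw ingested records, silver holds cleaned and conformed ones, and gold holds business-level aggregates. A stdlib sketch of the idea (on Databricks each layer would be a Delta table written by Spark, not a Python list; the fields and figures here are invented):

```python
# Medallion pattern sketch: each layer progressively refines the previous one.

bronze = [  # raw, as-ingested (duplicates and bad records allowed)
    {"id": 1, "amount": "10.0", "region": "UK"},
    {"id": 1, "amount": "10.0", "region": "UK"},   # duplicate
    {"id": 2, "amount": None,   "region": "EU"},   # fails quality rule
    {"id": 3, "amount": "7.5",  "region": "UK"},
]

def to_silver(rows):
    # Silver: deduplicate on id, drop rows failing basic quality rules,
    # and conform types.
    seen, out = set(), []
    for r in rows:
        if r["id"] in seen or r["amount"] is None:
            continue
        seen.add(r["id"])
        out.append({"id": r["id"], "amount": float(r["amount"]), "region": r["region"]})
    return out

def to_gold(rows):
    # Gold: business-level aggregate, here revenue per region.
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # → {'UK': 17.5}
```

The value of the layering is that each refinement step is re-runnable from the layer below it, so a fixed quality rule or a new aggregate never requires re-ingesting the raw data.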
engineers and scientists working across Africa, Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, DBT, Docker, Python, SQL, and PySpark to name a few. We work across a variety of data architectures such as Data Mesh, lakehouse, data vault and data warehouses. Our data … data solutions. RESPONSIBILITIES Data Pipeline Development: Design, implement, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources using tools such as Databricks, Python and PySpark. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data. ETL Processes: Develop and automate ETL workflows … or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink. Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines and version control systems like Git. Knowledge of ETL tools and …
City of London, London, United Kingdom Hybrid / WFH Options
Noir
Data Engineer - Leading Energy Company - London (Tech Stack: Data Engineer, Databricks, Python, PySpark, Power BI, AWS QuickSight, AWS, TSQL, ETL, Agile Methodologies) Company Overview: Join a dynamic team, a leading player in the energy sector, committed to innovation and sustainable solutions. Our client is seeking a talented Data Engineer to help build and optimise our data infrastructure, enabling them to …
SAS Viya). An ability to write complex SQL queries. Project experience using one or more of the following technologies: Tableau, Python, Power BI, Cloud (Azure, AWS, GCP, Snowflake, Databricks). Project lifecycle experience, having played a leading role in the delivery of end-to-end projects, as well as a familiarity with different development methodologies …
We work with some of the UK's biggest companies and government departments to provide a pragmatic approach to technology, delivering bespoke software solutions and expert advice. Our clients are increasingly looking to us to help them make the best …
This is an exciting contract opportunity for an SC Cleared Azure Data Engineer with a strong focus on Databricks to join an experienced team in a new customer engagement working at the forefront of data analytics and AI. This role offers the chance to take a key role in the design and delivery of advanced Databricks solutions within the Azure … ecosystem. Responsibilities: Design, build, and optimise end-to-end data pipelines using Azure Databricks, including Delta Live Tables. Collaborate with stakeholders to define technical requirements and propose Databricks-based solutions. Drive best practices for data engineering. Help clients realise the potential of data science, machine learning, and scaled data processing within the Azure/Databricks ecosystem. Mentor junior engineers and support … Take ownership of the delivery of core solution components. Support with planning, requirements refinement, and work estimation. Skills & Experience: Proven experience designing and implementing data solutions in Azure using Databricks as a core platform. Hands-on expertise in Delta Lake, Delta Live Tables and Databricks Workflows. Strong coding skills in Python and SQL, with experience in developing modular, reusable code …
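Delta Live Tables, named above, lets a pipeline declare row-level quality rules ("expectations") that drop or quarantine failing rows and report per-rule metrics; in the real Databricks API these are attached with decorators such as `expect_or_drop`. A stdlib sketch of the underlying idea only (the rule names and records are invented, and this is not the DLT API itself):

```python
# Expectation-style quality rules: each rule names a predicate; rows
# failing any rule are dropped and counted per rule, mimicking the
# "expect or drop" behaviour of Delta Live Tables expectations.

expectations = {
    "valid_id": lambda r: r.get("id") is not None,
    "positive_amount": lambda r: (r.get("amount") or 0) > 0,
}

def apply_expectations(rows):
    kept, metrics = [], {name: 0 for name in expectations}
    for r in rows:
        failed = [name for name, rule in expectations.items() if not rule(r)]
        for name in failed:
            metrics[name] += 1
        if not failed:
            kept.append(r)
    return kept, metrics

rows = [
    {"id": 1, "amount": 9.99},
    {"id": None, "amount": 5.0},
    {"id": 3, "amount": -2.0},
]
kept, metrics = apply_expectations(rows)
print(len(kept), metrics)  # → 1 {'valid_id': 1, 'positive_amount': 1}
```

Surfacing the per-rule counts, rather than silently discarding rows, is what makes this pattern auditable: the metrics show exactly which rule rejected how many records on each run.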
City of London, London, United Kingdom Hybrid / WFH Options
Osmii
Databricks Architect (Contract) - Greenfield Data Platform Location: Hybrid working (London) Duration: 12-month initial contract Are you a visionary Databricks Architect with a passion for building cutting-edge data platforms from the ground up? Do you thrive on shaping strategy and driving technical excellence in a greenfield environment? Our client is embarking on a pivotal journey to establish a brand … future data-driven initiatives, from advanced analytics to AI/ML. We are looking for a hands-on architect who can translate business needs into robust, scalable, and secure Databricks solutions. The Role: As our Databricks Architect, you will be instrumental in defining and delivering our new data strategy and architecture. This is a greenfield project, meaning you'll have … the exciting challenge of building the entire Databricks Lakehouse Platform from scratch. You will provide critical technical leadership, guidance, and hands-on expertise to ensure the successful establishment of a scalable, high-performance, and future-proof data environment. Phase 1: Strategic Vision & Blueprint Data Strategy & Roadmap: Collaborate with business stakeholders and leadership to define the overarching data vision, strategy, and …
Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer to join our team and contribute to the development and maintenance of our cutting-edge Azure Databricks platform for economic data. This platform is critical for our Monetary Analysis, Forecasting, and Modelling activities. The Senior Data Engineer will be responsible for building and optimising data pipelines, implementing … Development & Optimisation: Design, develop, and maintain robust and scalable data pipelines for ingesting, transforming, and loading data from various sources (e.g., APIs, databases, financial data providers) into the Azure Databricks platform. Optimise data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark … or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services. Implement best practices for Databricks development and deployment. Optimise Databricks workloads for performance and …
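Pipelines that ingest from APIs, databases, and financial data providers, as described above, are usually optimised by loading incrementally rather than re-reading every source row: a high-watermark (e.g. a last-modified timestamp) records how far previous runs got. A stdlib sketch under invented field names (in the platform described, the watermark would typically live in a Delta table or pipeline state store rather than a dict):

```python
# High-watermark incremental load: only rows newer than the stored
# watermark are ingested, and the watermark advances after each run.

def incremental_load(source_rows, state):
    wm = state.get("watermark", 0)
    new_rows = [r for r in source_rows if r["updated_at"] > wm]
    if new_rows:
        state["watermark"] = max(r["updated_at"] for r in new_rows)
    return new_rows

source = [
    {"series": "CPI", "updated_at": 100},
    {"series": "GDP", "updated_at": 205},
]
state = {}

first = incremental_load(source, state)    # initial run: full load
source.append({"series": "M4", "updated_at": 310})
second = incremental_load(source, state)   # later run: only the new row
print(len(first), len(second), state["watermark"])  # → 2 1 310
```

The cost-effectiveness the ad asks for largely comes from this pattern: each run touches only the delta since the last watermark, so compute scales with new data rather than total history.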
design. Experience architecting and building data applications using Azure, specifically a Data Warehouse and/or Data Lake. Technologies: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks and Power BI. Experience with creating low-level designs for data platform implementations. ETL pipeline development for the integration with data sources and data transformations including the creation of supplementary … with APIs and integrating them into data pipelines. Strong programming skills in Python. Experience of data wrangling such as cleansing, quality enforcement and curation e.g. using Azure Synapse notebooks, Databricks, etc. Experience of data modelling to describe the data landscape, entities and relationships. Experience with data migration from legacy systems to the cloud. Experience with Infrastructure as Code (IaC), particularly …
relevant technologies to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJs. Support the …
scenarios that can benefit from AI solutions (Machine Learning and Gen AI). Explore and analyse data from various sources and formats using tools such as Microsoft Fabric, Azure Databricks, Azure Synapse Analytics, and Azure Machine Learning. Implement data pipelines and workflows to automate and operationalize machine learning solutions using tools such as Azure ML Pipelines and Azure DevOps. Run experiments … workflows and deploying them at scale using Azure services. Familiarity with data integration tools like Azure Data Factory and data platform solutions such as Microsoft Fabric and/or Databricks. Excellent communication, collaboration, stakeholder management, and problem-solving skills. Familiarity with the Microsoft Copilot stack. Microsoft Certified: Azure Data Scientist Associate or AI Engineer certification is a plus. Experience within …
integrating Oracle data sources within Azure environments. Excellent problem-solving and communication skills. Preferred Qualifications: Experience with additional Azure services like Azure Data Lake, Azure SQL Database, or Azure Databricks. Familiarity with Infrastructure-as-Code (IaC) practices. Strong knowledge of data governance and security best practices in the cloud. Previous experience in a DevOps environment with CI/CD …
learning and Data science applications. Ability to use a wide variety of open-source technologies. Knowledge and experience using at least one Data Platform Technology such as Quantexa, Palantir and Databricks. Knowledge of test automation frameworks and ability to automate testing within the pipeline. To discuss this or wider Technology roles with our recruitment team, all you need to do is …
transformation, and we’re looking for a talented Data Engineer to join our growing data team. In this role, you’ll be instrumental in building and deploying modern Azure Databricks-based data solutions, enabling the business to make faster, data-driven decisions. You’ll work hands-on with Azure Databricks, Azure Data Factory, Delta Lake, and Power BI to design … data at Zodiac Maritime while working with cutting-edge cloud technologies. Key responsibilities and primary deliverables: Design, develop, and optimize end-to-end data pipelines (batch & streaming) using Azure Databricks, Spark, and Delta Lake. Implement Medallion Architecture to structure raw, enriched, and curated data layers efficiently. Build scalable ETL/ELT processes with Azure Data Factory and PySpark. Work with … streaming with Kafka/Event Hubs, Knowledge Graphs). Advocate for best practices in data engineering across the organization. Skills profile: Relevant experience & education: Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL (advanced query optimization). Experience building scalable ETL pipelines …
and collaborate with others passionate about solving business problems. Key responsibilities: Data Platform Design and Architecture: Design, develop, and maintain a high-performing, secure, and scalable data platform, leveraging Databricks Corporate Lakehouse and Medallion Architectures. Utilise our metadata-driven data platform framework combined with advanced cluster management techniques to create and optimise scalable, robust, and efficient data solutions. Implement comprehensive … multiple organisational SQL databases and SaaS applications using end-to-end dependency-based data pipelines, to establish an enterprise source of truth. Create ETL and ELT processes using Azure Databricks, ensuring audit-ready financial data pipelines and secure data exchange with Databricks Delta Sharing and SQL Warehouse endpoints. Governance and Compliance: Ensure compliance with information security standards in our highly regulated financial landscape by implementing Databricks Unity Catalog for governance, data quality monitoring, and ADLS Gen2 encryption for audit compliance. Development and Process Improvement: Evaluate requirements, create technical design documentation, and work within Agile methodologies to deploy and optimise data workflows, adhering to data platform policies and standards. Collaboration and Knowledge Sharing: Collaborate with stakeholders to develop data solutions, maintain …
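A "metadata-driven data platform framework", as referenced above, drives ingestion from configuration rather than per-table code: one generic runner reads a declarative table list with per-source settings, so adding a source means adding a config row, not writing a new pipeline. A minimal stdlib sketch (the config keys and source names are invented; in the platform described, this metadata would parameterise Databricks jobs and dependency-based pipelines):

```python
# Metadata-driven ingestion sketch: one generic runner, many sources,
# all behaviour driven by a declarative config table.

pipeline_config = [
    {"source": "sql.finance.ledger", "mode": "incremental", "enabled": True},
    {"source": "saas.crm.accounts",  "mode": "full",        "enabled": True},
    {"source": "sql.hr.legacy",      "mode": "full",        "enabled": False},
]

def run_pipelines(config, runner):
    # Execute only enabled entries; dependency ordering, retries, and
    # audit logging would hang off the same metadata in a real framework.
    results = {}
    for entry in config:
        if not entry["enabled"]:
            continue
        results[entry["source"]] = runner(entry)
    return results

def fake_runner(entry):
    # Stand-in for submitting a Databricks job for this source.
    return f"loaded ({entry['mode']})"

print(run_pipelines(pipeline_config, fake_runner))
```

Because the runner is generic, the same code path serves every source, and disabling or re-pointing a feed is a metadata change that can be reviewed and audited like any other data artefact.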