London, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
greenfield MLOps pipelines that handle very large datasets. You will be responsible for building out a greenfield, standardised framework for capital markets. The core platform is built on the Azure Databricks Lakehouse, consolidating data from various front- and middle-office systems to support BI, MI, and advanced AI/ML analytics. As a lead, you will shape the MLOps framework and … across various data sources (orders, quotes, trades, risk, etc.). 2+ years of experience in MLOps and at least 3 years in AI/ML engineering. Knowledge of Azure Databricks and associated services. Proficiency with ML frameworks and libraries in Python. Proven experience deploying and maintaining LLM services and solutions. Expertise in Azure DevOps and GitHub Actions. Familiarity with Databricks … CLI and Databricks Job Bundle. Strong programming skills in Python and SQL; familiarity with Scala is a plus. Solid understanding of AI/ML algorithms, model training, evaluation (including hyperparameter tuning), deployment, monitoring, and governance. Experience in handling large datasets and performing data preparation and integration. Experience with Agile methodologies and SDLC practices. Strong problem-solving, analytical, and communication skills.
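For context on the training-to-registry loop such a role typically owns, here is a minimal, hedged sketch using MLflow (the registry name and synthetic data are illustrative assumptions, not details from the posting; it assumes an MLflow tracking and registry backend such as the one built into Databricks):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for order/trade features; a real pipeline would
# read curated Delta tables instead.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # "capmkts_model" is an illustrative registry name, not from the posting.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="capmkts_model")
```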
detail, the ability to manage multiple projects, and a collaborative approach to working with regional and group teams. Key Responsibilities: Develop and maintain scalable data solutions on Azure (ADF, Databricks, Synapse, etc.) Maintain and enhance the integrity of the Data Lakehouse in partnership with regional and group data teams. Partner with analytical and reporting teams to ensure data is presented … evolution Support delivery of data models, APIs, ML integrations, and reporting tools Key Skills: Hands-on experience designing and delivering solutions using Azure services, including Azure Data Factory, Azure Databricks, Azure Synapse, Azure Storage, and Azure DevOps. Proven track record in data engineering and supporting the business to gain true insight from data. Experience in data integration and modelling, including ELT
London, England, United Kingdom Hybrid / WFH Options
Artefact
engineering and a proven track record of leading data projects in a fast-paced environment. Key Responsibilities Design, build, and maintain scalable and robust data pipelines using SQL, Python, Databricks, Snowflake, Azure Data Factory, AWS Glue, Apache Airflow and PySpark. Lead the integration of complex data systems and ensure consistency and accuracy of data across multiple platforms. Implement continuous integration … engineering with a strong technical proficiency in SQL, Python, and big data technologies. Extensive experience with cloud services such as Azure Data Factory and AWS Glue. Demonstrated experience with Databricks and Snowflake. Solid understanding of CI/CD principles and DevOps practices. Proven leadership skills and experience managing data engineering teams. Strong project management skills and the ability to lead … Strong communication and interpersonal skills. Excellent understanding of data architecture, including data mesh, data lake and data warehouse patterns. Preferred Qualifications: Certifications in Azure, AWS, or similar technologies. Certifications in Databricks, Snowflake, or similar technologies. Experience leading large-scale data engineering projects. Working Conditions This position may require occasional travel. Hybrid work arrangement: two days per week working from
Python (PySpark). Ingest, transform, and curate data from multiple sources into Azure Data Lake and Delta Lake formats. Build and optimize datasets for performance and reliability in Azure Databricks. Collaborate with analysts and business stakeholders to translate data requirements into robust technical solutions. Implement and maintain ETL/ELT pipelines using Azure Data Factory or Synapse Pipelines. … Monitor and troubleshoot production jobs and processes. Preferred Skills & Experience: Strong proficiency in SQL for data transformation and performance tuning. Solid experience with Python, ideally using PySpark in Azure Databricks. Hands-on experience with Azure Data Lake Storage Gen2. Understanding of data warehouse concepts, dimensional modelling, and data architecture. Experience working with Delta Lake and large-scale
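As a hedged illustration of the ingest-and-curate pattern this listing describes (the storage paths and column names are assumptions for the sketch; it assumes a Spark session with the Delta Lake extensions available, as on Databricks):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-curate").getOrCreate()

# Hypothetical raw landing zone in ADLS Gen2; path is illustrative.
raw = spark.read.option("header", True).csv(
    "abfss://raw@account.dfs.core.windows.net/orders/")

curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))   # enforce types
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])                          # basic curation
)

# Write as Delta, partitioned for downstream query performance.
(curated.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("abfss://curated@account.dfs.core.windows.net/orders/"))
```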
design. Experience architecting and building data applications using Azure, specifically a Data Warehouse and/or Data Lake. Technologies: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks and Power BI. Experience with creating low-level designs for data platform implementations. ETL pipeline development for the integration with data sources and data transformations, including the creation of supplementary … with APIs and integrating them into data pipelines. Strong programming skills in Python. Experience of data wrangling such as cleansing, quality enforcement and curation, e.g. using Azure Synapse notebooks, Databricks, etc. Experience of data modelling to describe the data landscape, entities and relationships. Experience with data migration from legacy systems to the cloud. Experience with Infrastructure as Code (IaC), particularly
schema design. Experience architecting and building data applications using Azure, specifically Data Warehouse and/or Data Lake. Technologies: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks, and Power BI. Experience creating low-level designs for data platform implementations. ETL pipeline development for data source integration and transformations, including documentation. Proficiency working with APIs and integrating them … into data pipelines. Strong programming skills in Python. Experience with data wrangling such as cleansing, quality enforcement, and curation (e.g., Azure Synapse notebooks, Databricks). Data modeling experience to describe data landscape, entities, and relationships. Experience migrating data from legacy systems to the cloud. Experience with Infrastructure as Code (IaC), particularly Terraform. Proficiency in developing Power BI dashboards. Strong focus
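To make the data-wrangling requirement in these two listings concrete, a minimal PySpark sketch of cleansing, type enforcement, and curation (the column names and rules are illustrative assumptions):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("wrangle").getOrCreate()

# Illustrative in-memory frame standing in for a raw source table.
raw = spark.createDataFrame(
    [(" A001 ", "2024-01-05", "17.2"),
     ("A002", "2024-01-06", "42.5"),
     ("A002", "2024-01-06", "42.5")],
    ["customer_id", "created", "spend"],
)

clean = (
    raw.withColumn("customer_id", F.trim("customer_id"))   # cleanse whitespace
       .withColumn("created", F.to_date("created"))        # enforce types
       .withColumn("spend", F.col("spend").cast("double"))
       .fillna({"spend": 0.0})                             # quality rule: no null spend
       .dropDuplicates(["customer_id", "created"])         # curation: dedupe
)
clean.show()
```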
and interest in at least four others: SQL Python Power BI/Analysis Services/DAX Data Modelling/Data Warehouse Theory Azure Fundamentals Additional desirable skills include Azure Databricks, Synapse Analytics, Data Factory, DevOps, MSBI stack, PowerShell, Azure Functions, PowerApps, Data Science, and Azure AI services. Certifications such as Databricks Certified Associate/Professional and Microsoft Azure Certifications are
more than 90 million passengers this year, we employ over 10,000 people. It's big-scale stuff and we're still growing. Job Purpose With a big investment into Databricks, and with a large amount of interesting data, this is the chance for you to come and be part of an exciting transformation in the way we store, analyse and … solutions. Job Accountabilities Develop robust, scalable data pipelines to serve the easyJet analyst and data science community. Highly competent hands-on experience with relevant Data Engineering technologies, such as Databricks, Spark, Spark API, Python, SQL Server, Scala. Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning models and algorithms aimed at addressing … workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam) Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality and/or data lineage frameworks like Great Expectations, dbt data quality, OpenLineage or Marquez
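The data-quality frameworks named above each have their own APIs; as a neutral, hedged sketch of the kind of expectation they formalise, here is a plain PySpark check (the table and column names are invented for illustration):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Stand-in for a curated flights table; schema is illustrative.
flights = spark.createDataFrame(
    [("EZY123", "LGW", 156), ("EZY456", "LTN", 180)],
    ["flight_no", "origin", "passengers"],
)

# Expectations of the kind Great Expectations / dbt tests encode:
total = flights.count()
null_passengers = flights.filter(F.col("passengers").isNull()).count()
dupes = total - flights.dropDuplicates(["flight_no"]).count()

assert dupes == 0, f"{dupes} duplicate flight_no values"
assert null_passengers / total < 0.05, "too many null passenger counts"
print("data quality checks passed")
```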
Manchester, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
forward-thinking organization using data to drive innovation and business performance. They’re expanding their team and are looking for a talented Data Engineer with experience in Azure and Databricks to join the team. Salary and Benefits £55,000 – £65,000 salary depending on experience 10% performance-related bonus Hybrid working model – 2 days in the Greater Manchester office Supportive … do I need to apply for the role: Solid hands-on experience with Azure data tools, including Data Factory, Data Lake, Synapse Analytics, and Azure SQL. Strong proficiency with Databricks, including Spark, Delta Lake, and notebooks. Skilled in Python and SQL for data transformation and processing. Experience with Git and modern CI/CD workflows. Strong analytical mindset and effective
required data migrations from on-premises or 3rd-party hosted databases/repositories Build and support data pipelines using ETL tools such as MS Azure Data Factory and Databricks Design and manage a standard access method to both cloud and on-premises data sources for use in data visualisation and reporting (predominantly using Microsoft Power BI) Own and build … and SSRS Understanding of local government services Your knowledge and certifications: 2+ years working with Azure data engineering tools, e.g.: Azure Data Factory Azure Synapse Azure SQL Azure Databricks Microsoft Fabric Azure Data Lake Exposure to other data engineering and storage tools: Snowflake AWS tools – Kinesis/Glue/Redshift Google tools – BigQuery/Looker Experience working with open
data quality solutions. The ideal candidate should have strong expertise in ETL framework testing (preferably Talend or DataStage), BI report testing (preferably Power BI, Cognos), cloud technologies (preferably Azure, Databricks), SQL/PLSQL coding, and Unix/Python scripting. Key Responsibilities Lead and mentor a team of test engineers, assisting them with technical challenges and guiding them on best testing … frameworks. Understanding of DataOps & TestOps concepts for continuous data quality testing and automation. Experience validating unstructured data formats, including XML, JSON, Parquet. Knowledge of cloud data platforms like Azure, Databricks for data processing and analytics. In addition to our open-door policy and supportive work environment, we also strive to provide a competitive salary and benefits package. TJX considers all
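As a hedged sketch of the file-format validation this role describes, a minimal source-to-target reconciliation in pandas (the paths and expected schema are placeholders, not details from the posting):

```python
import pandas as pd

# Placeholder paths for a source extract and its curated Parquet target.
source = pd.read_json("source_extract.json", lines=True)
target = pd.read_parquet("curated/orders.parquet")

# Reconciliation checks of the kind an ETL test suite automates.
assert len(source) == len(target), "row counts diverge between source and target"
assert set(target.columns) >= {"order_id", "amount", "booked_at"}, "missing expected columns"
assert target["order_id"].is_unique, "duplicate keys in target"
print("basic ETL reconciliation checks passed")
```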
Lutterworth, England, United Kingdom Hybrid / WFH Options
PharmaLex
database. Collaborate with Data Analysts and Scientists to optimise data quality, reliability, security, and automation. Skills & Responsibilities: Your core responsibility will be using the NHS Secure Data Environment, which utilises Databricks, to design and extract regular datasets. Configure and troubleshoot Microsoft Azure, manage data ingestion using LogicApps and Data Factory. Develop ETL scripts using MS SQL, Python, handle web scraping, APIs
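As a hedged sketch of the API-ingestion side of this work (the endpoint, table, and connection string are placeholders, not real NHS resources):

```python
import requests
import pyodbc

# Placeholder endpoint and connection string -- substitute real, authorised resources.
API_URL = "https://api.example.org/v1/measurements"
CONN_STR = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=...;DATABASE=...;Trusted_Connection=yes"

rows = requests.get(API_URL, timeout=30).json()  # expect a list of dicts

with pyodbc.connect(CONN_STR) as conn:
    cur = conn.cursor()
    cur.executemany(
        "INSERT INTO staging.measurements (site, taken_at, value) VALUES (?, ?, ?)",
        [(r["site"], r["taken_at"], r["value"]) for r in rows],
    )
    conn.commit()
```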
ll play a key role in shaping the data strategy, enhancing platform capabilities, and supporting business intelligence initiatives. Key Responsibilities Design and develop Azure data pipelines using Data Factory, Databricks, and related services. Implement and optimize ETL processes for performance, reliability, and cost-efficiency. Build scalable data models and support analytics and reporting needs. Design and maintain Azure-based data
West Midlands, England, United Kingdom Hybrid / WFH Options
Amtis - Digital, Technology, Transformation
ll play a key role in shaping the data strategy, enhancing platform capabilities, and supporting business intelligence initiatives. Key Responsibilities Design and develop Azure data pipelines using Data Factory, Databricks, and related services. Implement and optimize ETL processes for performance, reliability, and cost-efficiency. Build scalable data models and support analytics and reporting needs. Design and maintain Azure-based data
MS or Ph.D. in relevant fields Understanding of marketing ecosystem and measurement frameworks Experience with data pipelines from sources like Redshift, SQL Server, Salesforce, Adobe Analytics Experience with AWS, Citrix, Databricks, Airflow, PySpark, web scraping, A/B testing, MLflow, Dash, FastAPI, NLP, Computer Vision, GenAI, feature engineering Key attributes Attention to detail, curiosity, proactivity Strong communication skills About RAPP We
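Since the role lists A/B testing, a minimal hedged sketch of a two-proportion significance test (the conversion counts are invented for illustration):

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented results: conversions and sample sizes for variants A and B.
conversions = [412, 478]
samples = [10_000, 10_000]

z_stat, p_value = proportions_ztest(conversions, samples)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference between variants is statistically significant.")
```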
London, England, United Kingdom Hybrid / WFH Options
Noir
the Microsoft community for 15 years; helping end-user clients and partners engage the best permanent and contract Microsoft... Data Engineer - Leading Fashion Company - London (Tech Stack: Data Engineer, Databricks, Python, Power BI, AWS QuickSight, AWS, TSQL, ETL, Agile Methodologies) We’re recruiting on behalf of a leading fashion brand based in London that’s recognised for combining creativity with
London, England, United Kingdom Hybrid / WFH Options
ScanmarQED
Experience: 3–5 years in Data Engineering, Data Warehousing, or programming within a dynamic (software) project environment. Data Infrastructure and Engineering Foundations: Data Warehousing: Knowledge of tools like Snowflake, Databricks, ClickHouse and traditional platforms like PostgreSQL or SQL Server. ETL/ELT Development: Expertise in building pipelines using tools like Apache Airflow, dbt, Dagster. Cloud providers: Proficiency in Microsoft Azure
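For the ETL/ELT line, a minimal hedged Airflow sketch (assumes Airflow 2.x; the DAG id, schedule, and task bodies are illustrative placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source")  # stand-in for real extract logic

def transform():
    print("clean and model the data")   # stand-in for a dbt/Spark step

with DAG(
    dag_id="example_elt",               # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) >> \
        PythonOperator(task_id="transform", python_callable=transform)
```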
data and analytics, unlocking quality growth and operational excellence. What are we looking for? Hands-on experience designing greenfield, scalable data platforms in the cloud using the Azure D&A stack, Databricks, and Azure OpenAI solutions. Proficiency in coding (Python, PL/SQL, Shell Script), relational and non-relational databases, ETL tooling (such as Informatica), and scalable data platforms. Proficiency in
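As a hedged sketch of calling an Azure OpenAI deployment from Python (assumes the openai v1 SDK; the endpoint, key handling, API version, and deployment name are placeholders):

```python
import os

from openai import AzureOpenAI  # openai >= 1.0

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# "gpt-4o-mini" stands in for whatever deployment name was created.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise last week's sales KPIs."}],
)
print(response.choices[0].message.content)
```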
from various sources including APIs, structured/unstructured files, IoT devices, and real-time streams Develop and optimize ETL/ELT workflows using tools such as Azure Data Factory, Databricks, and Apache Spark Implement real-time data ingestion and processing using Azure Stream Analytics, Event Hubs, or Kafka Ensure data quality, availability, and security across the entire data lifecycle Collaborate
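For the real-time ingestion item, a hedged Spark Structured Streaming sketch reading from Kafka (assumes the Kafka connector package is on the classpath; the broker, topic, and paths are placeholders):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

# Placeholder broker and topic; Event Hubs also exposes a Kafka-compatible endpoint.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "device-events")
         .load()
)

parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream.format("delta")   # or "parquet" outside Databricks
          .option("checkpointLocation", "/tmp/checkpoints/device-events")
          .outputMode("append")
          .start("/tmp/tables/device_events")
)
```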
Maidstone, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
Analytics Engineer Associate. Responsibilities: Your daily tasks will include, but are not limited to: Designing, building, and optimizing high-performance data pipelines and ETL workflows using Azure Synapse, Azure Databricks, or Microsoft Fabric. Implementing scalable solutions for data ingestion, storage, and transformation, ensuring data quality and availability. Writing clean, efficient, reusable Python code for data engineering tasks. Collaborating with data
Nottingham, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
Associate. Responsibilities: Your daily tasks will include, but are not limited to: Designing, building, and optimizing high-performance data pipelines and ETL workflows using tools like Azure Synapse, Azure Databricks, or Microsoft Fabric. Implementing scalable solutions to ingest, store, and transform large datasets, ensuring data quality and availability. Writing clean, efficient, and reusable Python code for data engineering tasks. Collaborating
Stockport, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
Analytics Engineer Associate. Responsibilities Daily responsibilities include, but are not limited to: Designing, building, and optimizing high-performance data pipelines and ETL workflows using tools like Azure Synapse, Azure Databricks, or Microsoft Fabric. Implementing scalable solutions for data ingestion, storage, and transformation, ensuring data quality and availability. Writing clean, efficient, and reusable Python code for data engineering tasks. Collaborating with
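These postings emphasise clean, reusable Python for data engineering; a hedged sketch of what that can look like with PySpark's composable DataFrame.transform (column names are illustrative):

```python
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.appName("reusable-transforms").getOrCreate()

def standardise_ids(df: DataFrame, col: str = "customer_id") -> DataFrame:
    """Trim and upper-case an identifier column."""
    return df.withColumn(col, F.upper(F.trim(F.col(col))))

def add_load_date(df: DataFrame) -> DataFrame:
    """Stamp rows with the ingestion date for lineage."""
    return df.withColumn("load_date", F.current_date())

raw = spark.createDataFrame([(" ab12 ",), ("cd34",)], ["customer_id"])

# Chained, composable transformations -- each step is testable in isolation.
curated = raw.transform(standardise_ids).transform(add_load_date)
curated.show()
```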
Google BigQuery, or Amazon Redshift. Analytics: Tableau, Power BI, or Looker for client reporting. Big Data: Apache Spark or Hadoop for large-scale processing. AI/ML: TensorFlow or Databricks for predictive analytics. Integration Technologies: API Management: Apigee, AWS API Gateway, or MuleSoft. Middleware: Red Hat Fuse or Kafka for asynchronous communication. Providers: AWS (EC2, Lambda), Azure (AKS), or Google
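For the asynchronous-middleware item, a hedged kafka-python producer sketch (the broker and topic are placeholders):

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# Placeholder broker; in production this would point at the Kafka cluster
# sitting behind the integration layer.
producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Fire-and-forget publish; downstream services consume asynchronously.
producer.send("client-report-requests", {"client_id": 42, "report": "weekly"})
producer.flush()
```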