London, South East England, United Kingdom Hybrid / WFH Options
Silverdrum
making an impact, this is the role for you. What You’ll Be Doing Own the Data Pipeline: Build and optimize scalable pipelines in Databricks and MySQL, leveraging Python and PySpark for performance. Champion Automation: Develop reusable Python packages, integrate machine learning models, and streamline workflows for maximum efficiency. Lead … the bridge between internal teams and clients, ensuring clarity and alignment on every project. What You’ll Bring Technical Expertise: Proficiency in Python, PySpark, Databricks, and SQL, with hands-on experience in CI/CD and Git workflows. Working knowledge of Pandas is a big plus. Problem-Solving Skills: A track …
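As an illustration of the pipeline work this posting describes, here is a minimal sketch that reads a MySQL table into Databricks with PySpark, transforms it, and writes a Delta table. The connection details, table names, and columns are all hypothetical placeholders, not the employer's actual setup.

```python
# Minimal PySpark pipeline sketch: ingest from MySQL over JDBC,
# aggregate, and persist as a Delta table. Host, credentials, and
# schema below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Read a source table from MySQL (hypothetical connection details).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://example-host:3306/sales")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Transform: daily revenue per customer.
daily_revenue = (
    orders.groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Write to a managed Delta table for downstream use.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```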
responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes, and systems … Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for processing large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or …
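For the workflow-orchestration point above, a minimal Apache Airflow DAG might look like the sketch below; the DAG id, schedule, and task bodies are assumptions for illustration only.

```python
# Illustrative Airflow DAG: a daily extract step feeding a transform.
# Task logic is a placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source system")

def transform():
    print("run the Spark job over the extracted batch")

with DAG(
    dag_id="daily_events",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # 'schedule_interval' on older Airflow 2.x releases
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    # Run the transform only after the extract succeeds.
    extract_task >> transform_task
```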
GenAI and LLM models. Define and enforce best practices for model versioning, reproducibility, and governance. Monitor and troubleshoot production systems to minimize downtime. Utilize Databricks to build and manage data and ML pipelines integrated with GenAI and LLM workflows. Evaluate and integrate state-of-the-art MLOps tools and frameworks … Skilled in cloud platforms (AWS, GCP, Azure) and managed AI/ML services. Hands-on experience with Docker, Kubernetes, and container orchestration. Expertise with Databricks, including ML workflows and data pipeline management. Familiarity with tools like MLflow, DVC, Prometheus, and Grafana for versioning and monitoring. Experience implementing security and compliance …
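To ground the model-versioning and MLflow points this role lists, here is a hedged sketch of experiment tracking; the scikit-learn model, synthetic data, and metric are illustrative stand-ins, not the team's actual workflow.

```python
# Sketch of MLflow tracking for versioning and reproducibility.
# Model, parameters, and data are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, random_state=42)
model = LogisticRegression(max_iter=200)

with mlflow.start_run(run_name="baseline"):
    model.fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    # Log the fitted model as an artifact for later registration.
    mlflow.sklearn.log_model(model, "model")
```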
Newbury, Berkshire, United Kingdom Hybrid / WFH Options
Intuita - Vacancies
and deployment effectiveness, including Azure DevOps. Considerable experience designing and building operationally efficient pipelines, utilising core Azure components such as Azure Data Factory, Azure Databricks, and PySpark. Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use. … Strong understanding and/or use of Unity Catalog alongside core Databricks functionality to drive metadata management. Strong understanding of cloud economics, including cost management strategies and optimising solutions for customer needs. Experience with infrastructure as code, proficiency using tools such as Terraform to automate and manage cloud infrastructure, ensuring both …
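As a hedged sketch of the medallion-style modelling mentioned above, the PySpark snippet below promotes raw bronze records to a cleaned silver layer and a curated gold aggregate; the storage paths, columns, and table names are assumptions.

```python
# Medallion-style promotion sketch: bronze -> silver -> gold.
# Paths and schema are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/bronze/sales")

# Silver: typed, de-duplicated, quality-filtered records.
silver = (
    bronze.dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/sales")

# Gold: a curated aggregate built for analytical use.
gold = silver.groupBy("region").agg(F.sum("amount").alias("total_sales"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.sales_by_region")
```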
Data Engineer (Remote UK/EU, £60k) This start-up focuses on healthcare AI and is looking for a Data Engineer to join their team. You will be joining a team of 45 people, including Data Scientists, ML Engineers …
London, South East England, United Kingdom Hybrid / WFH Options
JSS Search
Senior Data Architect (Azure) Insurance/Reinsurance Location: London Hybrid: 3 days a week Position Overview: As a Senior Data Architect, you will be responsible for designing, implementing, and optimising our data architecture/data lakehouse to support cloud-based …
Principal Data Scientist Up to £125,000 London (Hybrid, 3 days onsite per week) Company: A leading marketing and analytics agency is seeking a Lead Data Scientist to develop and deploy end-to-end AI solutions. You'll work on …
Please note that this role requires security clearance at SC level. You must be SC cleared to be considered for the role. Tasks and Responsibilities: Design: Define Data Platform technical architecture by analyzing the requirements. Define technical design for ingestion …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Capgemini
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges …
London (City of London), South East England, United Kingdom
Hays
around architecture and governance Data migration/modernisation Good experience with Data Governance Experience with Data Security Data Streaming Oracle experience Good experience with Databricks Oracle on-prem to Azure What you'll get in return An opportunity to work for a leading consultancy on a rolling contract What you …
alignment of gaps to enabling technology solutions Experience in designing and governing data platform architectures (e.g. broad understanding of Informatica, Collibra, Ab Initio, Snowflake, Databricks) Appreciation of and interest in attaining end-to-end data skills e.g., data quality, metadata, data-mesh, data security, privacy & compliance Experience with Enterprise/…
data transformations Strong leadership across both business and technical teams Background in FMCG or manufacturing sectors Familiar with enterprise data platforms including Azure and Databricks Exposure to Salesforce and SAP from a data integration or reporting standpoint Skilled in stakeholder engagement, cultural change, and operating model design Experience handling external …
to IT, Product, Engineering, and Infrastructure teams. Tines was built for everyone, delivering transformative and innovative enterprise software to industry leaders like Canva, Intercom, Databricks, Mars and Reddit. We're excited about what we're doing and what's to come, and we're looking for others who can lead …
I'm seeking a Data Engineer with extensive hands-on experience in Databricks, Azure Data Factory, Python, and DBT. The successful candidate will develop data models, troubleshoot pipelines, and deliver production-ready code. Effective communication skills and the ability to articulate technical decisions clearly are essential for this role. Additionally … pipelines using Azure Data Factory Automate ETL workflows to support machine learning features Apply best practices for data governance, quality, and performance Utilise Azure Databricks and adhere to code-based deployment practices Essential Skills: Over 3 years of experience with Databricks (including Lakehouse, Delta Lake, PySpark, Spark SQL) Strong proficiency … with 5+ years of experience Extensive experience with Azure Data Factory Proficiency in Python programming Excellent stakeholder/client-facing communication abilities Desirable Skills: Databricks certification Experience with Terraform for infrastructure as code Background in financial services or insurance industries Understanding of data governance frameworks Knowledge of best practices in …
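As an illustration of the Databricks and Delta Lake skills this role lists, the sketch below upserts a batch of changes into a Delta table with MERGE; the table names, landing path, and key column are hypothetical.

```python
# Hedged Delta Lake upsert sketch: MERGE new records into a target
# table by business key. Names and paths are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
updates = spark.read.format("json").load("/mnt/landing/customers/")

target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()    # refresh existing customers
    .whenNotMatchedInsertAll() # insert new ones
    .execute()
)
```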
regression testing and CI/CD pipelines. Use Python-based frameworks (e.g., PyTest) and integrate with GitLab. Design and implement automated testing frameworks for Databricks pipelines and ETL workflows. Validate Hive Metastore and Unity Catalog configurations to ensure data consistency and security. Perform data quality assurance, including completeness, accuracy … reviews. Collaborate with developers, product owners, and business analysts to understand user stories and define test criteria. Required Skills and Experience: Experience in Databricks, Apache Spark, and Delta Lake testing strategies. Strong knowledge of Python, SQL, and Scala for QA automation. Familiarity with Hive Metastore, Unity Catalog, and … and anomaly detection. Understanding of event-driven architectures and stateful processing in streaming applications. Experience with API testing for external integrations within Databricks pipelines. Knowledge of cloud platforms like Azure, AWS, or Google Cloud in data engineering environments. Experience writing automated and manual test cases. Strong scripting skills …
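For the PyTest-based data QA described here, a minimal quality test might look like the sketch below; the in-memory DataFrame stands in for a real Delta table read, and the column names are assumptions.

```python
# Illustrative PyTest data-quality checks for a Spark dataset:
# completeness and key-uniqueness assertions.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[2]").appName("qa").getOrCreate()

@pytest.fixture
def df(spark):
    # Stand-in for e.g. spark.read.table("silver.customers").
    return spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

def test_no_null_keys(df):
    # Completeness: the business key must never be null.
    assert df.filter(df.id.isNull()).count() == 0

def test_unique_keys(df):
    # Consistency: exactly one row per business key.
    assert df.count() == df.select("id").distinct().count()
```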
London, England, United Kingdom Hybrid / WFH Options
Focus on SAP
Data Engineer Employment Type: Contract, Full time Start: ASAP Location: London - Hybrid Languages: English Key skills: 5+ years as a Data Engineer. Proven expertise in Databricks (including Delta Lake, Workflows, Unity Catalog). Strong command of Apache Spark, SQL, and Python. Hands-on experience with cloud platforms (AWS, Azure, or GCP … oral & written) Right to work in the UK is a must (no sponsorship available) Responsibilities: Design, build, and maintain scalable and efficient data pipelines using Databricks and Apache Spark. Collaborate with Data Scientists, Analysts, and Product teams to understand data needs and deliver clean, reliable datasets. Optimize data workflows and storage … across multiple sources. Implement best practices in data governance, data security, and data quality. Automate workflows and data validation tasks using Python, SQL, and Databricks notebooks. Should you be interested in being considered for this position and would like to discuss further, please apply with your latest CV or share …
strategy for data platforms built on Microsoft Fabric. * Leading QA efforts across agile delivery squads. * Developing automated tests for: - PySpark notebooks in Fabric or Databricks - ETL pipelines and transformation logic - Delta tables and multi-layered Lakehouse architectures * Embedding testing into CI/CD flows using Azure DevOps or GitHub Actions. … data platforms. * Strong working knowledge of: - Microsoft Fabric (Lakehouses, Pipelines, Notebooks, Power BI) - Azure tools including ADF, Synapse, Data Lake, and Key Vault - Databricks or Microsoft Fabric Notebooks using PySpark * Experience building automation frameworks (e.g., PyTest, Nutter, Great Expectations). * Familiarity with Medallion Architecture and Delta Lake. * Skilled in …
analytics engineers. Define technical standards and drive excellence in engineering practices. Architect and oversee the development of cloud-native data infrastructure and pipelines using Databricks, Python, PySpark, and Delta Lake. Guide the implementation of embedded analytics, headless APIs, and real-time dashboards for customer-facing platforms. Partner with Product … You’ll Bring 5+ years in data/analytics engineering, including 2+ years in a leadership or mentoring role. Strong hands-on expertise in Databricks, Spark, Python, PySpark, and Delta Live Tables. Experience designing and delivering scalable data pipelines and streaming data processing (e.g., Kafka, AWS Kinesis, or Azure …
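To illustrate the streaming side of this stack, here is a hedged Spark Structured Streaming sketch that reads a Kafka topic into a Delta table; the broker address, topic, and checkpoint/output paths are placeholders.

```python
# Structured Streaming sketch: Kafka topic -> Delta table.
# Broker, topic, and paths are assumptions for the example.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Decode the message payload and continuously append to Delta,
# checkpointing progress for exactly-once recovery.
query = (
    events.selectExpr("CAST(value AS STRING) AS payload")
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .start("/mnt/delta/events")
)
```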
on a hybrid basis. Role Requirements As the DataOps Lead you will design, implement, and maintain automated data pipelines in the Azure cloud environment (Databricks, Data Factory, Synapse, SQL). You will develop and manage CI/CD pipelines for data workflows, ensuring seamless deployment and version control. You will … IaC) tools such as Terraform and ARM templates for provisioning and managing Azure resources. You will have experience in Azure cloud services, including Data Factory, Databricks, Synapse Analytics, and Azure Data Lake Storage (ADLS). You will have proficiency in developing and managing CI/CD pipelines using tools like Azure …
Data and AI Platforms and cloud technologies. Our tech stack continues to evolve together with the Azure data and AI offering and relies on Azure Databricks and Azure AI pillars. Additionally, AXA XL consumes the wider technology offering from AXA Group, such as managed OpenShift, VM and DevOps platforms. We use … of instituting change. Programming experience - ideally in Python or open to using Python. Familiarity with all, and expertise in some, of the below: SQL, Databricks or Spark, MPP databases, data warehouse design, feature store design, Kubernetes, orchestration tools, monitoring tools, IaC, Docker, streaming technologies. Well-established experience as a Data …