spoke data architectures, optimising for performance, scalability, and security. Collaborate with business stakeholders, data engineers, and analytics teams to ensure solutions are fit for purpose. Implement and optimise Databricks Delta Lake, Medallion Architecture, and Lakehouse patterns for structured and semi-structured data. Ensure best practices in Azure networking, security, and federated data access. Key Skills & Experience 5+ More ❯
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
Degree in Computer Science, Software Engineering, or similar (applied to Data/Data Specialisation). Extensive experience in Data Engineering, in both Cloud & On-Prem, Big Data and Data Lake environments. Expert knowledge of data technologies, data transformation tools, and data governance techniques. Strong analytical and problem-solving abilities. Good understanding of Quality and Information Security principles. Effective communication, ability … monitoring/security is necessary. Significant AWS or Azure hands-on experience. ETL tools such as Azure Data Factory (ADF) and Databricks or similar. Data Lakes: Azure Data, Delta Lake, Data Lake or Databricks Lakehouse. Certifications: AWS, Azure, or Cloudera certifications are a plus. The role comes with an extensive benefits package including a good pension … role. KEYWORDS Lead Data Engineer, Senior Lead Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, On-Prem, Cloud, ETL, Azure Data Factory, ADF, Databricks, Azure Data, Delta Lake, Data Lake. Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this position. More ❯
London, England, United Kingdom Hybrid / WFH Options
Axiom Software Solutions Limited
intuitively Requirements Required Skills: Languages/Frameworks: JSON YAML Python (as a programming language, not just able to write basic scripts; Pydantic experience would be a bonus) SQL PySpark Delta Lake Bash (both CLI usage and scripting) Git Markdown Scala (bonus, not compulsory) Azure SQL Server as a HIVE Metastore (bonus) Technologies: Azure Databricks Apache Spark Delta More ❯
Dunstable, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
architecture Key Skills & Experience 5+ years of hands-on experience delivering data architecture solutions on Microsoft Azure Strong understanding of Azure services including Data Factory, Databricks, Synapse Analytics, Data Lake, Delta Lake, SQL Database, Key Vault, and Power BI Experience in lakehouse architecture design and implementation Proven ability to prepare and present architecture documents Proficient in evaluating More ❯
Data Platform and Services, you'll not only maintain and optimize our data infrastructure but also spearhead its evolution. Built predominantly on Databricks, and utilizing technologies like PySpark and Delta Lake, our infrastructure is designed for scalability, robustness, and efficiency. You'll take charge of developing sophisticated data integrations with various advertising platforms, empowering our teams with data … decision-making What you'll be doing for us Leadership in Design and Development : Lead in the architecture, development, and upkeep of our Databricks-based infrastructure, harnessing PySpark and Delta Lake. CI/CD Pipeline Mastery : Create and manage CI/CD pipelines, ensuring automated deployments and system health monitoring. Advanced Data Integration : Develop sophisticated strategies for integrating data More ❯
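The "CI/CD Pipeline Mastery" responsibility above typically means an Azure DevOps (or similar) pipeline that tests code before pushing jobs to Databricks. A minimal, illustrative sketch only: the pool image, file paths, variable names, and the `deploy_jobs.py` script are invented placeholders, not a definitive setup.

```yaml
# azure-pipelines.yml -- hypothetical CI/CD sketch for a Databricks codebase.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'

  # Run the unit tests on every build.
  - script: |
      pip install -r requirements.txt
      python -m pytest tests/ --junitxml=test-results.xml
    displayName: Run unit tests

  # Deploy Databricks job definitions only from main, after tests pass.
  # scripts/deploy_jobs.py is a placeholder for whatever deployment
  # mechanism the team uses (e.g. Databricks CLI or REST API calls).
  - script: python scripts/deploy_jobs.py --env "$(ENVIRONMENT)"
    displayName: Deploy Databricks jobs
    condition: and(succeeded(), eq(variables['Build.SourceBranchName'], 'main'))
```

The gate on `Build.SourceBranchName` keeps feature branches from deploying while still running their tests.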
aligned with business objectives. Nice to Have Experience supporting AI/ML model training infrastructure (e.g., GPU orchestration, model serving) for both diffusion and LLM pipelines. Familiarity with data lake architectures and tools like Delta Lake, LakeFS, or Databricks. Knowledge of security and compliance best practices (e.g., SOC2, ISO 27001). Exposure to MLOps platforms or More ❯
what you do, and what we do Passion for data and experience working within a data driven organization Hands-on experience with architecting, implementing, and performance tuning of: Data Lake technologies (e.g. Delta Lake, Parquet, Spark, Databricks) API & Microservices Message queues, streaming technologies, and event driven architecture NoSQL databases and query languages Data domain and event data More ❯
stakeholders. Expertise in designing and documenting data architectures (e.g., data warehouses, lakehouses, master/reference data models). Hands-on experience with Azure Databricks, including: Workspace and cluster configuration. Delta Lake table design and optimization. Integration with Unity Catalog for metadata management. Proficiency with Unity Catalog, including: Setting up data lineage and governance policies. Managing access controls and More ❯
Analytics, Security, and Software Engineering to define, develop, and deliver impactful data products to both internal stakeholders and end customers. Responsibilities Design and implement scalable data pipelines using Databricks, Delta Lake, and Lakehouse architecture Build and maintain a customer-facing analytics layer, integrating with tools like PowerBI, Tableau, or Metabase Optimise ETL processes and data workflows for performance More ❯
unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of the Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the More ❯
Data Engineer | up to £450/day Inside | Remote with occasional London travel We are seeking a PySpark Data Engineer to support the development of a modern, scalable data lake for a new strategic programme. This is a greenfield initiative to replace fragmented legacy reporting solutions, offering the opportunity to shape a long-term, high-impact platform from the … ground up. Key Responsibilities: * Design, build, and maintain scalable data pipelines using PySpark 3/4 and Python 3. * Contribute to the creation of a unified data lake following medallion architecture principles. * Leverage Databricks and Delta Lake (Parquet format) for efficient, reliable data processing. * Apply BDD testing practices using Python Behave and ensure code quality with Python … using YAML, Git, and Azure DevOps. Required Skills & Experience: * Proven expertise in PySpark 3/4 and Python 3 for large-scale data engineering. * Hands-on experience with Databricks, Delta Lake, and medallion architecture. * Familiarity with Python Behave for Behaviour Driven Development. * Strong understanding of YAML, code quality tools (e.g. Python Coverage), and CI/CD pipelines. * Knowledge More ❯
journey where your work will unlock real impact. 🌟 What you'll do Build robust data pipelines using Python, PySpark, and cloud-native tools Engineer scalable data models with Databricks, Delta Lake, and Azure tech Collaborate with analysts, scientists, and fellow engineers to deliver insights Drive agile DevOps practices and continuous improvement Stay curious, keep learning, and help shape … our digital platforms 🧠 What we’re looking for Proven experience as a Data Engineer in cloud environments (Azure ideal) Proficiency in Python, SQL, Spark, Databricks Familiarity with Hadoop, NoSQL, Delta Lake Bonus: Azure Functions, Logic Apps, Django, CI/CD tools 💼 What you’ll get from Mars A competitive salary & bonus Hybrid working with flexibility built in Access More ❯
Engineer to join their team. You will be joining a team of 45 people, including Data Scientists, ML Engineers and 2 Data Engineers. Monitor, optimise and rebuild ETL/Delta Lake workflows in Databricks. Migrate legacy ingestion jobs to modern, cloud-native patterns (Azure preferred, some AWS/GCP). Collaborate with scientists to understand study goals, e.g. More ❯
Systems: HDFS, Hadoop, Spark, Kafka Cloud: Azure or AWS Programming: Python, Java, Scala, PySpark – you’ll need two or more, Python preferred Data Engineering Tools: Azure Data Factory, Databricks, Delta Lake, Azure Data Lake SQL & Warehousing: Strong experience with advanced SQL and database design Bonus Points: Exposure to geospatial data or data science/ML pipelines What More ❯
LexisNexis Risk Solutions UK Ltd T/a LexisNexis Risk Solutions Group
and scalability. Integrate advanced data engineering techniques and tools to streamline processes. Resolve technical issues as necessary. Stay updated with new technology developments. Requirements Experience in SQL Server, Data Lake (Azure/AWS). Proficiency in Python and Node.js preferred. Extensive modern Data Engineering experience. Knowledge of large-scale data platforms (Databricks, Snowflake) and cloud-native tools (Azure Synapse … software development methodologies including Scrum, Kanban, and Agile. Experience with analytics technologies (Spark, Hadoop, Kafka). Test-driven development experience. Nice to Have Understanding of ClickHouse, Druid, PostgreSQL, Databricks, Delta Sharing & Delta Lake. Ability to work with complex Patent and Litigation data models. Experience with Pandas & PySpark. Work in a way that works for you We promote a More ❯