Haywards Heath, Sussex, United Kingdom Hybrid / WFH Options
First Central Services
all Data pipelines, promoting self-testing pipelines that proactively identify processing issues or discrepancies. You'll build solutions that pipe and transform data into data lake storage areas, physical database models and reporting structures across the data lake, data warehouse, business intelligence systems and analytics applications. You'll build physical … Kanban). Experience with Databricks solutions, Databricks administration and PySpark. Data Factory/Synapse Workspace - for building data pipelines or Synapse Analytics pipelines. Data Lake - Delta Lake design pattern implementation experience in Azure Data Lake Gen2 with hierarchical namespace and low-level permissions management. Synapse … tests for data pipelines. Data Architecture - Knowledge or experience in implementing a Kimball-style Data Warehouse. Experience in building metadata with Azure Purview or Data Lake Gen2. Data Quality - Experience in applying Data Quality rules within Azure Data Flow Activities. Data Transformation - Extensive hands-on experience with Azure Data Flow Activities …
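To make the Delta Lake on ADLS Gen2 pattern above concrete, here is a minimal, hedged PySpark sketch. The storage account, container, and column names are hypothetical, and the quality rule is illustrative only; the listing itself applies quality rules via Azure Data Flow Activities, so this is simply the equivalent idea expressed in PySpark.

```python
# Minimal sketch (hypothetical names): land raw CSV data as a Delta table on
# ADLS Gen2 and apply a simple "self-testing" data quality rule before writing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("policy-ingest").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/policies/"       # hypothetical source
delta_path = "abfss://curated@examplelake.dfs.core.windows.net/policies"  # hypothetical target

df = spark.read.option("header", True).csv(raw_path)

# Illustrative quality rule: every record needs an id and a non-negative premium.
checked = df.withColumn(
    "dq_failed",
    F.col("policy_id").isNull() | (F.col("premium").cast("double") < 0),
)

# Fail the run if too many rows break the rule; otherwise write the clean slice.
failure_ratio = checked.filter("dq_failed").count() / max(checked.count(), 1)
if failure_ratio > 0.05:
    raise ValueError(f"Data quality check failed: {failure_ratio:.1%} bad rows")

checked.filter(~F.col("dq_failed")).drop("dq_failed") \
    .write.format("delta").mode("append").save(delta_path)
```

With the hierarchical namespace enabled, the abfss:// paths above behave like real directories, which is what makes the low-level, ACL-based permissions management mentioned in the advert possible.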
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
First Central Services
all Data pipelines, promoting self-testing pipelines that proactively identify processing issues or discrepancies. You'll build solutions that pipe and transform data into data lake storage areas, physical database models and reporting structures across the data lake, data warehouse, business intelligence systems and analytics applications. You'll build physical … Kanban). Experience with Databricks solutions, Databricks administration and PySpark. Data Factory/Synapse Workspace - for building data pipelines or Synapse Analytics pipelines. Data Lake - Delta Lake design pattern implementation experience in Azure Data Lake Gen2 with hierarchical namespace and low-level permissions management. Synapse … tests for data pipelines. Data Architecture - Knowledge or experience in implementing a Kimball-style Data Warehouse. Experience in building metadata with Azure Purview or Data Lake Gen2. Data Quality - Experience in applying Data Quality rules within Azure Data Flow Activities. Data Transformation - Extensive hands-on experience with Azure Data Flow Activities …
Daliburgh, Isle Of South Uist, United Kingdom Hybrid / WFH Options
First Central Services
all Data pipelines, promoting self-testing pipelines that proactively identify processing issues or discrepancies. You'll build solutions that pipe and transform data into data lake storage areas, physical database models and reporting structures across the data lake, data warehouse, business intelligence systems and analytics applications. You'll build physical … Kanban). Experience with Databricks solutions, Databricks administration and PySpark. Data Factory/Synapse Workspace - for building data pipelines or Synapse Analytics pipelines. Data Lake - Delta Lake design pattern implementation experience in Azure Data Lake Gen2 with hierarchical namespace and low-level permissions management. Synapse … tests for data pipelines. Data Architecture - Knowledge or experience in implementing a Kimball-style Data Warehouse. Experience in building metadata with Azure Purview or Data Lake Gen2. Data Quality - Experience in applying Data Quality rules within Azure Data Flow Activities. Data Transformation - Extensive hands-on experience with Azure Data Flow Activities …
warehousing, data integration, and data governance. Databricks Expertise: They have hands-on experience with the Databricks platform, including its various components such as Spark, Delta Lake, MLflow, and Databricks SQL. They are proficient in using Databricks for various data engineering and data science tasks. Cloud Platform Proficiency: They … big data technologies and cloud computing, specifically Azure (minimum 3+ years hands-on experience with Azure data services). Strong experience with Azure Databricks, Delta Lake, and other relevant Azure services. Active Azure Certifications: At least one of the following is required: Microsoft Certified: Azure Data Engineer Associate …
clients and internal teams to deliver scalable, efficient data solutions tailored to business needs. Key Responsibilities: Develop ETL/ELT pipelines with Databricks and Delta Lake; integrate and process data from diverse sources; collaborate with data scientists, architects, and analysts; optimize performance and manage Databricks clusters; build cloud … pipelines; document architecture and processes. What We’re Looking For - Required: 3+ years in data engineering with hands-on Databricks experience; proficient in Databricks, Delta Lake, Spark, Python, SQL; cloud experience (Azure preferred, AWS/GCP a plus); strong problem-solving and communication skills. Preferred: experience with MLflow …
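Since MLflow comes up here only as a preferred skill, a very small, hedged sketch of experiment tracking is shown below; the run name, parameter, and metric are placeholders rather than anything from the role itself.

```python
# Hedged sketch: track a training run with MLflow (names and values are placeholders).
import mlflow

with mlflow.start_run(run_name="baseline-model"):
    mlflow.log_param("max_depth", 5)           # hyperparameter used for this run
    mlflow.log_metric("rmse", 0.42)            # evaluation result to compare across runs
    mlflow.set_tag("pipeline", "nightly-etl")  # link the run back to the data pipeline
```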
Apply protocols and standards to secure clinical data in motion and at rest. Shape Data Workflows: Use your expertise with Databricks components such as Delta Lake, Unity Catalog, and MLflow to ensure our data workflows are efficient, secure, and reliable. Key Responsibilities: Data Engineering with Databricks: Utilize … ETL/ELT processes, and data lakes to support data analytics and machine learning. Requirements: Expertise in Databricks: Proficiency with Databricks components such as Delta Lake, Unity Catalog, and MLflow. Azure Data Factory Knowledge: Experience with Azure Data Factory for data orchestration. Clinical Data Security: Understanding of …
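As a hedged illustration of the Unity Catalog side of this role, the sketch below creates a governed clinical table and grants read-only access to a single group. The catalog, schema, table, and group names are hypothetical, and it assumes a Databricks workspace where Unity Catalog is already enabled.

```python
# Hedged sketch (hypothetical names): a governed clinical table with least-privilege access.
# Assumes a Databricks workspace with Unity Catalog enabled and an existing catalog.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE SCHEMA IF NOT EXISTS clinical_catalog.trial_data")
spark.sql("""
    CREATE TABLE IF NOT EXISTS clinical_catalog.trial_data.lab_results (
        patient_id STRING,
        test_code  STRING,
        result     DOUBLE,
        taken_at   TIMESTAMP
    ) USING DELTA
""")

# Read-only access for the analysts group; no other principals are granted anything here.
spark.sql("GRANT SELECT ON TABLE clinical_catalog.trial_data.lab_results TO `clinical-analysts`")
```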
Greater London, England, United Kingdom Hybrid / WFH Options
trg.recruitment
London, Hybrid 💰 Rate: Up to £600 per day 📆 Contract: 6 months (Outside IR35, potential to go perm) 🛠 Tech Stack: Azure Data Factory, Synapse, Databricks, Delta Lake, PySpark, Python, SQL, Event Hub, Azure ML, MLflow. We’ve partnered with a new AI-first professional services consultancy that’s taking … supporting team capability development. What You Need: ✔ 5+ years in data engineering or backend cloud development ✔ Strong Python, SQL, and Databricks skills (especially PySpark & Delta Lake) ✔ Deep experience with Azure: Data Factory, Synapse, Event Hub, Azure Functions ✔ Understanding of MLOps tooling like MLflow and integration with AI pipelines …
meaningful. 🛠️ What you'll do: Lead and mentor a junior-heavy team of data engineers; build and scale robust pipelines using Spark, Kafka and Delta Lake; define test-driven, documented and repeatable engineering practices; work closely with AI, research and DevOps to deliver products and insights; handle sensitive … the office (1 day in Oxford and 1 day in London). 🧰 Tech you'll use: Python, SQL, Spark, Kafka, Kubernetes, Docker, Airflow, RabbitMQ, AWS, Delta Lake. ✅ You’ll thrive here if you: Believe in clean code, strong documentation and a test-first mindset; enjoy mentoring and levelling up …
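The Kafka-to-Delta Lake pipelines mentioned in this role can be sketched with Spark Structured Streaming; the broker address, topic, and storage paths below are placeholders, and the snippet assumes a Spark runtime with the Kafka connector and Delta Lake available (for example Databricks).

```python
# Hedged sketch (placeholder broker, topic and paths): stream events from Kafka into Delta Lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "sensor-events")              # placeholder topic
    .load()
    .select(
        F.col("key").cast("string"),
        F.col("value").cast("string"),
        "timestamp",
    )
)

(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/sensor-events")  # placeholder path
    .outputMode("append")
    .start("/mnt/delta/sensor_events")                               # placeholder table location
)
```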
Constructors' Championship. You will be responsible for: working with stakeholders to understand their data requirements; building pipelines to ingest data into the data lake; designing, building and deploying dashboards and reports in Power BI; performing data mapping and modelling; developing and updating technical documentation; implementing and owning a … of designing and building data models using Data Analysis Expressions (DAX). Hands-on experience with Azure tools: Azure Data Factory, Synapse, Databricks, SQL, Data Lake, Power BI, Delta Lake, and Spark. Ability to design, build, and deploy interactive user interfaces for interrogating data. Experience with Power Automate …
in Business Intelligence, with 5+ years in a BI leadership role in a global or matrixed organisation. Proven expertise in modern BI architecture (Data Lake, EDW, Streaming, APIs, Real-Time & Batch Processing). Demonstrated experience delivering cloud-based analytics platforms (Azure, AWS, GCP). Strong knowledge of data governance … BI will work within a modern, cloud-based BI ecosystem, including: Data Integration: Fivetran, HVR, Databricks, Apache Kafka, Google BigQuery, Google Analytics 4; Data Lake & Storage: Databricks Delta Lake, Amazon S3; Data Transformation: dbt Cloud; Data Warehouse: Snowflake; Analytics & Reporting: Power BI, Excel, Snowflake SQL REST API …
in building/architecting data analytics solutions. 3 years of experience in building a data platform using Azure services including Azure Databricks, Azure Data Factory, Delta Lake, Azure Data Lake Storage (ADLS), Power BI. Solid hands-on experience with Azure Databricks - PySpark coding and Spark SQL coding - must have. …
we do. Passion for data and experience working within a data-driven organization. Hands-on experience with architecting, implementing, and performance tuning of: Data Lake technologies (e.g. Delta Lake, Parquet, Spark, Databricks); APIs & microservices; message queues, streaming technologies, and event-driven architecture; NoSQL databases and query languages …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data sources. Experience in gathering, documenting, and refining requirements from key business …
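Delta Lake optimisation, as mentioned in this listing, usually comes down to routine file compaction and clean-up; a hedged sketch is below, with a hypothetical table name and columns chosen purely for illustration.

```python
# Hedged sketch (hypothetical table and columns): routine Delta Lake maintenance on Databricks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows that are frequently filtered together.
spark.sql("OPTIMIZE lakehouse.production.kiln_readings ZORDER BY (site_id, reading_date)")

# Remove data files no longer referenced by the table (default retention applies).
spark.sql("VACUUM lakehouse.production.kiln_readings")
```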
new technologies and languages. Expertise in designing and building Big Data databases, analytics, and BI platforms. Strong understanding and experience in working with Databricks Delta Lake. Keen interest in the latest trends and tools in data engineering and analytics. Familiarity with graph databases (e.g., Neo4j/Cypher). Experience … assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS …
experience in Data Engineering, with a focus on cloud platforms (AWS, Azure, GCP); You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); You have extensive experience in ETL/ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue …
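Of the orchestration tools listed above, Airflow is the easiest to sketch in a few lines; the DAG id, schedule, and task below are illustrative only and the code assumes Airflow 2.4 or later.

```python
# Hedged sketch (illustrative DAG id and task; assumes Airflow 2.4+): a daily ELT run.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_elt_step(**context):
    # Placeholder for the real extract/load/transform logic.
    print(f"Running ELT for {context['ds']}")


with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_elt", python_callable=run_elt_step)
```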
Strong experience designing and delivering data solutions on the Databricks Data Intelligence Platform, either on Azure or AWS. Good working knowledge of Databricks components: Delta Lake, Unity Catalog, MLflow, etc. Expertise in SQL, Python and Spark (Scala or Python). Experience working with relational SQL databases either on premises or …
orchestration from diverse data sources. Build scalable data integration processes to bring data from on-premises, cloud, and external APIs into Azure Data Lake Storage (ADLS). Data Transformation & Modeling: Utilize Azure Data Factory/Databricks (PySpark/Scala) to build scalable data processing and transformation workflows for … improve query performance in Azure Synapse Analytics. Data Warehouse & Analytics Solutions: Collaborate with data architects and business stakeholders to design and implement data lake and data warehouse architectures on Azure. Create and optimize data models (star and snowflake schemas) for business intelligence (BI) and analytics workloads, ensuring high … own area of technical expertise at practice level. About you: Strong experience in building and orchestrating complex data pipelines in ADF. Experience with Delta Lake for implementing data lakes and managing big data workloads efficiently. Strong knowledge of Synapse Analytics (formerly SQL Data Warehouse) for data warehousing …
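The star-schema modelling described above can be illustrated with two hedged table definitions, one dimension and one fact; the schema, table, and column names are hypothetical.

```python
# Hedged sketch (hypothetical names): a minimal star schema as Delta tables on Databricks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE SCHEMA IF NOT EXISTS dw")

# Dimension table: one row per customer version (simple slowly changing dimension shape).
spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING,
        valid_from   DATE,
        valid_to     DATE
    ) USING DELTA
""")

# Fact table: measures keyed by the surrogate keys of the surrounding dimensions.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.fact_sales (
        customer_key BIGINT,
        date_key     INT,
        quantity     INT,
        net_amount   DECIMAL(18,2)
    ) USING DELTA
""")
```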
Master's or Ph.D. degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or related fields. Proven experience in Databricks and its ecosystem (Spark, Delta Lake, MLflow, etc.). Strong proficiency in Python and R for data analysis, machine learning, and data visualization. In-depth knowledge of cloud …
customers to understand their business and technical needs, to develop tailored technical architectures and solutions in the cloud, focusing on data engineering, data lakes, lakehouses, business intelligence and machine learning/AI. Cost Optimization: You will continuously optimize run costs - both at the platform level as … and supporting Big Data solutions for data lakes and data warehouses. Expertise in cloud-based Big Data solutions is required - preferably with Azure Data Lake and the related technology stack: ADLS Gen2, Spark/Databricks, Delta Lake, Kafka/Event Hubs, Stream Analytics, Azure Data Factory, Azure DevOps …