Slough, South East England, United Kingdom Hybrid / WFH Options
Peaple Talent
… having delivered in Microsoft Azure. Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience using Big Data technologies such as Apache Spark or PySpark. Great communication skills, engaging effectively with senior stakeholders. Nice to haves: Azure Data Engineering certifications; Databricks certifications. What's in it for you: 📍 Location: London (Hybrid) 💻 Remote working: occasional …
Birmingham, West Midlands, England, United Kingdom
TXP
… and DevOps practices for data workflows. What We're Looking For: Proven experience as a Data Engineer or in a similar role. Strong coding skills in SQL, Python, and PySpark. Hands-on experience with Azure cloud services. Proficiency in Power BI, including DAX, semantic models, and Power Query M. Experience with Microsoft Fabric and modern data architecture. …
London (City of London), South East England, United Kingdom
Develop
… leadership and upskilling responsibilities. Key Responsibilities: Build and maintain Databricks Delta Live Tables (DLT) pipelines across Bronze → Silver → Gold layers, ensuring quality, scalability, and reliability. Develop and optimise Spark (PySpark) jobs for large-scale distributed processing. Design and implement streaming data pipelines with Kafka/MSK, applying best practices for late event handling and throughput. Use Terraform and CI …
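A minimal sketch of the late-event handling this listing refers to, assuming a Spark Structured Streaming job reading from Kafka; the broker address, topic name, and window sizes are illustrative placeholders, not details from the advert:

```python
# Sketch: Structured Streaming with a watermark, so events arriving up to
# 10 minutes late are still counted before a window is finalised.
# Requires the spark-sql-kafka connector package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("late-event-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    # Kafka source exposes a `timestamp` column; use it as event time.
    .select(
        F.col("timestamp").alias("event_time"),
        F.col("value").cast("string").alias("payload"),
    )
)

# Watermark: tolerate 10 minutes of lateness, then drop older events.
counts = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"))
    .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```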
City of London, London, United Kingdom Hybrid / WFH Options
8Bit - Games Industry Recruitment
… improve AI models over time. REQUIREMENTS: 2 years of proven experience in data engineering for ML/AI, with strong proficiency in Python, SQL, and distributed data processing (e.g., PySpark). Hands-on experience with cloud data platforms (GCP, AWS, or Azure), orchestration frameworks (e.g., Airflow), and ELT/ETL tools. Familiarity with 2D and 3D data formats (e.g. …
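A minimal sketch of the Airflow orchestration mentioned above, assuming Airflow 2.x; the DAG id, task names, and the extract/load functions are hypothetical placeholders:

```python
# Sketch: a daily ELT DAG with two dependent Python tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events():
    # Placeholder: pull raw event data from a source system.
    print("extracting events")


def load_events():
    # Placeholder: load transformed data into the warehouse.
    print("loading events")


with DAG(
    dag_id="daily_ml_elt",           # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    load = PythonOperator(task_id="load", python_callable=load_events)
    extract >> load  # run extract before load
```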
Atherstone, Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Aldi Stores
… end-to-end ownership of demand delivery. Provide technical guidance for team members. Provide 2nd- or 3rd-level technical support. About You: Experience using SQL, SQL Server, Python, and PySpark. Experience using Azure Data Factory. Experience using Databricks and Cloudsmith. Data warehousing experience. Project management experience. The ability to interact with the operational business and other departments, translating …
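A minimal sketch of the SQL Server plus PySpark combination this advert lists, assuming a JDBC read into a Spark DataFrame; the host, database, table, and credentials are placeholders:

```python
# Sketch: read a SQL Server table over JDBC into Spark.
# Requires the Microsoft SQL Server JDBC driver on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sqlserver-read-demo").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://sql-host:1433;databaseName=sales")  # placeholder
    .option("dbtable", "dbo.orders")                                     # placeholder
    .option("user", "etl_user")                                          # placeholder
    .option("password", "change-me")                                     # placeholder
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

orders.show(5)
```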
London (City of London), South East England, United Kingdom
Fimador
… data platforms, and integrations, while ensuring solutions meet regulatory standards and align with architectural best practices. Key Responsibilities: Build and optimise scalable data pipelines using Databricks and Apache Spark (PySpark). Ensure performance, scalability, and compliance (GxP and other standards). Collaborate on requirements, design, and backlog refinement. Promote engineering best practices including CI/CD, code reviews, and …
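A minimal sketch of the kind of Databricks/PySpark pipeline step this role describes, assuming a Delta Lake environment; the paths and column names are illustrative placeholders:

```python
# Sketch: read raw records, apply a simple validated transformation, and
# append the result to a Delta table (ACID writes and time travel are
# useful under GxP-style audit requirements).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-demo").getOrCreate()

raw = spark.read.json("/mnt/raw/measurements")  # placeholder input path

cleaned = (
    raw.filter(F.col("measurement_value").isNotNull())  # drop incomplete rows
    .withColumn("ingested_at", F.current_timestamp())   # audit column
)

cleaned.write.format("delta").mode("append").save("/mnt/silver/measurements")  # placeholder output
```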
… data-driven culture, all within a collaborative environment that values innovation and ownership. Tech you’ll be working with: Azure Data Lake, Azure Synapse, Databricks, Data Factory, Python, SQL, PySpark, Terraform, GitHub Actions, CI/CD pipelines. You’ll thrive here if you: Have strong experience building and leading Azure-based data platforms. Enjoy mentoring and guiding other engineers. …
Essential Skills Include: Proven leadership and mentoring experience in senior data engineering roles. Expertise in Azure Data Factory, Azure Databricks, and lakehouse architecture. Strong programming skills (Python, T-SQL, PySpark) and test-driven development. Deep understanding of data security, compliance, and tools like Microsoft Purview. Excellent communication and stakeholder management skills. Experience with containerisation and orchestration (e.g., Kubernetes, Azure …
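A minimal sketch of the test-driven development this listing asks for, assuming pytest with a local SparkSession; the function under test and its columns are hypothetical:

```python
# Sketch: a pytest unit test for a small PySpark transformation.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_total(df):
    # Hypothetical transformation under test: total = quantity * unit_price.
    return df.withColumn("total", F.col("quantity") * F.col("unit_price"))


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps tests fast and self-contained.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_total(spark):
    df = spark.createDataFrame([(2, 5.0)], ["quantity", "unit_price"])
    result = add_total(df).collect()[0]
    assert result["total"] == 10.0
```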