About Pret Pret is an international on-the-go food and drink retailer founded in 1986. Our purpose is to make every day a little bit brighter, through organic coffee, freshly prepared food, and exceptional customer service to millions of people. Pret runs More ❯
Leeds, England, United Kingdom Hybrid / WFH Options
Scott Logic
data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You’ve worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You’ve got a background in software engineering, including front-end technologies like JavaScript. You’re a problem-solver, pragmatically exploring options and finding effective solutions. An More ❯
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Somerset Bridge
with large-scale datasets using Azure Data Factory (ADF) and Databricks. Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation. Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing. Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics. Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks. Strong Python (PySpark) skills for big data processing and automation. Experience with Scala (optional but preferred for advanced Spark applications). Experience working with Databricks Workflows & Jobs for data orchestration. Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference. Experience with data More ❯
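For readers unfamiliar with the Delta Lake features this listing names, here is a minimal sketch of schema evolution, an ACID MERGE, and time travel. It assumes a Databricks notebook (or any Spark session with delta-spark configured) where `spark` is already available; the path and column names are invented:

```python
# Minimal Delta Lake sketch: schema evolution, ACID MERGE, time travel.
# Assumes `spark` is an active session with Delta available (e.g. Databricks).
from delta.tables import DeltaTable

path = "/tmp/demo/events"  # hypothetical table location

# Initial write, then an append carrying an extra column; mergeSchema
# lets the table schema evolve to absorb the new column.
spark.createDataFrame([(1, "click")], ["event_id", "type"]) \
    .write.format("delta").mode("overwrite").save(path)
spark.createDataFrame([(2, "view", "GB")], ["event_id", "type", "country"]) \
    .write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# ACID upsert: MERGE new records into the table in one transaction.
updates = spark.createDataFrame([(1, "purchase", "GB")],
                                ["event_id", "type", "country"])
(DeltaTable.forPath(spark, path).alias("t")
    .merge(updates.alias("u"), "t.event_id = u.event_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Time travel: read the table as it looked at version 0.
spark.read.format("delta").option("versionAsOf", 0).load(path).show()
```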
Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
Continuous integration/deployment (CI/CD) automation, rigorous code reviews, documentation as communication. Preferred Qualifications Familiarity with data manipulation and experience with Python libraries like Flask, FastAPI, Pandas, PySpark, and PyTorch, to name a few. Proficiency in statistics and/or machine learning libraries like NumPy, matplotlib, seaborn, scikit-learn, etc. Experience in building ETL/ELT processes and More ❯
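As a flavour of the statistics/ML side of such a role, here is a tiny, self-contained sketch using Pandas, NumPy, and scikit-learn; the dataset and feature names are entirely invented:

```python
# Toy Pandas + scikit-learn workflow (invented data, for illustration only).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "basket_value": rng.gamma(2.0, 20.0, 500),  # synthetic spend amounts
    "visits": rng.poisson(3, 500),              # synthetic visit counts
})
df["churned"] = (df["visits"] < 2).astype(int)  # toy target label

X_train, X_test, y_train, y_test = train_test_split(
    df[["basket_value", "visits"]], df["churned"], random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```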
similar role. Experience with data pipeline, ETL, and workflow management tools (e.g., Databricks, Data Factory). Proficiency in SQL, Python, R, or Scala. Knowledge of Python libraries such as PySpark and Pandas. Experience with SQL and database management systems like MySQL, PostgreSQL, SQL Server. Strong problem-solving, analytical, and communication skills. Experience with data architecture, modeling, and ETL processes. More ❯
and workflow management tools (e.g., Databricks, Data Factory). Proficiency in programming languages such as SQL, Python, R, or Scala. Substantial knowledge and experience with Python libraries such as PySpark and Pandas. Strong experience with SQL and database management (e.g., MySQL, PostgreSQL, SQL Server). Excellent problem-solving and analytical skills. Strong understanding of data architecture, data modelling, and More ❯
Company: Royal London Group Job Title: Data Engineer Contract Type: Permanent Location: Edinburgh or Glasgow or Alderley Edge Working style: Hybrid 50 More ❯
Leeds, England, United Kingdom Hybrid / WFH Options
TieTalent
engineering experience with data platforms and analytics support Hands-on experience with Azure Data ecosystem: Databricks, Data Factory, Data Lake, Synapse; certifications are a plus Proficiency in Python, with PySpark experience preferred Strong SQL skills Experience in building and maintaining data pipelines Managing DevOps pipelines Skills in process optimization, performance tuning, data modeling, and database design Desirable Experience in … Zero goal by 2050. We value diversity and inclusion; your anonymized diversity data helps us improve our outreach and inclusion efforts. Nice-to-Have Skills Python, Azure, Data Factory, PySpark, SQL, DevOps Work Experience & Languages Data Engineer, Data Infrastructure English Seniorities and Job Details Entry level Contract IT & Internet industry More ❯
/Azure - or Snowflake/Redshift/BigQuery) (Required) Experience with infrastructure as code (e.g. Terraform) (Required) Proficiency in using Python both for scheduling (e.g. Airflow) and manipulating data (PySpark) (Required) Experience building deployment pipelines (e.g. Azure Pipelines) (Required) Deployment of web apps using Kubernetes (Preferably ArgoCD & Helm) (Preferred) Experience working on Analytics and Data Science enablement (dbt, DS More ❯
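To illustrate the "Python both for scheduling (e.g. Airflow) and manipulating data (PySpark)" combination, a hedged sketch of a daily DAG follows. The DAG id, paths, and the choice of a local SparkSession (rather than, say, spark-submit against a cluster) are assumptions, and the parameter names follow Airflow 2.x:

```python
# Hedged sketch: an Airflow DAG scheduling a PySpark transform.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_transform():
    # A local session keeps the sketch self-contained; in practice this
    # might submit to a managed cluster instead.
    from pyspark.sql import SparkSession
    spark = SparkSession.builder.appName("daily_transform").getOrCreate()
    df = spark.read.parquet("/data/raw/orders")  # hypothetical path
    df.groupBy("customer_id").count() \
      .write.mode("overwrite").parquet("/data/curated/order_counts")

with DAG(
    dag_id="daily_order_transform",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="transform", python_callable=run_transform)
```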
using Azure Synapse, ensuring data integrity and security Build, deploy, and manage ETL processes to support real-time and batch data processing using tooling across the Azure estate, Databricks, PySpark, and SQL Oversee data storage across both relational and non-relational databases, ensuring efficient data retrieval Design and implement data security protocols to safeguard sensitive information Collaborate with DBAs … continuous learning and improvement What we are looking for: 5+ years of experience in data engineering Expertise in Azure DWH and AWS Databricks Strong programming skills in Python/PySpark or other relevant languages for data manipulation and ETL workflows Proficiency in SQL and experience with both relational (e.g., SQL Server, MySQL) and non-relational databases (e.g., MongoDB, Cassandra More ❯
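As a sketch of the "real-time and batch" ETL pattern this listing describes, the same transform can run once over a daily extract and incrementally via Structured Streaming; the paths, schema, and Delta output are assumptions for illustration:

```python
# Batch and streaming variants of one cleansing transform (invented paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("country", StringType()),
])

# Batch path: cleanse and load a daily extract.
batch = (spark.read.schema(schema).json("/landing/orders/2024-01-01/")
         .filter(F.col("amount") > 0))
batch.write.format("delta").mode("append").save("/lake/silver/orders")

# Streaming path: the same filter applied incrementally as files arrive.
stream = (spark.readStream.schema(schema).json("/landing/orders/")
          .filter(F.col("amount") > 0))
(stream.writeStream.format("delta")
    .option("checkpointLocation", "/lake/_checkpoints/orders")
    .start("/lake/silver/orders"))
```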
working with CI/CD tools Key Technology: Experience with source control systems, such as Git dbt (Data Build Tool) for transforming and modelling data SQL (Spark SQL) & Python (PySpark) Certifications: You will need to be: Curious about technology and adaptable to new technologies Agile-minded, optimistic, passionate, and pragmatic about delivering valuable data solutions to customers Willing More ❯
warehouse and data infrastructure to support advanced analytics and reporting needs for a fast-growing organisation. Key Responsibilities: Design, develop, and maintain scalable data pipelines using SQL and Python (PySpark). Ingest, transform, and curate data from multiple sources into Azure Data Lake and Delta Lake formats. Build and optimize datasets for performance and reliability in Azure Databricks. … to governance policies. Monitor and troubleshoot production jobs and processes. Preferred Skills & Experience: Strong proficiency in SQL for data transformation and performance tuning. Solid experience with Python, ideally using PySpark in Azure Databricks. Hands-on experience with Azure Data Lake Storage Gen2. Understanding of data warehouse concepts, dimensional modelling, and data architecture. Experience working with Delta More ❯
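A brief sketch of the curation-and-performance-tuning work this listing describes: partitioning a curated Delta dataset, then compacting it. The paths and columns are invented, OPTIMIZE/ZORDER is a Databricks-specific Delta command, and `spark` is taken to be an active Databricks session:

```python
# Curate a Delta dataset, partition it, and compact it (hypothetical names).
from pyspark.sql import functions as F

orders = spark.read.format("delta").load("/lake/silver/orders")

# Curate: conform column names and derive a date partition column.
curated = (orders
    .withColumnRenamed("amount", "order_amount_gbp")
    .withColumn("order_date", F.to_date("order_ts")))

(curated.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/lake/gold/orders"))

# Databricks-only: compact small files and co-locate rows by a common
# filter column to speed up selective queries.
spark.sql("OPTIMIZE delta.`/lake/gold/orders` ZORDER BY (customer_id)")
```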
Manchester, England, United Kingdom Hybrid / WFH Options
First Central Services
Location: Guernsey, Haywards Heath, Home Office (Remote) or Manchester Salary: £50,000 - £77,500 - depending on experience Department: Technology and Data We’re 1st Central, a market-leading insurance company utilising smart data and technology at pace. Rapid growth has More ❯
About Pret Pret is an international on-the-go food and drink retailer founded in 1986. Our purpose is to make every day a little bit brighter, through organic coffee, freshly prepared food, and exceptional customer service to millions of More ❯
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Client Server
scientific discipline, backed by minimum AAA grades at A-level You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python including PySpark and Pandas You have a good understanding of modern data engineering best practices Ideally you will also have experience with Azure and Databricks You're collaborative with excellent More ❯
Databricks environments and developing lakehouse architectures with a focus on automation, performance tuning, cost optimisation, and system reliability. Proven proficiency in programming languages such as Python, T-SQL, and PySpark, with practical knowledge of test-driven development. Demonstrated capability in building secure, scalable data solutions on Azure with an in-depth understanding of data security and regulatory compliance, using More ❯
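Since this listing pairs PySpark with test-driven development, here is a hedged sketch of a unit-tested transform; the function, columns, and local-mode session are all invented for illustration:

```python
# TDD-style pytest for a PySpark transform (invented names and data).
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def dedupe_latest(df):
    """Keep only the most recent row per customer_id."""
    w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
    return (df.withColumn("rn", F.row_number().over(w))
              .filter("rn = 1").drop("rn"))

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [("c1", "2024-01-01"), ("c1", "2024-02-01"), ("c2", "2024-01-15")],
        ["customer_id", "updated_at"])
    out = dedupe_latest(df).collect()
    assert {(r.customer_id, r.updated_at) for r in out} == {
        ("c1", "2024-02-01"), ("c2", "2024-01-15")}
```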
transaction processing with maintaining and strengthening the modelling standards and business information. Key Responsibilities: Build and optimize Prophecy data pipelines for large-scale batch and streaming data workloads using PySpark Define end-to-end data architecture leveraging Prophecy integrated with Databricks or Spark or other cloud-native compute engines Establish coding standards, reusable components, and naming conventions using Prophecy … exposure to convert legacy ETL tools like DataStage, Informatica into Prophecy pipelines using the Transpiler component of Prophecy Required skills & experience: 2+ years of hands-on experience with the Prophecy (using PySpark) approach 5+ years of experience in data engineering with tools such as Spark, Databricks, Scala/PySpark or SQL Strong understanding of ETL/ELT pipelines, distributed data More ❯
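Prophecy itself is a visual designer, but it compiles pipelines down to ordinary PySpark, typically one function per component. The sketch below mimics that shape with invented names and paths; it is illustrative, not Prophecy-generated code:

```python
# Function-per-component pipeline shape, echoing what Prophecy emits.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

def source_orders(spark: SparkSession) -> DataFrame:
    return spark.read.parquet("/landing/orders")  # hypothetical path

def cleanse(df: DataFrame) -> DataFrame:
    return df.dropDuplicates(["order_id"]).filter(F.col("amount") > 0)

def enrich(df: DataFrame, fx: DataFrame) -> DataFrame:
    # Join reference FX rates and derive a normalised amount.
    return (df.join(fx, "currency", "left")
              .withColumn("amount_gbp", F.col("amount") * F.col("rate")))

def pipeline(spark: SparkSession) -> None:
    fx = spark.read.parquet("/reference/fx_rates")
    out = enrich(cleanse(source_orders(spark)), fx)
    out.write.format("delta").mode("overwrite").save("/lake/gold/orders")

if __name__ == "__main__":
    pipeline(SparkSession.builder.appName("orders_pipeline").getOrCreate())
```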
to be part of a team that’s transforming how data powers retail, this is your opportunity. Your Role (Key Responsibilities) Design, build, and optimise robust data pipelines using PySpark, SparkSQL, and Databricks to ingest, transform, and enrich data from a variety of sources. Translate business requirements into scalable and performant data engineering solutions, working closely with squad members More ❯
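For flavour, here is a minimal sketch of mixing SparkSQL and the DataFrame API in Databricks, the combination this role centres on; the table and column names are invented and `spark` is assumed to be the notebook session:

```python
# SparkSQL for the set-based step, DataFrame API for the enrichment step.
from pyspark.sql import functions as F

sales = spark.table("retail.silver_sales")  # hypothetical table
sales.createOrReplaceTempView("sales")

daily = spark.sql("""
    SELECT store_id, order_date, SUM(net_amount) AS daily_sales
    FROM sales
    GROUP BY store_id, order_date
""")

stores = spark.table("retail.dim_store")
(daily.join(stores, "store_id")
    .withColumn("sales_per_sqft", F.col("daily_sales") / F.col("floor_area"))
    .write.mode("overwrite").saveAsTable("retail.gold_daily_sales"))
```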
Manchester, England, United Kingdom Hybrid / WFH Options
Matillion Limited
presentation skills, with the ability to engage both technical and non-technical stakeholders. Desirable Criteria Experience with Matillion products and competitive ETL solutions. Knowledge of big data technologies (Spark, PySpark), data lakes, and MPP databases (Teradata, Vertica, Netezza). Familiarity with version control tools such as Git, and experience with Python. Degree in Computer Science or related field (or More ❯
Manchester, England, United Kingdom Hybrid / WFH Options
MAG (Airports Group)
commercial, and operations — and this role will have a big say in what we build next. You’ll be responsible for designing and building robust, scalable data pipelines using PySpark, SQL and Databricks — enabling our analytics, BI and data science colleagues to unlock real value across the business. This is a brilliant opportunity for someone who’s passionate about … your expertise further — especially with tools like Databricks. Here’s what will help you thrive in this role: 2–5 years in data engineering or a related field Strong PySpark and advanced SQL skills Practical experience building and maintaining ETL/ELT pipelines in Databricks Familiarity with CI/CD pipelines and version control practices Nice to have: Experience More ❯
Date: 21 Mar 2025 Location: Edinburgh, GB Macclesfield, GB Glasgow, GB Company: Royal London Group Contract Type: Permanent Location: Wilmslow or Edinburgh or Glasgow Working style: Hybrid 50% home/office based The Group Data Office (GDO) is responsible for More ❯
enhance cloud capabilities. Key Skills & Experience: Strong proficiency in SQL and Python. Experience in cloud data solutions (AWS, GCP, or Azure). Experience in AI/ML. Experience with PySpark or equivalent. Strong problem-solving and analytical skills. Excellent attention to detail. Ability to manage stakeholder relationships effectively. Strong communication skills and a collaborative approach. Why Join Us? Work More ❯