cloud providers and collaborate with many technology partners like AWS, GCP, Microsoft, Databricks, Snowflake, Confluent, and others. IF YOU ARE: experienced with Python/PySpark; proficient working with Databricks Lakehouse architecture and principles; have 2+ years of designing data models, building ETL pipelines, and wrangling data to solve business …
and the broader Azure ecosystem. Requirements: proven experience as a Data Engineer working with Microsoft Fabric or related Azure data services; knowledge of using PySpark in notebooks for data analysis and manipulation; strong proficiency with SQL and data modelling; experience with modern ELT/ETL tools within the Microsoft …
ready for deployment. Clear communication and the capacity to articulate technical choices effectively are crucial. Must-Have Skills: 3+ years Databricks (Lakehouse, Delta Lake, PySpark, Spark SQL); 5+ years SQL; Python; Azure; excellent client-facing communication skills; experience deploying Databricks pipelines; experience provisioning Databricks as code. Nice to Have …
Future Talent Pool - GCP Data Engineer, London, hybrid role - digital Google Cloud transformation programme. Proficiency in programming languages such as Python, PySpark and Java to develop ETL processes for data ingestion and preparation; SparkSQL; Cloud Run, Dataflow, Cloud Storage; GCP BigQuery; Google Cloud Platform; Data Studio; Unix/Linux platform; version control tools …
our Nottingham office. As a key member of the engineering team, you'll help design, build, and maintain robust data pipelines using Python and PySpark, enabling powerful analytics and smarter business decisions across the organisation. What You'll Be Doing: design and build scalable ETL/ELT data pipelines … using Python and PySpark; lead and support data migration initiatives across legacy and cloud-based platforms; collaborate with analysts, data scientists, and stakeholders to deliver high-quality, reliable data solutions; ensure best practices in data engineering, including quality, testing, and performance tuning; contribute to the evolution of our data … What We're Looking For: 3+ years' experience as a Data Engineer or in a similar role; strong hands-on experience with Python and PySpark; proven experience in data migration and transformation projects; solid understanding of cloud-based data platforms (e.g. AWS, Azure, GCP – nice to have); strong problem …
Nottingham, Midlands, United Kingdom Hybrid / WFH Options
Accelero
Data Engineer (Azure Python PySpark) Nottingham/WFH to £65k. Are you a tech-savvy Data Engineer seeking an opportunity to join a growing tech company where you can make a real difference and progress your career? You could be joining a rapidly growing Microsoft Solution Partner that provide … to-target mappings and re-engineer manual data flows to enable scaling and repeatable use. You will use a range of technology including Python, PySpark, Azure Synapse Analytics, Azure Data Factory, Microsoft Fabric and Data Flows. Location/WFH: You can work from home most of the time, meeting … be within an hour commute of Nottingham). About you: you have experience as a Data Engineer; you have strong Python coding skills and PySpark experience; you have experience with Azure Data Factory, Azure Synapse Analytics, Data Flows and data migrations; you are collaborative with excellent communication skills, have a …
Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid / WFH Options
Client Server
to-target mappings and re-engineer manual data flows to enable scaling and repeatable use. You will use a range of technology including Python, PySpark, Cloudera, Azure Synapse Analytics, Azure Data Factory, Microsoft Fabric and Data Flows. Location/WFH: You can work from home most of the time … within an hour commute of Nottingham). About you: you have experience as a Senior Data Engineer; you have strong Python coding skills and PySpark experience; you have experience with Azure Data Factory, Azure Synapse Analytics, Data Flows, data migrations, Microsoft Fabric and Cloudera; you have ERP experience, particularly …