focused data team responsible for building and optimising scalable, production-grade data pipelines and infrastructure. Key Responsibilities: Design and implement robust, scalable ETL/ELT pipelines using Databricks and Apache Spark. Ingest, transform, and manage large volumes of data from diverse sources. Collaborate with analysts, data scientists, and business stakeholders to deliver clean, accessible datasets. Ensure high performance … practices. Work with cloud-native tools and services (preferably Azure). Required Skills & Experience: Proven experience as a Data Engineer on cloud-based projects. Strong hands-on skills with Databricks, Apache Spark, and Python or Scala. Proficient in SQL and working with large-scale data environments. Experience with Delta Lake, Azure Data Lake, or similar technologies. Familiarity with version
management and associated tools such as Git/Bitbucket. Experience in the use of CI/CD tools such as Jenkins or an understanding of their role. Experience with Apache Spark or Hadoop. Experience in building data pipelines. Experience of designing warehouses, ETL pipelines and data modelling. Good knowledge in designing, building, using, and maintaining REST APIs. Good …
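As a purely illustrative aside (not part of the advert), a minimal PySpark sketch of the kind of Databricks ELT step these requirements describe; the storage path, column names, and Delta table name are hypothetical placeholders:

```python
# Illustrative sketch only: a small Databricks/PySpark ELT step.
# Paths, schema and table name are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-elt").getOrCreate()

# Ingest raw CSV from a (hypothetical) Azure Data Lake landing zone
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://landing@examplelake.dfs.core.windows.net/orders/"))

# Basic cleansing: drop duplicates, standardise a timestamp column
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_ts", F.to_timestamp("order_ts")))

# Persist as a Delta table for analysts and data scientists downstream
clean.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```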
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Made Tech Limited
Strong experience in Infrastructure as Code (IaC) and deploying infrastructure across environments. Managing cloud infrastructure with a DevOps approach. Handling and transforming various data types (JSON, CSV, etc.) using Apache Spark, Databricks, or Hadoop. Understanding modern data system architectures (Data Warehouse, Data Lakes, Data Meshes) and their use cases. Creating data pipelines on cloud platforms with error handling …
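Again purely for illustration (not from the listing), a short PySpark sketch of combining JSON and CSV inputs as mentioned above; the file locations and column names are hypothetical:

```python
# Illustrative sketch only: handling JSON and CSV inputs in one PySpark job.
# Mount points and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("multi-format-ingest").getOrCreate()

# Read the two raw formats; PERMISSIVE mode keeps malformed JSON rows
# rather than failing the whole job.
events_json = spark.read.option("mode", "PERMISSIVE").json("/mnt/raw/events/*.json")
events_csv = spark.read.option("header", "true").csv("/mnt/raw/events_legacy/*.csv")

# Align on a shared set of columns and combine the two sources
cols = ["event_id", "event_ts"]
combined = (events_json.select(*cols)
            .unionByName(events_csv.select(*cols))
            .withColumn("event_ts", F.to_timestamp("event_ts")))
```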
experience working as a Software Engineer on large software applications. Proficient in many of the following technologies - Python, REST, PyTorch, TensorFlow, Docker, FastAPI, Selenium, React, TypeScript, Redux, GraphQL, Kafka, Apache Spark. Experience working with one or more of the following database systems - DynamoDB, DocumentDB, MongoDB. Demonstrated expertise in unit testing and tools - JUnit, Mockito, PyTest, Selenium. Strong working knowledge …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
First Central Services
control providers, preferably Git. Proven to be an excellent communicator with a clear passion for data & analytics. Deep engineering and database skills in SQL Server, Azure Synapse, Data Factory, Spark compute and Databricks technologies. Experience of developing coding standards and delivery methods that others follow. Experience of testing techniques, tools and approaches. Extensive experience of the full data lifecycle.
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Northrop Grumman Corp. (JP)
developing & deploying scalable backend systems. Familiarity with CI/CD, containerisation, deployment technologies & cloud platforms (Jenkins, Kubernetes, Docker, AWS) or familiarity with Big Data and Machine Learning technologies (NumPy, PyTorch, TensorFlow, Spark). Excellent communication, collaboration & problem solving skills, ideally with some experience in agile ways of working. Security clearance: You must be able to gain and maintain the highest level …
Research/Statistics or other quantitative fields. Experience in NLP, image processing and/or recommendation systems. Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop. Experience in data science for e-commerce and/or OTA. We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance …
What you will bring: Deep technical expertise in at least one data ecosystem (e.g. Oracle, SQL Server, Cloud Data platforms) plus real-time data streaming/processing technologies (Kafka, Spark). Data modelling and governance proficiency, covering conceptual, logical, and physical architectures, metadata management. Cloud data technologies: experience architecting solutions on AWS, Azure, or GCP, optimising cost, performance, and …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Manchester Digital
key abilities or experience: Deep technical expertise in at least one data ecosystem (e.g. Oracle, SQL Server, Cloud Data platforms) plus real-time data streaming/processing technologies (Kafka, Spark). Data modelling and governance proficiency, covering conceptual, logical, and physical architectures, metadata management. Cloud data technologies: experience architecting solutions on AWS, Azure, or GCP, optimising cost, performance, and …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
that help us achieve our objectives. So each team leverages the technology that fits their needs best. You'll see us working with data processing/streaming like Kinesis, Spark and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load, all deployed and hosted on …