FTPS), and remediation of security vulnerabilities (DAST, Azure Defender). Expertise in Python for writing efficient code and maintaining reusable libraries. Experienced with microservice design patterns and with Databricks/Spark for big data processing. Strong knowledge of SQL/NoSQL databases and corresponding ELT workflows. Excellent problem-solving, communication, and collaboration skills in fast-paced environments. 3 years’ professional experience …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Made Tech Limited
- Strong experience in Infrastructure as Code (IaC) and deploying infrastructure across environments
- Managing cloud infrastructure with a DevOps approach
- Handling and transforming various data types (JSON, CSV, etc.) using Apache Spark, Databricks, or Hadoop
- Understanding modern data system architectures (Data Warehouse, Data Lakes, Data Meshes) and their use cases
- Creating data pipelines on cloud platforms with error handling …
experience working as a Software Engineer on large software applications. Proficient in many of the following technologies - Python, REST, PyTorch, TensorFlow, Docker, FastAPI, Selenium, React, TypeScript, Redux, GraphQL, Kafka, Apache Spark. Experience working with one or more of the following database systems - DynamoDB, DocumentDB, MongoDB. Demonstrated expertise in unit testing and tools - JUnit, Mockito, PyTest, Selenium. Strong working knowledge …
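As a rough illustration of the stack named in this listing, here is a minimal sketch of a FastAPI endpoint with a PyTest-style check; the route, model fields, and test values are hypothetical rather than anything the employer specifies.

```python
# Minimal sketch: a FastAPI endpoint plus a PyTest-style unit test.
# Route, model fields, and test payload are illustrative assumptions.
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item(item: Item) -> dict:
    # Echo the validated payload back; a real service would persist it.
    return {"name": item.name, "price": item.price}

# In-process test client, runnable under pytest.
client = TestClient(app)

def test_create_item():
    response = client.post("/items", json={"name": "widget", "price": 9.99})
    assert response.status_code == 200
    assert response.json()["name"] == "widget"
```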
of a forward-thinking company where data is central to strategic decision-making. We’re looking for someone who brings hands-on experience in streaming data architectures, particularly with Apache Kafka and Confluent Cloud, and is eager to shape the future of scalable, real-time data pipelines. You’ll work closely with both the core Data Engineering team and … the Data Science function, bridging the gap between model development and production-grade data infrastructure. What You’ll Do: Design, build, and maintain real-time data streaming pipelines using Apache Kafka and Confluent Cloud. Architect and implement robust, scalable data ingestion frameworks for batch and streaming use cases. Collaborate with stakeholders to deliver high-quality, reliable datasets to live … experience in a Data Engineering or related role. Strong experience with streaming technologies such as Kafka, Kafka Streams, and/or Confluent Cloud (must-have). Solid knowledge of Apache Spark and Databricks. Proficiency in Python for data processing and automation. Familiarity with NoSQL technologies (e.g., MongoDB, Cassandra, or DynamoDB). Exposure to machine learning pipelines or close …
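For flavour, the following is a minimal sketch of the kind of streaming-pipeline work this listing describes, using the confluent-kafka Python client. The broker address, topic name, and event shape are assumptions; a real Confluent Cloud setup would also supply API-key credentials in the producer config.

```python
# Minimal sketch: publish JSON events to a Kafka topic with confluent-kafka.
# Broker address, topic name, and event fields are illustrative assumptions.
import json
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "localhost:9092",  # Confluent Cloud would add security/API-key settings here
}
producer = Producer(conf)

def on_delivery(err, msg):
    # Called asynchronously once the broker acknowledges (or rejects) the message.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

event = {"event_id": 123, "type": "page_view", "user_id": "u-42"}
producer.produce(
    "events",                      # hypothetical topic name
    key=event["user_id"],
    value=json.dumps(event),
    callback=on_delivery,
)
producer.flush()  # block until outstanding messages are delivered
```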
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
First Central Services
control providers, preferably Git. Proven to be an excellent communicator with a clear passion for data & analytics. Deep engineering and database skills in SQL Server, Azure Synapse, Data Factory, Spark compute, and Databricks technologies. Experience of developing coding standards and delivery methods that others follow. Experience of testing techniques, tools and approaches. Extensive experience of the full data lifecycle.
Manchester, North West, United Kingdom Hybrid / WFH Options
TALENT INTERNATIONAL UK LTD
data systems using Azure Data Lake, Blob Storage, Cosmos DB, and SQL Database
- Build and manage ETL pipelines with Azure Data Factory
- Develop big data workflows using Databricks and Apache Spark
- Write and maintain infrastructure code using Terraform (not just using modules; writing the code is essential)
- Monitor and improve cloud performance, security, and documentation
- Mentor junior engineers …
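As an illustrative sketch of the Databricks/Spark workflow this listing mentions (not the employer's actual pipeline), the PySpark snippet below reads raw CSV from a hypothetical Azure Data Lake path, cleanses it, and writes a curated Delta table.

```python
# Minimal PySpark sketch of a Databricks-style batch workflow.
# Storage paths, column names, and the Delta output choice are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-sales-load").getOrCreate()

# Read raw CSV landed in Azure Data Lake (hypothetical container/path).
raw = (
    spark.read.option("header", True)
    .csv("abfss://landing@examplelake.dfs.core.windows.net/sales/")
)

# Basic cleansing: type casting, de-duplication, and filtering out bad rows.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
)

# Write the curated output as Delta for downstream consumers.
(
    cleaned.write.mode("overwrite")
    .format("delta")
    .save("abfss://curated@examplelake.dfs.core.windows.net/sales/")
)
```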
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Northrop Grumman Corp. (JP)
developing & deploying scalable backend systems. Familiarity with CI/CD, containerisation, deployment technologies & cloud platforms (Jenkins, Kubernetes, Docker, AWS), or familiarity with Big Data and Machine Learning technologies (NumPy, PyTorch, TensorFlow, Spark). Excellent communication, collaboration & problem-solving skills, ideally with some experience in agile ways of working. Security clearance: You must be able to gain and maintain the highest level …
Research/Statistics or other quantitative fields. Experience in NLP, image processing, and/or recommendation systems. Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop. Experience in data science for e-commerce and/or OTA. We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance …
Salford, Manchester, United Kingdom Hybrid / WFH Options
Manchester Digital
the ability to pivot strategies in response to innovative technologies, insights, or regulatory developments. Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Snowflake, Spark). Strong communication skills, with the ability to distill complex data concepts into clear messages for non-technical stakeholders. Excellent stakeholder management and cross-functional collaboration skills, with the …
What you will bring: Deep technical expertise in at least one data ecosystem (e.g. Oracle, SQL Server, Cloud Data platforms) plus real-time data streaming/processing technologies (Kafka, Spark). Data modelling and governance proficiency, covering conceptual, logical, and physical architectures and metadata management. Cloud data technologies: experience architecting solutions on AWS, Azure, or GCP, optimising cost, performance, and …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Manchester Digital
key abilities or experience: Deep technical expertise in at least one data ecosystem (e.g. Oracle, SQL Server, Cloud Data platforms) plus real-time data streaming/processing technologies (Kafka, Spark). Data modelling and governance proficiency, covering conceptual, logical, and physical architectures and metadata management. Cloud data technologies: experience architecting solutions on AWS, Azure, or GCP, optimising cost, performance, and …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
that help us achieve our objectives. So each team leverages the technology that fits their needs best. You'll see us working with data processing/streaming technologies like Kinesis, Spark, and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load, all deployed and hosted on …
a Product Owner, ideally working within a role in a data-rich environment or similar. Solid understanding of data engineering concepts, including data pipelines, cloud and big data platforms (e.g. GCP, Hadoop, Spark), and data governance. Proven ability to drive delivery in agile environments, handling competing priorities and tight deadlines. Excellent collaborative leadership skills, with the ability to influence across technical and …
Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) workflows to move data from source systems to data stores.
* Experience with one or more of the following supporting technologies: Apache Kafka, NiFi, Spark, Flink, or Airflow, etc.
* Past experience working with SQL and NoSQL databases (e.g. PostgreSQL, Mongo, Elasticsearch, Accumulo or Neo4j).
* Hands-on experience with distributed …
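To make the ETL/ELT orchestration mentioned above concrete, here is a minimal hedged sketch of an Airflow 2.x DAG wiring extract, load, and transform steps; the DAG id, schedule, and task bodies are placeholders, not anything specified in the listing.

```python
# Minimal sketch of an ELT-style Airflow DAG (Airflow 2.x assumed).
# The DAG id, schedule, and task implementations are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull data from a source system (placeholder).
    print("extracting")

def load():
    # Land the raw data in the target store (placeholder).
    print("loading")

def transform():
    # Transform in place in the warehouse, ELT-style (placeholder).
    print("transforming")

with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    # Extract, load, then transform: the ELT ordering named above.
    t_extract >> t_load >> t_transform
```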