London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
AI to move faster and smarter. You will be experienced in AI and enjoy writing code. Responsibilities: Build and maintain scalable distributed systems using Scala and Java. Design complex Spark jobs, asynchronous APIs, and parallel processes. Use Gen AI tools to enhance development speed and quality. Collaborate in Agile teams to improve their data collection pipelines. Apply best practices … structures, algorithms, and design patterns effectively. Foster empathy and collaboration within the team and with customers. Preferred Experience: Degree in Computer Science or equivalent practical experience. Commercial experience with Spark, Scala, and Java (Python is a plus). Strong background in distributed systems (Hadoop, Spark, AWS). Skilled in SQL/NoSQL (PostgreSQL, Cassandra) and messaging tech (Kafka, RabbitMQ). Experience More ❯
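For illustration, a minimal PySpark sketch of the kind of distributed Spark job this role describes. The listing names Scala and Java with Python as a plus; Python is used here to keep all examples in one language, and the input path and column names are hypothetical.

```python
# Hypothetical sketch only: a simple aggregation job of the kind the listing
# describes. Spark parallelises the read, filter, and groupBy across executors.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-order-totals").getOrCreate()

# Hypothetical input path and schema.
orders = spark.read.parquet("s3://example-bucket/orders/")

daily_totals = (
    orders.filter(F.col("status") == "COMPLETE")
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

daily_totals.write.mode("overwrite").parquet("s3://example-bucket/daily-totals/")
spark.stop()
```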
data architects, analysts, and stakeholders, you'll help unlock the value of data across the organisation. Key Responsibilities: Develop and optimise data pipelines using Azure Data Factory, Databricks, and Spark. Design and implement scalable data solutions in Azure cloud environments. Collaborate with cross-functional teams to understand data requirements. Ensure data quality, integrity, and security across platforms. Support the … models and advanced analytics. Monitor and troubleshoot data workflows and performance issues. Requirements: Proven experience with Azure Data Services (Data Factory, Databricks, Synapse). Strong knowledge of Python, SQL, and Spark. Experience with data modelling, ETL/ELT processes, and pipeline orchestration. Familiarity with CI/CD and DevOps practices in a data engineering context. Excellent communication and stakeholder engagement More ❯
Liverpool, Merseyside, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
data architects, analysts, and stakeholders, you'll help unlock the value of data across the organisation. Key Responsibilities: Develop and optimise data pipelines using Azure Data Factory, Databricks, and Spark. Design and implement scalable data solutions in Azure cloud environments. Collaborate with cross-functional teams to understand data requirements. Ensure data quality, integrity, and security across platforms. Support the … models and advanced analytics. Monitor and troubleshoot data workflows and performance issues. Requirements: Proven experience with Azure Data Services (Data Factory, Databricks, Synapse). Strong knowledge of Python, SQL, and Spark. Experience with data modelling, ETL/ELT processes, and pipeline orchestration. Familiarity with CI/CD and DevOps practices in a data engineering context. Excellent communication and stakeholder engagement More ❯
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
data architects, analysts, and stakeholders, you'll help unlock the value of data across the organisation. Key Responsibilities: Develop and optimise data pipelines using Azure Data Factory, Databricks, and Spark. Design and implement scalable data solutions in Azure cloud environments. Collaborate with cross-functional teams to understand data requirements. Ensure data quality, integrity, and security across platforms. Support the … models and advanced analytics. Monitor and troubleshoot data workflows and performance issues. Requirements: Proven experience with Azure Data Services (Data Factory, Databricks, Synapse). Strong knowledge of Python, SQL, and Spark. Experience with data modelling, ETL/ELT processes, and pipeline orchestration. Familiarity with CI/CD and DevOps practices in a data engineering context. Excellent communication and stakeholder engagement More ❯
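The "data quality, integrity, and security" responsibility in the listings above can be made concrete with a small sketch. This is one possible pattern, not the employer's actual pipeline; the table and column names are invented, and the same check could equally run against Databricks-managed tables in an Azure Data Factory workflow.

```python
# Hedged sketch of a data-quality gate in a PySpark pipeline: validate key
# integrity before publishing, and fail fast instead of propagating bad data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality-gate").getOrCreate()

# Hypothetical source table (e.g. a raw layer in Databricks).
df = spark.read.table("raw.customers")

null_keys = df.filter(F.col("customer_id").isNull()).count()
dupe_keys = df.count() - df.dropDuplicates(["customer_id"]).count()

if null_keys or dupe_keys:
    # Abort the run so downstream consumers never see the bad batch.
    raise ValueError(
        f"Quality gate failed: {null_keys} null keys, {dupe_keys} duplicates"
    )

df.write.mode("overwrite").saveAsTable("curated.customers")
```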
scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet the … Collaborative, team-based development; Cloud analytics platforms e.g. relevant AWS and Azure platform services; Data tools: hands-on experience with Palantir ESSENTIAL; Data science approaches and tooling e.g. Hadoop, Spark; Software development methods and techniques e.g. Agile methods such as SCRUM; Software change management, notably familiarity with git; Public sector best practice guidance, e.g. ITIL, OGC toolkit. Additional Requirements More ❯
Guildford, Surrey, United Kingdom Hybrid / WFH Options
Actica Consulting
scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet the … Collaborative, team-based development; Cloud analytics platforms e.g. relevant AWS and Azure platform services; Data tools: hands-on experience with Palantir ESSENTIAL; Data science approaches and tooling e.g. Hadoop, Spark; Software development methods and techniques e.g. Agile methods such as SCRUM; Software change management, notably familiarity with git; Public sector best practice guidance, e.g. ITIL, OGC toolkit. Additional Requirements More ❯
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Actica Consulting
scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet the … Collaborative, team-based development; Cloud analytics platforms e.g. relevant AWS and Azure platform services; Data tools: hands-on experience with Palantir ESSENTIAL; Data science approaches and tooling e.g. Hadoop, Spark; Software development methods and techniques e.g. Agile methods such as SCRUM; Software change management, notably familiarity with git; Public sector best practice guidance, e.g. ITIL, OGC toolkit. Additional Requirements More ❯
technology to solve a given problem. Right now, we use: • A variety of languages, including Java and Go for backend and TypeScript for frontend • Open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux • Industry-standard build tooling, including Gradle, CircleCI, and GitHub. What We Value: Passion for helping other developers build better applications. Empathy for the impact your More ❯
with MLOps practices and model deployment pipelines. Proficient in cloud AI services (AWS SageMaker/Bedrock). Deep understanding of distributed systems and microservices architecture. Expert in data pipeline platforms (Apache Kafka, Airflow, Spark). Proficient in both SQL (PostgreSQL, MySQL) and NoSQL (Elasticsearch, MongoDB) databases. Strong containerization and orchestration skills (Docker, Kubernetes). Experience with infrastructure as code (Terraform, CloudFormation More ❯
you prefer. Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python. Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with More ❯
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
you prefer. Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python. Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
you prefer. Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python. Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
you prefer. Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python. Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with More ❯
Birmingham, West Midlands, West Midlands (County), United Kingdom Hybrid / WFH Options
Tenth Revolution Group
you prefer. Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python. Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with More ❯
data pipelines within enterprise-grade on-prem systems. Key Responsibilities: Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimise workflows using Apache Airflow and Spark Streaming for real-time data processing. Develop robust data engineering solutions using Python for automation and transformation. Collaborate with infrastructure and analytics teams to support … platform. Ensure compliance with enterprise security and data governance standards. Required Skills & Experience: Minimum 5 years of experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure to data analytics, preferably involving infrastructure or operational data. Experience More ❯
West Midlands, United Kingdom Hybrid / WFH Options
Experis
data pipelines within enterprise-grade on-prem systems. Key Responsibilities: Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimise workflows using Apache Airflow and Spark Streaming for real-time data processing. Develop robust data engineering solutions using Python for automation and transformation. Collaborate with infrastructure and analytics teams to support … platform. Ensure compliance with enterprise security and data governance standards. Required Skills & Experience: Minimum 5 years of experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure to data analytics, preferably involving infrastructure or operational data. Experience More ❯
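As a sketch of the "Spark Streaming for real-time data processing" requirement above: the example below uses the newer Structured Streaming API rather than the legacy DStream API, and assumes a Kafka source purely for illustration. The broker, topic, schema, and paths are hypothetical, and the job needs the spark-sql-kafka connector on the classpath.

```python
# Illustrative Structured Streaming job: consume JSON events from Kafka,
# parse them against a schema, and land them as Parquet with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")                    # hypothetical sink
    .option("checkpointLocation", "/checkpoints/events")
    .start()
)
query.awaitTermination()
```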
Columbia, Maryland, United States Hybrid / WFH Options
Codescratch LLC
pipelines. Understanding of AGILE software development methodologies and use of standard software development tool suites. Preferred Skills and Experience: Experience with Docker and Kubernetes. Experience with Hadoop. Experience with Spark. Experience with Accumulo. Experience monitoring application performance with metrics (Prometheus, InfluxDB, Grafana) and logs with the ELK Stack (Elasticsearch, Logstash, Kibana). Experience with asynchronous messaging systems (RabbitMQ, Apache Kafka More ❯
Leeds, England, United Kingdom Hybrid / WFH Options
Anson McCade
Lead Data Engineer Location: Leeds (hybrid) Salary: Up to £70,000 (depending on experience) + bonus Clearance Requirement: Candidates must be eligible for UK National Security Vetting. We're looking for an experienced Lead Data Engineer to join a fast More ❯
Bradford, Yorkshire and the Humber, United Kingdom Hybrid / WFH Options
Anson McCade
Lead Data Engineer Location: Leeds (hybrid) Salary: Up to £70,000 (depending on experience) + bonus Clearance Requirement: Candidates must be eligible for UK National Security Vetting. We're looking for an experienced Lead Data Engineer to join a fast More ❯
Columbia, South Carolina, United States Hybrid / WFH Options
Systemtec Inc
technologies and cloud-based technologies: AWS services, State Machines, CDK, Glue, TypeScript, CloudWatch, Lambda, CloudFormation, S3, Glacier Archival Storage, DataSync, Lake Formation, AppFlow, RDS PostgreSQL, Aurora, Athena, Amazon MSK, Apache Iceberg, Spark, Python. ONSITE: Partially onsite 3 days per week (Tue, Wed, Thurs) and as needed. Standard work hours: 8:30 AM - 5:00 PM. Required Qualifications of More ❯
Nottingham, Nottinghamshire, United Kingdom Hybrid / WFH Options
Rullion - Eon
Join our client in embarking on an ambitious data transformation journey using Databricks, guided by best practice data governance and architectural principles. To support this, we are recruiting for talented data engineers. As a major UK energy provider, our client More ❯
equivalent education) in a STEM discipline. Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate. Hands-on experience in Java, Spark, Scala (or Java). Production-scale, hands-on experience writing data pipelines using Spark or any other distributed real-time/batch processing framework. Strong skill set in SQL More ❯
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
leading innovative technical projects. As part of this role, you will be responsible for some of the following areas: Design and build distributed data pipelines using technologies such as Spark, Scala, and Java. Collaborate with cross-functional teams to deliver user-centric solutions. Lead on the design and development of relational and non-relational databases. Apply Gen AI tools … scale data collection processes. Support the deployment of machine learning models into production. To be successful in the role you will have: Experience creating scalable ETL jobs using Scala and Spark. A strong understanding of data structures, algorithms, and distributed systems. Experience working with orchestration tools such as Airflow. Familiarity with cloud technologies (AWS or GCP). Hands-on experience with Gen More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
leading innovative technical projects. As part of this role, you will be responsible for some of the following areas: Design and build distributed data pipelines using technologies such as Spark, Scala, and Java. Collaborate with cross-functional teams to deliver user-centric solutions. Lead on the design and development of relational and non-relational databases. Apply Gen AI tools … scale data collection processes. Support the deployment of machine learning models into production. To be successful in the role you will have: Experience creating scalable ETL jobs using Scala and Spark. A strong understanding of data structures, algorithms, and distributed systems. Experience working with orchestration tools such as Airflow. Familiarity with cloud technologies (AWS or GCP). Hands-on experience with Gen More ❯
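To ground the Airflow orchestration requirement above, here is a minimal sketch of a DAG submitting a Scala/Spark ETL job. The jar path, main class, and schedule are invented for the example, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Hypothetical Airflow DAG: submit a compiled Scala/Spark ETL job once a day,
# passing the logical date so the job can process the matching partition.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="spark_etl",
        bash_command=(
            "spark-submit --master yarn "
            "--class com.example.etl.DailyJob "    # hypothetical main class
            "/opt/jobs/etl-assembly.jar {{ ds }}"  # hypothetical jar + date
        ),
    )
```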