Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
…you prefer. Exceptional Benefits: From unlimited holiday and private healthcare to stock options and paid parental leave. What You'll Be Doing: Build and maintain scalable data pipelines using Spark with Scala and Java, and support tooling in Python. Design low-latency APIs and asynchronous processes for high-volume data. Collaborate with Data Science and Engineering teams to deploy … Contribute to the development of Gen AI agents in-product. Apply best practices in distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with …
…Engineers work alongside Machine Learning Engineers, BI Developers and Data Scientists in cross-functional teams with high impact and a clear vision. Using your skills with SQL, Python, data modelling and Spark, you will ingest and transform high-volume, complex raw event data into user-friendly, high-impact tables. As a department we strive to give our Data Engineers high levels … Be deploying applications to the Cloud (AWS) We'd love to hear from you if you Have strong experience with Python & SQL Have experience developing data pipelines using dbt, Spark and Airflow Have experience in data modelling (building optimised and efficient data marts and warehouses in the cloud) Work with Infrastructure as Code (Terraform) and containerised applications (Docker) Work with …
It has come to our notice that Fractal Analytics' name and logo are being misused by certain unscrupulous persons masquerading as Fractal's authorized representatives, who approach job seekers and induce them to part with sensitive personal information and/or money in …
West Midlands, United Kingdom Hybrid / WFH Options
Experis
…data pipelines within enterprise-grade on-prem systems. Key Responsibilities: Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimise workflows using Apache Airflow and Spark Streaming for real-time data processing. Develop robust data engineering solutions using Python for automation and transformation. Collaborate with infrastructure and analytics teams to support … platform. Ensure compliance with enterprise security and data governance standards. Required Skills & Experience: Minimum 5 years of experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure to data analytics, preferably involving infrastructure or operational data. Experience …
…with knowledge of messaging systems, backed by a comprehensive background across a variety of middleware technologies, including commercial, open-source, and custom-built solutions. You will have experience with Apache Kafka, including administration, configuration, and troubleshooting in production environments. You'll have the opportunity to: Design, develop and implement the Kafka ecosystem by creating a framework for using technologies … other streaming-oriented technology. Help build the DevOps strategy for hosting and managing our SDP microservice and connector infrastructure in AWS cloud. Design and implement big data technologies around Apache Hadoop, Kafka streaming, NoSQL, Java/J2EE and distributed computing platforms. Participate in Agile development projects for enterprise-level systems component design and implementation. Apply enterprise software design … including S3, EFS, MSK, ECS, and EMR. Experience with RDBMS. Experience with Jenkins CI/CD pipelines. Bachelor's degree in a technical discipline. Plus: Knowledge of Hadoop/Spark and various data formats like Parquet, CSV, etc. Additional Information Benefits/Perks: Great compensation package and bonus plan Core benefits including medical, dental, vision, and matching 401K Flexible …
Leeds, England, United Kingdom Hybrid / WFH Options
Anson McCade
Lead Data Engineer Location: Leeds (hybrid) Salary: Up to £70,000 (depending on experience) + bonus Clearance Requirement: Candidates must be eligible for UK National Security Vetting. We're looking for an experienced Lead Data Engineer to join a fast …
…within enterprise-grade on-prem systems. Job Responsibilities/Objectives Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure. Build and optimize workflows using Apache Airflow and Spark Streaming for real-time data processing. Develop robust data engineering solutions using Python for automation and transformation. Collaborate with infrastructure and analytics teams to support … security and data governance standards. Required Skills/Experience The ideal candidate will have the following: Strong experience in Hadoop and data engineering. Strong hands-on experience with Python, Apache Airflow, and Spark Streaming. Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments. Exposure to data analytics, preferably involving infrastructure or operational data. Experience …
…part of an Agile engineering or development team Strong hands-on experience and understanding of working in a cloud environment such as AWS Experience with EMR (Elastic MapReduce), Spark Strong experience with CI/CD pipelines with Jenkins Experience with the following technologies: SpringBoot, Gradle, Terraform, Ansible, GitHub/GitFlow, PCF/OCP/Kubernetes technologies, Artifactory, IaC …
Nottingham, Nottinghamshire, United Kingdom Hybrid / WFH Options
Rullion - Eon
Join our client in embarking on an ambitious data transformation journey using Databricks, guided by best practice data governance and architectural principles. To support this, we are recruiting talented data engineers. As a major UK energy provider, our client …
…frameworks (e.g., TensorFlow, PyTorch, XGBoost). Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments (e.g., Apache Beam, Hadoop, Spark, Pig, Hive, MapReduce, Flume). Understanding of contact center technologies and platforms (e.g., Avaya, Genesys, Cisco, Mitel, Twilio, etc.). Understanding of the practical concerns …
…roles 5+ years of experience in big data technology, with experience ranging from platform architecture, data management and data architecture to application architecture High proficiency working with the Hadoop platform, including Spark/Scala, Kafka, Spark SQL, HBase, Impala, Hive and HDFS in multi-tenant environments Solid base in data technologies like warehousing, ETL, MDM, DQ, BI and analytical tools; extensive experience … of distributed, fault-tolerant applications with attention to security, scalability, performance, availability and optimization Requirements 4+ years of hands-on experience in designing, building and supporting Hadoop applications using Spark, Scala, Sqoop and Hive. Strong knowledge of working with large data sets and high-capacity big data processing platforms. Strong experience in Unix and shell scripting. Experience using Source …
… Manage and monitor the cost, efficiency, and speed of data processing. Our Data Tech Stack Azure Cloud (SQL Server, Databricks, Cosmos DB, Blob Storage) ETL/ELT (Python, Spark, SQL) Messaging (Service Bus, Event Hub) DevOps (Azure DevOps, GitHub Actions, Terraform) Who you are A driven, ambitious individual who's looking to build their career at an exciting … building and maintaining robust and scalable data pipelines Proficiency in ELT and ETL processes and tools Ability to write efficient code for data extraction, transformation, and loading (e.g. Python, Spark and SQL) Proficiency with cloud platforms (particularly Azure Databricks and SQL Server) Ability to work independently Ability to communicate complex technical concepts clearly to both technical and non-technical …
…leasing companies, as well as driving data-driven decision making for Cox Automotive. You'll collaborate with a talented team, using open-source tools such as R, Python, and Spark, data visualisation tools like Power BI, and the Databricks data platform. Key Responsibilities: Develop and implement analytics strategies that provide actionable insights for our business and clients. Apply the scientific … managing workloads and meeting project deadlines. Strong collaborative spirit, working seamlessly with team members and external clients. Proficiency in R or Python. Solid understanding of SQL; experience working with Spark (Java, Python, or Scala variants) and cloud platforms like Databricks is a plus. Strong statistical knowledge, including hypothesis testing, confidence intervals, and A/B testing. Ability to understand …
Data Engineer/Technical Support Engineer - Client Facing (Remote - UK) Location: 3 days per week in the office (Office in Sheffield, UK) Contract: 6-Month Contract Rate: £400 per day - Inside IR35 Role Overview: We are looking for a highly …
…to ensure code is fit for purpose Experience that will put you ahead of the curve Experience using Python on Google Cloud Platform for Big Data projects: BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer SQL development skills Experience using Dataform or dbt Demonstrated strength in data modelling, ETL development, and data warehousing Knowledge … to be a part of it! Our Future, Our Responsibility - Inclusion and Diversity at Future We embrace and celebrate diversity, making it part of who we are. Different perspectives spark ideas, fuel creativity, and push us to innovate. That's why we're building a workplace where everyone feels valued, respected, and empowered to thrive. When it comes to …
…to build pipelines for ingesting Risk data across various systems and provide access to Risk data to downstream consumers including Finance, Back Office and Accounting systems. The role of the Spark Developer is to understand the existing data flows and requirements and to produce the technical design/architecture. Robust technical frameworks need to be built which will be … the role provided they have the necessary skills and experience Development experience with expertise in the following: Java server-side development working on low-latency applications Financial background preferable Spark expertise (micro-batching, EOD/real-time) Python In-memory databases SQL skills & RDBMS concepts Linux Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) Python, R or equivalent scripting language …
…equivalent education) in a STEM discipline. Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate. Hands-on experience in Java, Spark, Scala (or Java). Production-scale, hands-on experience writing data pipelines using Spark or any other distributed real-time/batch processing framework. Strong skill set in SQL …
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
…leading innovative technical projects. As part of this role, you will be responsible for some of the following areas: Design and build distributed data pipelines using technologies such as Spark, Scala, and Java Collaborate with cross-functional teams to deliver user-centric solutions Lead on the design and development of relational and non-relational databases Apply Gen AI tools … scale data collection processes Support the deployment of machine learning models into production To be successful in the role you will have: Experience creating scalable ETL jobs using Scala and Spark Strong understanding of data structures, algorithms, and distributed systems Experience working with orchestration tools such as Airflow Familiarity with cloud technologies (AWS or GCP) Hands-on experience with Gen AI …
…visualization and using geospatial tools such as ESRI tools and platforms, to support analysis or visualization Experience with PAI Experience in Python or Java, and using tools such as Spark or Databricks, to support large-scale data processing, analysis, and visualization Experience with machine learning, automation, and artificial intelligence techniques applied to data analysis Experience working with …