services as well as customer deployments. Real-time data pipelines and edge computing are key pillars of the Ferry platform, which we support by augmenting Apache Flink and cloud IoT platforms. Who you are: 7+ years as a Backend Engineer. Thorough understanding of and experience with Java. Deep … thorough understanding of Apache Flink. Experience with Kafka. Comprehensive knowledge of and experience with building, testing, and deploying APIs. Comprehensive knowledge of design patterns and development best practices. Comprehensive knowledge of object-oriented design, data structures, algorithms, and problem solving. Deep understanding and knowledge of testing frameworks. Thorough knowledge of Git and …
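The Flink and Kafka requirements above amount to stream-processing work in practice. Below is a minimal PyFlink sketch for orientation (Python is used for all examples in this section); the sensor data and the doubling transformation are illustrative assumptions, not anything taken from the role itself.

```python
# Minimal PyFlink streaming sketch. A real job for this kind of role would
# read from Kafka and write to a sink; a bounded in-memory source stands in
# here so the example is self-contained.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

# Hypothetical sensor readings standing in for a Kafka-backed stream.
readings = env.from_collection([("sensor-1", 3.0), ("sensor-2", 7.5)])

# Per-event transformation: double each reading's value, then print.
readings.map(lambda r: (r[0], r[1] * 2.0)).print()

env.execute("streaming-sketch")
```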
microservices. Integrate with and optimize data persistence layers using MongoDB (preferably MongoDB Atlas), Redis, or DynamoDB. Implement distributed caching strategies with Redis, Hazelcast, or Apache Ignite. Work closely with DevOps to containerize applications using Docker. Must-Have Qualifications: 5+ years of hands-on experience in backend development with Java … Production experience with NoSQL databases (MongoDB preferred; DynamoDB, Redis, or similar are a plus). Experience with distributed caching systems such as Redis, Hazelcast, or Apache Ignite. Proficiency with Docker and containerized deployments. Experience with cloud-based environments (AWS, GCP, or Azure; MongoDB Atlas a strong plus) …
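As one illustration of the distributed-caching responsibility above, here is a hedged cache-aside sketch using the Python redis client; the key naming, TTL, and the load_user_from_db helper are hypothetical stand-ins, not part of the posting.

```python
# Cache-aside pattern with Redis: check the cache first, fall back to the
# database on a miss, then populate the cache with a TTL.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def load_user_from_db(user_id: str) -> dict:
    # Hypothetical database lookup; stands in for MongoDB/DynamoDB access.
    return {"id": user_id, "name": "example"}

def get_user(user_id: str, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    user = load_user_from_db(user_id)      # cache miss: load from the DB
    r.setex(key, ttl_seconds, json.dumps(user))
    return user
```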
have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark. You care about agile software processes, data-driven development, reliability, and disciplined experimentation. You have experience and passion for fostering collaborative … Platform is a plus. Experience building data pipelines and getting the data you need to build and evaluate your models, using tools like Apache Beam or Spark, is a plus. Where You'll Be: This role is based in London (UK). We offer you the flexibility to …
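For context on the Dataflow/Beam/Spark tooling named above, a minimal Apache Beam pipeline in Python might look like the sketch below; the word-count logic and input strings are illustrative assumptions.

```python
# Minimal Apache Beam pipeline run locally with the DirectRunner; on GCP
# the same pipeline could run on Dataflow by switching the runner option.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["build models", "evaluate models"])
        | "Split" >> beam.FlatMap(str.split)          # one element per word
        | "Pair" >> beam.Map(lambda word: (word, 1))  # (word, 1) pairs
        | "Count" >> beam.CombinePerKey(sum)          # word counts
        | "Print" >> beam.Map(print)
    )
```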
London, England, United Kingdom Hybrid / WFH Options
Focus on SAP
Hybrid Languages: English Key skills: 5+ years as a Data Engineer. Proven expertise in Databricks (including Delta Lake, Workflows, Unity Catalog). Strong command of Apache Spark, SQL, and Python. Hands-on experience with cloud platforms (AWS, Azure, or GCP). Understanding of modern data architectures (e.g., Lakehouse, ELT/… Right to work in the UK is a must (no sponsorship available). Responsibilities: Design, build, and maintain scalable and efficient data pipelines using Databricks and Apache Spark. Collaborate with Data Scientists, Analysts, and Product teams to understand data needs and deliver clean, reliable datasets. Optimize data workflows and storage (Delta …
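As a rough illustration of the Databricks/Spark pipeline work described above, the sketch below filters a raw dataset and appends it to a Delta table. The paths and column names are hypothetical, and the delta format assumes a Databricks runtime or the delta-spark package.

```python
# Illustrative PySpark pipeline step: read raw JSON, keep valid rows, and
# append the result to a Delta table. Paths and columns are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

raw = spark.read.json("/mnt/raw/events")           # hypothetical source path
clean = raw.filter(F.col("event_id").isNotNull())  # drop malformed rows

# Delta writes require Databricks or the delta-spark package.
clean.write.format("delta").mode("append").save("/mnt/curated/events")
```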
technical leadership role on projects and experience with transitioning projects into a support program. Experience with Google Cloud Platform (GCP) services, including Cloud Composer (Apache Airflow) for workflow orchestration. Strong Python skills, with demonstrable experience developing and maintaining data pipelines and automating data workflows. Proficiency in SQL … e.g., Git). Strong expertise in Python, with a particular focus on libraries and tools commonly used in data engineering, such as Pandas, NumPy, and Apache Airflow. Experience with data pipelines, ELT/ETL processes, and data wrangling. Dashboard analytics experience (Power BI, Looker Studio, or Tableau). Excellent English, written and …
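Cloud Composer is managed Apache Airflow, so the orchestration experience asked for above typically means authoring DAGs like the hedged sketch below. It uses Airflow 2.x syntax (the schedule argument needs 2.4+); the dag_id, schedule, and task bodies are illustrative assumptions.

```python
# Minimal Airflow 2.x DAG with two dependent Python tasks. The dag_id,
# schedule, and task logic are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extract data")    # placeholder extract step

def transform():
    print("transform data")  # placeholder transform step

with DAG(
    dag_id="etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```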
London, England, United Kingdom Hybrid / WFH Options
Focus on SAP
years hands-on with Google Cloud Platform. Strong experience with BigQuery, Cloud Storage, Pub/Sub, and Dataflow. Proficient in SQL, Python, and Apache Beam. Familiarity with DevOps and CI/CD pipelines in cloud environments. Experience with Terraform, Cloud Build, or similar tools for infrastructure automation. Understanding of … available) Responsibilities: Design, build, and maintain scalable and reliable data pipelines on Google Cloud Platform (GCP). Develop ETL processes using tools like Cloud Dataflow, Apache Beam, BigQuery, and Cloud Composer. Collaborate with data analysts, scientists, and business stakeholders to understand data requirements. Optimize performance and cost-efficiency of GCP …
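To make the Dataflow responsibility concrete, here is a hedged sketch of pointing a Beam pipeline at the Dataflow runner via pipeline options; the project, region, and bucket values are placeholders, not real settings from the posting.

```python
# Targeting Google Cloud Dataflow from a Beam pipeline via pipeline options.
# Project, region, and bucket names below are hypothetical placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",
    region="europe-west2",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | beam.Create([1, 2, 3])
        | beam.Map(lambda x: x * 10)
        | beam.Map(print)  # a real job would write to BigQuery or GCS
    )
```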
Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one. Deep experience with distributed computing using Apache Spark and knowledge of Spark runtime internals. Familiarity with CI/CD for production deployments. Working knowledge of MLOps. Design and deployment of performant … data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our …
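Where the role mentions Spark runtime internals, inspecting the physical plan is one concrete starting point. The sketch below, with a toy dataset of my own invention, uses explain() to show how Spark will actually execute an aggregation.

```python
# Inspecting Spark's execution plan for a simple aggregation. explain()
# prints the physical plan (e.g., hash aggregate plus shuffle exchange),
# which is where runtime-internals knowledge pays off.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("plan-sketch").getOrCreate()

df = spark.createDataFrame(
    [("a", 1), ("b", 2), ("a", 3)], ["key", "value"]
)

agg = df.groupBy("key").sum("value")
agg.explain()  # show the physical plan Spark will run
agg.show()     # trigger execution and display the result
```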
want you to find your spark. Because that's what drives you to be better, be more, and ultimately, be more fulfilled. Job title: Apache Hadoop Engineer. Work model: Hybrid. Location: London, UK. Mode of job: Full-time/contract. Job description: The Apache Hadoop project requires up to … designing and building platforms and supporting applications both in cloud environments and on-premises. These resources are expected to be open-source contributors to Apache projects, to have an in-depth understanding of the code behind the Apache ecosystem, and to be capable of identifying and fixing complex issues during … delivery. Job responsibilities: Hands-on experience in platform engineering alongside application engineering. Experience designing an open-source platform based on the Apache Hadoop framework. Experience integrating Infrastructure-as-Code into the platform (bespoke implementation from scratch). Experience in design and architecture work for the open …
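As a small, hedged illustration of hands-on Hadoop platform work, the snippet below lists an HDFS directory through pyarrow's Hadoop binding. It assumes a reachable NameNode plus libhdfs and the Hadoop client libraries installed locally; the host, port, and path are hypothetical.

```python
# Listing an HDFS directory via pyarrow's HadoopFileSystem binding.
# Requires libhdfs and Hadoop client libraries on the machine; the
# NameNode host, port, and path below are placeholders.
from pyarrow import fs

hdfs = fs.HadoopFileSystem(host="namenode.example.com", port=8020)

for info in hdfs.get_file_info(fs.FileSelector("/data", recursive=False)):
    print(info.path, info.size)  # path and size of each entry
```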
want you to find your spark. Because that's what drives you to be better, be more, and ultimately, be more fulfilled. Job title: Apache Hadoop Administrator. Work model: Hybrid. Location: London, UK. Mode of job: Full-time/contract. Job description: The Apache Hadoop project requires up to … designing and building platforms and supporting applications both in cloud environments and on-premises. These resources are expected to be open-source contributors to Apache projects, to have an in-depth understanding of the code behind the Apache ecosystem, and to be capable of identifying and fixing complex issues during … delivery. Job responsibilities: Hands-on experience in platform engineering alongside application engineering. Experience designing an open-source platform based on the Apache Hadoop framework. Experience integrating Infrastructure-as-Code into the platform (bespoke implementation from scratch). Experience in design and architecture work for the open …
to non-technical and technical audiences alike. Passion for collaboration, life-long learning, and driving business value through ML. Preferred: Experience working with Databricks and Apache Spark to process large-scale distributed datasets. About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide - including Comcast … data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook. Benefits: At Databricks, we strive to provide comprehensive …
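Since this role pairs ML with Databricks and Spark, a minimal Spark MLlib sketch is included below for orientation; the features and labels are illustrative placeholders, not real data.

```python
# Minimal Spark MLlib sketch: fit a logistic regression on a toy dataset.
# Feature vectors and labels are invented placeholders for illustration.
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

train = spark.createDataFrame(
    [
        (Vectors.dense([0.0, 1.1]), 0.0),
        (Vectors.dense([2.0, 1.0]), 1.0),
        (Vectors.dense([2.2, -1.5]), 1.0),
    ],
    ["features", "label"],
)

model = LogisticRegression(maxIter=10).fit(train)
print(model.coefficients)  # learned weights for the two features
```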
driving business value through ML. Company-first focus and collaborative individuals - we work better when we work together. Preferred: Experience working with Databricks and Apache Spark. Preferred: Experience working in a customer-facing role. About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide … data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our …
Develocity is a first-of-its-kind product that software teams use to accelerate and optimize Gradle, Apache Maven, Bazel, and sbt builds. It comprises several facets, including large-volume data ingestion and processing, complex data analysis and visualization, and distributed caching and execution systems. Our software is used … other major customers across all verticals. We regularly collaborate with these and other users to continuously improve our products. We have partnered with the Apache Software Foundation, the Micronaut Foundation, and other OSS projects like Spring, Quarkus, the Kotlin compiler, JUnit, and AndroidX to bring the value of Develocity also to …