continuous improvement across the team 🧰 What You’ll Need Strong experience leading data engineering teams in high-growth environments Deep expertise with real-time data processing tools (e.g. Kafka, Flink, Spark Streaming) Solid hands-on knowledge of cloud platforms (AWS, GCP or Azure) Strong proficiency in languages like Python, Java or Scala Familiarity with orchestration tools such as Airflow …
South East London, England, United Kingdom Hybrid / WFH Options
Atarus
databases, including PostgreSQL, ClickHouse, Cassandra, and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency in containerization technologies (Docker, Kubernetes) and …
and non-relational databases. Qualifications/Nice to have Experience with a messaging middleware platform like Solace, Kafka or RabbitMQ. Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark) …
programming skills (Python, Java, C++) and experience with DevOps practices (CI/CD). Familiarity with containerization (Docker, Kubernetes), RESTful APIs, microservices architecture, and big data technologies (Hadoop, Spark, Flink). Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB), message queueing systems (Kafka, RabbitMQ), and version control systems (Git). Preferred Skills: Experience with natural language processing libraries such as …
and experience of SDLC methodologies, e.g. Agile, Waterfall Skilled in business requirements analysis with ability to translate business information into technical specifications Skills Required (desirable): Knowledge of streaming services – Flink, Kafka Knowledge of Dimensional Modelling Knowledge of NoSQL databases (DynamoDB, Cassandra) Knowledge of node-based architecture, graph databases and languages – Neptune, Neo4j, Gremlin, Cypher Experience: 5+ years …
Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools Experience in data platforms on-premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery) Expertise in building data …
To be successful in this role, you should meet the following requirements: Sheffield office attendance is mandatory, 3 days per week. Experience with working with Cloud Computing (Google Cloud Platform preferable). Strong SQL skills and proficiency in at least More ❯
Data and data-related Cloud services (AWS/Azure/GCP) Good hands-on experience in at least one distributed data processing framework, e.g. Spark (Core, Streaming, SQL), Storm, Flink, etc. Expertise with one or more of Java (preferable), Scala, and Python programming languages Good data modeling experience to address scale and read/write performance Hands-on working …
London, England, United Kingdom Hybrid / WFH Options
Cleo
code quality). Experience with containerisation and orchestration (Docker and Kubernetes). Infrastructure as Code (Terraform or similar). Experience with at least one distributed data-processing framework (Spark, Flink, Kafka, etc.). Familiarity with different storage solutions (e.g., OLTP, OLAP, NoSQL, object storage) and their trade-offs. Product mindset and ability to link technical decisions to business impact. …
Leeds, England, United Kingdom Hybrid / WFH Options
Axiom Software Solutions Limited
resiliency, and scalability, including understanding and explaining features like KRaft. Integrating Kafka with other data processing tools and platforms such as Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink, and Beam. Collaborating with cross-functional teams to understand data requirements and design solutions that meet business needs. Implementing security measures to protect Kafka clusters and data streams. Monitoring … as Spark Required Skills & Experience Extensive experience with Apache Kafka and real-time architecture including event-driven frameworks. Strong knowledge of Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink, and Beam. Experience with cloud platforms such as GCP Pub/Sub. Excellent problem-solving skills. Knowledge & Experience/Qualifications Knowledge of Kafka data pipelines and messaging solutions to …
generation (SSG) in Next.js Experience with testing frameworks like Jest, Cypress, or React Testing Library. Experience with authentication strategies using OAuth, JWT, or Cognito Familiarity with Apache Spark/Flink for real-time data processing is an advantage. Hands-on experience with CI/CD tools Commercial awareness and knowledge of the public sector. Excellent communicator, able to interact with …
SQL, JavaScript) Experience of Azure, AWS or GCP cloud platforms and Data Lake/Warehousing Platforms such as Snowflake, Iceberg etc. Experience of various ETL and Streaming Tools (Fivetran, Flink, Spark) Experience of a variety of data mining techniques (APIs, GraphQL, website scraping) Ability to translate Data into meaningful insights Excellent verbal and written communication skills Understanding of modern …
Job Description Out of the successful launch of Chase in 2021, we’re a new team with a new mission. We’re creating products that solve real-world problems and put customers at the center—all in an environment that More ❯
London, England, United Kingdom Hybrid / WFH Options
Wikimedia Foundation
systems. Experience with privacy-sensitive data and security best practices. Proven success in managing ambiguous projects and collaborating across teams. Knowledge of scalable data processing frameworks (e.g., Spark, Kafka, Flink). Desired Qualities Commitment to Wikimedia's mission and values. Strong problem-solving and leadership skills. Excellent communication skills. Decision-making ability in complex, uncertain environments. Curiosity, continuous learning …
VCS (git), and Linux Proven experience in cross-functional teams and able to communicate effectively about technical and operational challenges. Preferred Qualifications: Proficiency with scalable data frameworks (Spark, Kafka, Flink) Proven Expertise with Infrastructure as Code and Cloud best practices Proficiency with monitoring and logging tools (e.g., Prometheus, Grafana) Working at Lila Sciences, you would have access to advanced …
innovation and keeping teams informed of industry advancements Skilled in business requirements analysis with ability to translate business information into technical specifications Skills Required (desirable): Knowledge of streaming services - Flink, Kafka Familiarity with Dimensional Modelling concepts Knowledge of node-based architecture, graph databases and languages - Neptune, Neo4j, Gremlin, Cypher Experience: 8+ years of experience with Databricks product suite, Spark …
challenges of dealing with large data sets, both structured and unstructured Used a range of open source frameworks and development tools, e.g. NumPy/SciPy/Pandas, Spark, Kafka, Flink Working knowledge of one or more relevant database technologies, e.g. Oracle, Postgres, MongoDB, ArcticDB Proficient in Linux Advantageous: An excellent understanding of financial markets and instruments An understanding of …
apply basic security principles at infrastructure and application level. You have knowledge of cloud-based ML solutions from GCP or AWS Experience with streaming data processing frameworks such as Flink, Beam, Spark, Kafka Streams Experience with Ansible, Terraform, GitHub Actions, Infrastructure as Code, AWS or other cloud ecosystems Knowledge/interest in payment platforms, foreign exchange & complex systems architecture …
approach to driving innovation and keeping teams informed of industry advancements Skilled in business requirements analysis with ability to translate business information into technical specifications Knowledge of streaming services – Flink, Kafka Familiarity with Dimensional Modelling concepts Knowledge of node-based architecture, graph databases and languages – Neptune, Neo4j, Gremlin, Cypher Experience: 8+ years of experience with Databricks product suite, Spark …