delivering under tight deadlines without compromising quality. Your Qualifications: 12+ years of software engineering experience, ideally in platform, infrastructure, or data-centric product development. Expertise in Apache Kafka, Apache Flink, and/or Apache Pulsar. Deep understanding of event-driven architectures, data lakes, and streaming pipelines. Strong experience integrating AI/ML models into production systems, including prompt engineering …
stack technologies including: Java, JavaScript, TypeScript, React, APIs, MongoDB, Elasticsearch, DMN, BPMN and Kubernetes; leverage data-streaming technologies including Kafka CDC, Kafka topics and related technologies, EMS, Apache Flink; be able to innovate and incubate new ideas, have an opportunity to work on a broad range of problems, often dealing with large data sets, including real-time processing …
with a view to becoming an expert. BS degree in Computer Science or meaningful relevant work experience. Preferred Qualifications: Experience with large-scale data platform infrastructure such as Spark, Flink, HDFS, AWS/S3, Parquet, Kubernetes is a plus …
and constructive feedback to foster accountability, growth, and collaboration within the team. Who You Are: Experienced with Data Processing Frameworks: Skilled with higher-level JVM-based frameworks such as Flink, Beam, Dataflow, or Spark. Comfortable with Ambiguity: Able to work through loosely defined problems and thrive in autonomous team environments. Skilled in Cloud-based Environments: Proficient with large-scale …
data or backend engineering, while growing the ability to work effectively across both. Experience with processing large-scale transactional and financial data, using batch/streaming frameworks like Spark, Flink, or Beam (with Scala for data engineering), and building scalable backend systems in Java. You possess a foundational understanding of system design, data structures, and algorithms, coupled with a …
Java, data structures and concurrency, rather than relying on frameworks such as Spring. You have built event-driven applications using Kafka and solutions with event-streaming frameworks at scale (Flink/Kafka Streams/Spark) that go beyond basic ETL pipelines. You know how to orchestrate the deployment of applications on Kubernetes, including defining services, deployments, stateful sets, etc. …
Familiarity with geospatial data formats (e.g., GeoJSON, Shapefiles, KML) and tools (e.g., PostGIS, GDAL, GeoServer). Technical Skills: Expertise in big data frameworks and technologies (e.g., Hadoop, Spark, Kafka, Flink) for processing large datasets. Proficiency in programming languages such as Python, Java, or Scala, with a focus on big data frameworks and APIs. Experience with cloud services and technologies … related field. Experience with data visualization tools and libraries (e.g., Tableau, D3.js, Mapbox, Leaflet) for displaying geospatial insights and analytics. Familiarity with real-time stream processing frameworks (e.g., Apache Flink, Kafka Streams). Experience with geospatial data processing libraries (e.g., GDAL, Shapely, Fiona). Background in defense, national security, or environmental monitoring applications is a plus. Compensation and Benefits …
plus Experience with Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems. Technologies we use (experience not required): AWS serverless architectures, Kubernetes, Spark, Flink, Databricks, Parquet, Iceberg, Delta Lake, Paimon, Terraform, GitHub including GitHub Actions, Java, PostgreSQL. About Chainalysis: Blockchain technology is powering a growing wave of innovation. Businesses and governments around the …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
us achieve our objectives. So each team leverages the technology that fits their needs best. You'll see us working with data processing/streaming technologies like Kinesis, Spark and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load, all deployed and hosted on AWS. Our …
Craft: Data, Analytics & Strategy Job Description: Activision Blizzard Media is the gateway for brands to the cross-platform gaming company in the western world, with hundreds of millions of players across over 190 countries. Our legendary portfolio includes iconic mobile …
to cross-functional teams, ensuring best practices in data architecture, security and cloud computing. Proficiency in data modelling, ETL processes, data warehousing, distributed systems and metadata systems. Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data. Ensure all data solutions comply with industry standards and government regulations … not limited to EC2, S3, RDS, Lambda and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus. In-depth knowledge and hands-on experience with Apache Flink for real-time data processing. Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment. Strong ability to engage with …
including Java, SQL Server/Snowflake databases, Python and C#. We are in the process of migrating more of our data to Snowflake, leveraging technologies like AWS Batch, Apache Flink and AWS Step Functions for orchestration and Docker containers. These new systems will respond in real time to events such as position and price changes, trades and reference data … as complex stored procedures and patterns, preferably in SQL Server. Snowflake database experience can be valuable and would help the team in the data migration process. Knowledge of Apache Flink, Kafka, or similar technologies (e.g. Apache Spark) is highly desirable. Skills in C# WPF or JavaScript GUI development are beneficial, but not essential. Excellent communication skills. Mathematical. Finance industry experience …
the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like Python, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres and Flink on AWS. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. As a team, we're driven by a relentless focus on delivering real value … Knowledge of building and maintaining CI/CD pipelines for efficient software delivery. Nice to have: Coding skills in Python. Knowledge of other areas of our tech stack (GitLab, Flink, Helm, FluxCD, etc.). Knowledge of enterprise security best practices. Proven experience in leading successful technical projects with an infrastructure/platform focus. Ability to effectively communicate technical concepts to …
in data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical reporting systems … on-call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure. Reliability & Monitoring: Design, implement, and maintain robust observability for data … batch environments. Architecture & Automation: Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot & maintain Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink jobs, and migration scripts. Performance …
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
a trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will … have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture, and performance. … various other departments and teams to architect scalable, fault-tolerant data solutions. The Person: *Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response. Reference …
We are a leading global asset management firm with over 3,000 employees across 20 offices in 15 countries; we help millions of investors around the world pursue their financial goals. We hire critical thinkers. People who thrive in a …
of the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like FastAPI, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres, Flink on AWS, React & TypeScript. We operate a fully Python stack except for frontend and infrastructure code. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. … with Helm and Flux) for managing services; GitLab for CI/CD and version control; AWS as our infrastructure platform; PostgreSQL for application data and event sourcing architecture; Apache Flink for real-time service interactions and state management. Responsibilities: Collaborate with Engineers, Product Managers, and the Engagement team to understand problem spaces, contribute to solution design, and support the … environments (e.g. AWS). Solid RDBMS experience, preferably with PostgreSQL. Experience building RESTful APIs (e.g. FastAPI) and real-time data processing pipelines. Bonus points for experience with Kubernetes, Apache Flink, Flux or Infrastructure-as-Code frameworks (e.g. Terraform). Experience of maintaining your own code in a production environment. A good foundational understanding of modern software development lifecycles, including …
the platform. Your Impact: Build and maintain core platform capabilities that support high-throughput batch, streaming, and AI-powered workloads. Develop resilient, observable, and scalable systems using Apache Kafka, Flink, Pulsar, and cloud-native tools. Collaborate with AI/ML engineers to operationalize models and enable generative AI use cases such as prompt-based insights or automation. Deliver reliable … experience (or equivalent) with deep experience in platform/backend systems. Expert-level skills in Java, with strong proficiency in Python. Experience building distributed data pipelines using Apache Kafka, Flink, and Pulsar. Familiarity with data lakes and scalable data storage patterns. Demonstrated experience integrating with AI/ML models, including LLMs and prompt-based applications. Proven capability in fullstack …
Compliance Engineering - Full Stack Software Engineer - Associate - Birmingham. YOUR IMPACT: Developing mission-critical, high-quality software solutions using cutting-edge technology in a dynamic environment. OUR IMPACT: We are Compliance Engineering, a global …