data or backend engineering, while growing the ability to work effectively across both. Experience with processing large-scale transactional and financial data, using batch/streaming frameworks like Spark, Flink, or Beam (with Scala for data engineering), and building scalable backend systems in Java. You possess a foundational understanding of system design, data structures, and algorithms, coupled with a …
Familiarity with geospatial data formats (e.g., GeoJSON, Shapefiles, KML) and tools (e.g., PostGIS, GDAL, GeoServer). Technical Skills: Expertise in big data frameworks and technologies (e.g., Hadoop, Spark, Kafka, Flink) for processing large datasets. Proficiency in programming languages such as Python, Java, or Scala, with a focus on big data frameworks and APIs. Experience with cloud services and technologies … related field. Experience with data visualization tools and libraries (e.g., Tableau, D3.js, Mapbox, Leaflet) for displaying geospatial insights and analytics. Familiarity with real-time stream processing frameworks (e.g., Apache Flink, Kafka Streams). Experience with geospatial data processing libraries (e.g., GDAL, Shapely, Fiona). Background in defense, national security, or environmental monitoring applications is a plus. Compensation and Benefits …
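The geospatial formats listed above are plain-text standards, so the basics are easy to demonstrate. Below is a minimal sketch, using only the Python standard library, of reading a GeoJSON Polygon feature and computing its bounding box — the kind of primitive that libraries like Shapely or PostGIS provide in full generality. The feature and its coordinates are invented for illustration.

```python
import json

# A minimal GeoJSON Feature: a polygon roughly outlining a survey area.
# Coordinates are illustrative, not real data.
feature_json = """
{
  "type": "Feature",
  "properties": {"name": "survey-area"},
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[-0.5, 51.3], [0.2, 51.3], [0.2, 51.7], [-0.5, 51.7], [-0.5, 51.3]]]
  }
}
"""

def bounding_box(feature):
    """Return (min_lon, min_lat, max_lon, max_lat) for a Polygon feature."""
    ring = feature["geometry"]["coordinates"][0]  # exterior ring only
    lons = [lon for lon, lat in ring]
    lats = [lat for lon, lat in ring]
    return (min(lons), min(lats), max(lons), max(lats))

feature = json.loads(feature_json)
print(bounding_box(feature))  # (-0.5, 51.3, 0.2, 51.7)
```

Real tooling handles MultiPolygons, holes, and coordinate reference systems; this sketch shows only the exterior-ring case.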
plus Experience with Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems Technologies we use (experience not required): AWS serverless architectures Kubernetes Spark Flink Databricks Parquet, Iceberg, Delta Lake, Paimon Terraform GitHub, including GitHub Actions Java PostgreSQL About Chainalysis Blockchain technology is powering a growing wave of innovation. Businesses and governments around the …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
us achieve our objectives. So each team leverages the technology that fits their needs best. You'll see us working with data processing/streaming technologies like Kinesis, Spark and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load, all deployed and hosted on AWS. Our …
Grow with us. We are looking for a Machine Learning Engineer to work along the end-to-end ML lifecycle, alongside our existing Product & Engineering team. About Trudenty: The Trudenty Trust Network provides personalised consumer fraud risk intelligence for fraud …
Craft: Data, Analytics & Strategy Job Description: Activision Blizzard Media is the gateway for brands to the cross-platform gaming company in the western world, with hundreds of millions of players across over 190 countries. Our legendary portfolio includes iconic mobile …
including Java, SQL Server/Snowflake databases, Python and C#. We are in the process of migrating more of our data to Snowflake, leveraging technologies like AWS Batch, Apache Flink and AWS Step Functions for orchestration and Docker containers. These new systems will respond in real-time to events such as position and price changes, trades and reference data … as complex stored procedures and patterns, preferably in SQL Server. Snowflake Database experience can be valuable and would help the team in the data migration process. Knowledge of Apache Flink or Kafka highly desirable, or similar technologies (e.g. Apache Spark). Skills in C# WPF or JavaScript GUI development beneficial, but not essential. Excellent communication skills. Strong mathematical skills. Finance industry experience …
the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like Python, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres and Flink on AWS. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. As a team, we're driven by a relentless focus on delivering real value … Knowledge of building and maintaining CI/CD pipelines for efficient software delivery. Nice to have: Coding skills in Python Knowledge of other areas of our tech stack (GitLab, Flink, Helm, FluxCD, etc.) Knowledge of enterprise security best practices Proven experience in leading successful technical projects with an infrastructure/platform focus. Ability to effectively communicate technical concepts to …
in data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical reporting systems … on-call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure. Reliability & Monitoring Design, implement, and maintain robust observability for data … batch environments. Architecture & Automation Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot & maintain Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink jobs, and migration scripts. Performance …
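The core aggregation pattern behind such Flink/Kafka pipelines — grouping keyed events into fixed time windows — can be sketched in plain Python. This is a conceptual illustration only: no Flink API is used, and the event shape (timestamp_ms, key, value) is an invented example, not any team's actual schema.

```python
from collections import defaultdict

# Hypothetical keyed events: (event-time in ms, key, value).
events = [
    (1_000, "orders", 10),
    (4_500, "orders", 5),
    (61_000, "orders", 7),   # falls into the next one-minute window
    (62_000, "refunds", 2),
]

def tumbling_window_sums(events, window_ms=60_000):
    """Sum values per key within fixed, non-overlapping (tumbling) windows,
    keyed by each window's start timestamp."""
    sums = defaultdict(int)
    for ts, key, value in events:
        window_start = (ts // window_ms) * window_ms
        sums[(window_start, key)] += value
    return dict(sums)

print(tumbling_window_sums(events))
# {(0, 'orders'): 15, (60000, 'orders'): 7, (60000, 'refunds'): 2}
```

A real Flink job adds what this sketch omits: watermarks for late events, checkpointed state for fault tolerance, and parallel execution across a cluster — exactly the failure surfaces the role above is asked to monitor.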
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
a trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture, and performance. … various other departments and teams to architect scalable, fault-tolerant data solutions The Person: *Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response Reference …
City of London, London, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture … various other departments and teams to architect scalable, fault-tolerant data solutions The Person: *Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response Reference …
of the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like FastAPI, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres, Flink on AWS, React & TypeScript. We operate a fully Python stack except for frontend and infrastructure code. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. … with Helm and Flux) for managing services GitLab for CI/CD and version control AWS as our infrastructure platform PostgreSQL for application data and event sourcing architecture Apache Flink for real-time service interactions and state management Responsibilities Collaborate with Engineers, Product Managers, and the Engagement team to understand problem spaces, contribute to solution design, and support the … environments (e.g. AWS). Solid RDBMS experience, preferably with PostgreSQL Experience building RESTful APIs (e.g. FastAPI) and real-time data processing pipelines Bonus points for experience with Kubernetes, Apache Flink, Flux or Infrastructure-as-Code frameworks (e.g. Terraform). Experience of maintaining your own code in a production environment. A good foundational understanding of modern software development lifecycles, including …
the platform. Your Impact Build and maintain core platform capabilities that support high-throughput batch, streaming, and AI-powered workloads. Develop resilient, observable, and scalable systems using Apache Kafka, Flink, Pulsar, and cloud-native tools. Collaborate with AI/ML engineers to operationalize models and enable generative AI use cases such as prompt-based insights or automation. Deliver reliable … experience (or equivalent) with deep experience in platform/backend systems. Expert-level skills in Java, with strong proficiency in Python. Experience building distributed data pipelines using Apache Kafka, Flink, and Pulsar. Familiarity with data lakes and scalable data storage patterns. Demonstrated experience integrating with AI/ML models, including LLMs and prompt-based applications. Proven capability in full-stack …
Compliance Engineering - Full Stack Software Engineer - Associate - Birmingham YOUR IMPACT Developing mission-critical, high-quality software solutions using cutting-edge technology in a dynamic environment. OUR IMPACT We are Compliance Engineering, a global …
principles: Experience with MACH architecture (Microservices, API-first, Cloud-native, Headless) to ensure efficient, scalable, and future-proof solutions. Experience integrating various APIs and SaaS services. A 'flink og flittig' workplace At Novicell we have replaced long employee handbooks and old-fashioned rules with dialogue, responsibility, and trust. We believe that social relationships create an even better working environment … on enjoying ourselves while delivering the best possible results; otherwise it would make no sense to spend so many waking hours together. Novicell's motto is 'flink og flittig' ('kind and diligent'), which means we treat each other well while providing the best possible service to our customers. Concretely, this means we offer: An informal …