and constructive feedback to foster accountability, growth, and collaboration within the team. Who You Are Experienced with Data Processing Frameworks: Skilled with higher-level JVM-based frameworks such as Flink, Beam, Dataflow, or Spark. Comfortable with Ambiguity: Able to work through loosely defined problems and thrive in autonomous team environments. Skilled in Cloud-based Environments: Proficient with large-scale …
data or backend engineering, while growing the ability to work effectively across both. Experience with processing large-scale transactional and financial data, using batch/streaming frameworks like Spark, Flink, or Beam (with Scala for data engineering), and building scalable backend systems in Java. You possess a foundational understanding of system design, data structures, and algorithms, coupled with a …
Java, data structures and concurrency, rather than relying on frameworks such as Spring. You have built event-driven applications using Kafka and solutions with event-streaming frameworks at scale (Flink/Kafka Streams/Spark) that go beyond basic ETL pipelines. You know how to orchestrate the deployment of applications on Kubernetes, including defining Services, Deployments, StatefulSets, etc. …
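The "beyond basic ETL" bar in listings like this usually means handling real-world streaming concerns, such as a Kafka consumer seeing the same message twice after a rebalance. As an illustration only — a hypothetical in-memory handler with invented event fields, not any employer's actual code and no real Kafka client — an idempotent consumer can be sketched as:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Event:
    """Minimal stand-in for a Kafka record; fields are invented."""
    event_id: str
    payload: tuple


class IdempotentHandler:
    """Processes each event at most once, a common requirement when
    a broker offers at-least-once delivery and may redeliver."""

    def __init__(self) -> None:
        self._seen: set[str] = set()
        self.results: list[tuple] = []

    def handle(self, event: Event) -> bool:
        # Skip duplicates by event id instead of reprocessing them.
        if event.event_id in self._seen:
            return False
        self._seen.add(event.event_id)
        self.results.append(event.payload)
        return True


handler = IdempotentHandler()
handler.handle(Event("e1", (("amount", 10),)))
handler.handle(Event("e1", (("amount", 10),)))  # redelivered duplicate
handler.handle(Event("e2", (("amount", 5),)))
print(len(handler.results))  # 2
```

In a real deployment the `_seen` set would live in a durable store (or the pipeline would use transactional/exactly-once features of Kafka or Flink), but the dedup-by-key idea is the same.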
Familiarity with geospatial data formats (e.g., GeoJSON, Shapefiles, KML) and tools (e.g., PostGIS, GDAL, GeoServer). Technical Skills: Expertise in big data frameworks and technologies (e.g., Hadoop, Spark, Kafka, Flink) for processing large datasets. Proficiency in programming languages such as Python, Java, or Scala, with a focus on big data frameworks and APIs. Experience with cloud services and technologies … related field. Experience with data visualization tools and libraries (e.g., Tableau, D3.js, Mapbox, Leaflet) for displaying geospatial insights and analytics. Familiarity with real-time stream processing frameworks (e.g., Apache Flink, Kafka Streams). Experience with geospatial data processing libraries (e.g., GDAL, Shapely, Fiona). Background in defense, national security, or environmental monitoring applications is a plus. Compensation and Benefits …
plus Experience with Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems Technologies we use (experience not required): AWS serverless architectures Kubernetes Spark Flink Databricks Parquet, Iceberg, Delta Lake, Paimon Terraform GitHub, including GitHub Actions Java PostgreSQL About Chainalysis Blockchain technology is powering a growing wave of innovation. Businesses and governments around the …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
us achieve our objectives. So each team leverages the technology that fits their needs best. You'll see us working with data processing/streaming technologies like Kinesis, Spark and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load, all deployed and hosted on AWS. Our …
Duties Design and implement scalable infrastructure for large-scale data systems (e.g., Kafka, Hadoop, Dremio) Develop, deploy, and oversee data pipelines using technologies such as Java, Python, Spark, and Flink Partner with engineering teams to support data architecture, ingestion strategies, and system scalability Ensure data quality, consistency, and accessibility for internal stakeholders Serve as a subject matter expert in … stream processing and cluster management 2+ years working with large-scale data storage solutions (e.g., S3, HDFS, Databricks, Iceberg) Proficiency with distributed data processing tools like Apache Spark or Flink Strong programming background in Java, Python, and SQL Familiarity with Python-based data science libraries and toolkits Experience deploying applications in containerized environments using Docker and Kubernetes Knowledge of …
Craft: Data, Analytics & Strategy Job Description: Activision Blizzard Media is the gateway for brands to the cross-platform gaming company in the western world, with hundreds of millions of players across over 190 countries. Our legendary portfolio includes iconic mobile …
to cross-functional teams, ensuring best practices in data architecture, security and cloud computing Proficiency in data modelling, ETL processes, data warehousing, distributed systems and metadata systems Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data Ensure all data solutions comply with industry standards and government regulations … not limited to EC2, S3, RDS, Lambda and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus In-depth knowledge and hands-on experience with Apache Flink for real-time data processing Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment Strong ability to engage with …
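Listings that ask for Apache Flink experience are largely asking for familiarity with streaming concepts such as windowed aggregation. The idea behind a tumbling (fixed-size, non-overlapping) window can be sketched in plain Python — this illustrates the concept only and is not the Flink API; the event shapes are invented:

```python
from collections import defaultdict


def tumbling_window_counts(events, window_ms):
    """Count events per (window_start, key), assigning each event to
    exactly one fixed, non-overlapping window of width window_ms.
    `events` is an iterable of (timestamp_ms, key) pairs."""
    windows = defaultdict(int)
    for ts, key in events:
        # Integer division snaps the timestamp to its window's start.
        window_start = (ts // window_ms) * window_ms
        windows[(window_start, key)] += 1
    return dict(windows)


events = [(1000, "click"), (1500, "click"), (2200, "view"), (3100, "click")]
print(tumbling_window_counts(events, 1000))
# {(1000, 'click'): 2, (2000, 'view'): 1, (3000, 'click'): 1}
```

Real Flink jobs add the hard parts this sketch omits: event-time vs. processing-time, watermarks for late data, and fault-tolerant state.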
Coordinator for Information Security and Privacy, a Service Coordinator, and an Information Director, and you will report to an Information Services Manager. Who will you be working with? SintLucas, the vocational school for meaningful creativity, is developing rapidly. Together we are working towards our 2027 ambitions, centred on leading education, personalised learning, research and innovation, and connection with the outside world. Our Information Services team is responsible for …
including Java, SQL Server/Snowflake databases, Python and C#. We are in the process of migrating more of our data to Snowflake, leveraging technologies like AWS Batch, Apache Flink and AWS Step Functions for orchestration and Docker containers. These new systems will respond in real-time to events such as position and price changes, trades and reference data … as complex stored procedures and patterns, preferably in SQL Server. Snowflake Database experience can be valuable and would help the team in the data migration process. Knowledge of Apache Flink or Kafka highly desirable, or similar technologies (e.g. Apache Spark). Skills in C# WPF or Javascript GUI development beneficial, but not essential. Excellent communication skills. Strong mathematical ability. Finance industry experience …
the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like Python, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres and Flink on AWS. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. As a team, we're driven by a relentless focus on delivering real value … Knowledge of building and maintaining CI/CD pipelines for efficient software delivery. Nice to have: Coding skills in Python Knowledge of other areas of our tech stack (GitLab, Flink, Helm, FluxCD etc.) Knowledge of enterprise security best practices Proven experience in leading successful technical projects with an infrastructure/platform focus. Ability to effectively communicate technical concepts to …
in data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical reporting systems … on-call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure. Reliability & Monitoring Design, implement, and maintain robust observability for data … batch environments. Architecture & Automation Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot & maintain Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink jobs, and migration scripts. Performance …
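The "automated testing of data schemas" responsibility mentioned above can be illustrated with a small sketch that a CI job might run against sample records before deploying a pipeline change. The field names and types here are invented for the example:

```python
def validate_schema(record: dict, schema: dict) -> list[str]:
    """Return a list of violations; an empty list means the record
    matches the expected field names and Python types exactly."""
    errors = []
    for fld, expected_type in schema.items():
        if fld not in record:
            errors.append(f"missing field: {fld}")
        elif not isinstance(record[fld], expected_type):
            errors.append(
                f"{fld}: expected {expected_type.__name__}, "
                f"got {type(record[fld]).__name__}"
            )
    # Flag fields the schema does not know about.
    for fld in record:
        if fld not in schema:
            errors.append(f"unexpected field: {fld}")
    return errors


schema = {"trade_id": str, "price": float, "qty": int}
assert validate_schema({"trade_id": "t1", "price": 9.5, "qty": 3}, schema) == []
assert validate_schema({"trade_id": "t1", "price": "9.5"}, schema) != []
```

Production pipelines typically express the same checks in a schema registry format (e.g. Avro or JSON Schema) so producers and consumers share one contract, but the fail-fast-in-CI principle is identical.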
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
a trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will … have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture, and performance. … various other departments and teams to architect scalable, fault-tolerant data solutions The Person: *Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response Reference …
We are a leading global asset management firm with over 3,000 employees across 20 offices in 15 countries; we help millions of investors around the world pursue their financial goals. We hire critical thinkers. People who thrive in a …
of the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like FastAPI, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres, Flink on AWS, React & Typescript. We operate a fully Python stack except for frontend and infrastructure code. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. … with Helm and Flux) for managing services GitLab for CI/CD and version control AWS as our infrastructure platform PostgreSQL for application data and event sourcing architecture Apache Flink for real-time service interactions and state management Responsibilities Collaborate with Engineers, Product Managers, and the Engagement team to understand problem spaces, contribute to solution design, and support the … environments (e.g. AWS). Solid RDBMS experience, preferably with PostgreSQL Experience building RESTful APIs (e.g. FastAPI) and real-time data processing pipelines Bonus points for experience with Kubernetes, Apache Flink, Flux or Infrastructure-as-Code frameworks (e.g. Terraform). Experience of maintaining your own code in a production environment. A good foundational understanding of modern software development lifecycles, including …
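The "PostgreSQL for application data and event sourcing architecture" item above refers to rebuilding current state by replaying an append-only event log rather than mutating rows in place. A minimal, hypothetical sketch of that fold (the event types, amounts, and state shape are invented for illustration):

```python
def apply(state: dict, event: dict) -> dict:
    """Pure transition function: current state + one event -> next state.
    Unknown event types are ignored so old logs keep replaying cleanly."""
    kind = event["type"]
    if kind == "deposited":
        return {**state, "balance": state["balance"] + event["amount"]}
    if kind == "withdrawn":
        return {**state, "balance": state["balance"] - event["amount"]}
    return state


def replay(events, initial=None):
    """Rebuild state by folding the full event log, which is the core
    idea behind an event-sourcing architecture."""
    state = initial if initial is not None else {"balance": 0}
    for event in events:
        state = apply(state, event)
    return state


log = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
]
print(replay(log)["balance"])  # 70
```

In practice the log would live in an append-only PostgreSQL table and long logs would be shortened with periodic snapshots, but the replay-a-fold mechanic is the essence of the pattern.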
the platform. Your Impact Build and maintain core platform capabilities that support high-throughput batch, streaming, and AI-powered workloads. Develop resilient, observable, and scalable systems using Apache Kafka, Flink, Pulsar, and cloud-native tools. Collaborate with AI/ML engineers to operationalize models and enable generative AI use cases such as prompt-based insights or automation. Deliver reliable … experience (or equivalent) with deep experience in platform/backend systems. Expert-level skills in Java, with strong proficiency in Python. Experience building distributed data pipelines using Apache Kafka, Flink, and Pulsar. Familiarity with data lakes and scalable data storage patterns. Demonstrated experience integrating with AI/ML models, including LLMs and prompt-based applications. Proven capability in fullstack …
features Use big data technologies (e.g. Spark, Hadoop, HBase, Cassandra) to build large scale machine learning pipelines Develop new systems on top of real-time streaming technologies (e.g. Kafka, Flink) 5+ years software development experience 5+ years experience in Java, Shell, Python development Excellent knowledge of Relational Databases, SQL and ORM technologies (JPA2, Hibernate) is a plus Experience in … Cassandra, HBase, Flink, Spark or Kafka is a plus. Experience in the Spring Framework is a plus Experience with test-driven development is a plus Must be located in Ireland …
principles: Experience with MACH architecture (Microservices, API-first, Cloud-native, Headless) to ensure efficient, scalable and future-proof solutions. Experience integrating a range of APIs and SaaS services. A 'Flink og Flittig' workplace At Novicell we have replaced long employee handbooks and old-fashioned rules with dialogue, responsibility and trust. We believe that social relationships create an even better working environment … ourselves on enjoying the work while we deliver the best possible results; otherwise it would make no sense to spend so many waking hours together. Novicell's motto is 'flink og flittig' (kind and diligent), which means we treat each other well while providing the best possible service to our customers. In practice, that means we offer: an informal …
Technology Product Manager, Enterprise Services - Financial Solutions Location New York Business Area Sales and Client Service Ref # Description & Requirements Bloomberg's Enterprise Technology team is responsible for ensuring clients can robustly connect, integrate and develop with Bloomberg's capabilities More ❯
Technology Product Manager, Enterprise Technology Location London Business Area Product Ref # Description & Requirements Bloomberg's Enterprise Technology team is responsible for ensuring clients can robustly connect, integrate and develop with Bloomberg's capabilities to establish & ensure the flow of More ❯