diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services More ❯
CloudFormation. Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming frameworks (e.g. Flink, Hadoop, Beam) Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality and/or data lineage frameworks like Great Expectations More ❯
are recognised by industry leaders like Gartner's Magic Quadrant, Forrester Wave and Frost Radar. Our tech stack: Superset and similar data visualisation tools. ETL tools: Airflow, DBT, Airbyte, Flink, etc. Data warehousing and storage solutions: ClickHouse, Trino, S3. AWS Cloud, Kubernetes, Helm. Relevant programming languages for data engineering tasks: SQL, Python, Java, etc. What you will be doing More ❯
Advanced skills in Python or another major language; writing clean, testable, production-grade ETL code at scale. Modern Data Pipelines: Experience with batch and streaming frameworks (e.g., Apache Spark, Flink, Kafka Streams, Beam), including orchestration via Airflow, Prefect or Dagster. Data Modeling & Schema Management: Demonstrated expertise in designing, evolving, and documenting schemas (OLAP/OLTP, dimensional, star/snowflake More ❯
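Several of these listings pair batch/streaming frameworks with an orchestrator such as Airflow. As a hedged illustration of what that orchestration layer looks like, below is a minimal Airflow 2.x DAG; the DAG id, schedule, and task bodies are hypothetical placeholders, not taken from any listing.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task bodies -- stand-ins for real extract/transform/load logic.
def extract():
    print("pull raw records from source systems")

def transform():
    print("clean and reshape into the warehouse schema")

def load():
    print("write the transformed data to the warehouse")

with DAG(
    dag_id="nightly_etl",              # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependencies are declared with >>, giving a linear extract -> transform -> load chain.
    t_extract >> t_transform >> t_load
```

Prefect and Dagster, also named above, express the same dependency graph with their own decorator-based APIs.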
Experience working in environments with AI/ML components or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: Salary 65-75k Opportunity for annual bonuses Medical Insurance Cycle to work scheme Work More ❯
in a high-ownership, fast-paced environment. Nice to have: Experience working in the Payments, Fintech, or Financial Crime domain (e.g., fraud detection, AML, KYC). Experience with Apache Flink or other streaming data frameworks is highly desirable. Experience working in teams, building and maintaining data science and AI solutions. Experience integrating with third-party APIs, especially in regulated More ❯
Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools Experience in data platforms on premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery) Expertise in building data More ❯
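For a concrete sense of the Spark-based big data development these roles describe, here is a minimal PySpark batch aggregation sketch; the S3 paths and the `ts` event-timestamp column are assumptions made for illustration, not details from any listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

# Hypothetical input location: JSON events that include a "ts" timestamp field.
events = spark.read.json("s3://example-bucket/events/")

daily = (
    events
    .withColumn("day", F.to_date("ts"))   # truncate timestamps to calendar days
    .groupBy("day")
    .count()
)

# Write the aggregate back out as Parquet, ready for a warehouse load.
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_counts/")
```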
with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads. Operations & Tooling Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and security … pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams. Additional … following prior to applying to GSR? Experience level, applicable to this role? How many years have you designed, built, and operated stateful, exactly-once streaming pipelines in Apache Flink (or an equivalent framework such as Spark Structured Streaming or Kafka Streams)? Which statement best describes your hands-on responsibility for architecting and tuning cloud-native data lake More ❯
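The application question above asks specifically about stateful, exactly-once streaming pipelines in Apache Flink. A minimal PyFlink sketch of that pattern follows: keyed state holds a per-key counter, and checkpointing in exactly-once mode makes that state recover consistently after failure. The input tuples and job name are invented for the example.

```python
from pyflink.common.typeinfo import Types
from pyflink.datastream import CheckpointingMode, StreamExecutionEnvironment
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor

class RunningCount(KeyedProcessFunction):
    """Keeps one counter per key in Flink-managed keyed state."""

    def open(self, runtime_context: RuntimeContext):
        self.count = runtime_context.get_state(
            ValueStateDescriptor("count", Types.LONG())
        )

    def process_element(self, value, ctx):
        current = (self.count.value() or 0) + 1
        self.count.update(current)   # state is snapshotted with each checkpoint
        yield value[0], current

env = StreamExecutionEnvironment.get_execution_environment()
# Checkpoint every 10s in exactly-once mode; note this guarantees exactly-once
# *state* semantics -- end-to-end exactly-once also needs a transactional sink.
env.enable_checkpointing(10_000, CheckpointingMode.EXACTLY_ONCE)

events = env.from_collection(
    [("orders", 1), ("trades", 1), ("orders", 1)],   # hypothetical event stream
    type_info=Types.TUPLE([Types.STRING(), Types.INT()]),
)

(events
 .key_by(lambda e: e[0])
 .process(RunningCount(),
          output_type=Types.TUPLE([Types.STRING(), Types.LONG()]))
 .print())

env.execute("stateful-exactly-once-sketch")
```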
join a fast-growing team that plays an integral part of the revenue producing arm of a company, then our team is for you. Technologies include Scala, Python, Apache Flink, Spark, Databricks, and AWS (ECS, Lambda, DynamoDB, WAF, among others). Experience in these areas is preferred but not required. Qualifications: You collaborate with team members and project management More ❯
with big data technologies (e.g., Spark, Hadoop). Background in time-series analysis and forecasting. Experience with data governance and security best practices. Real-time data streaming is a plus (Kafka, Beam, Flink). Experience with Kubernetes is a plus. Energy/maritime domain knowledge is a plus. What We Offer Competitive salary commensurate with experience and comprehensive benefits package (medical, dental, vision) Significant More ❯
and constructive feedback to foster accountability, growth, and collaboration within the team. Who You Are Experienced with Data Processing Frameworks: Skilled with higher-level JVM-based frameworks such as Flink, Beam, Dataflow, or Spark. Comfortable with Ambiguity: Able to work through loosely defined problems and thrive in autonomous team environments. Skilled in Cloud-based Environments: Proficient with large-scale More ❯
data or backend engineering, while growing the ability to work effectively across both. Experience with processing large-scale transactional and financial data, using batch/streaming frameworks like Spark, Flink, or Beam (with Scala for data engineering), and building scalable backend systems in Java. You possess a foundational understanding of system design, data structures, and algorithms, coupled with a More ❯
Familiarity with geospatial data formats (e.g., GeoJSON, Shapefiles, KML) and tools (e.g., PostGIS, GDAL, GeoServer). Technical Skills: Expertise in big data frameworks and technologies (e.g., Hadoop, Spark, Kafka, Flink) for processing large datasets. Proficiency in programming languages such as Python, Java, or Scala, with a focus on big data frameworks and APIs. Experience with cloud services and technologies … related field. Experience with data visualization tools and libraries (e.g., Tableau, D3.js, Mapbox, Leaflet) for displaying geospatial insights and analytics. Familiarity with real-time stream processing frameworks (e.g., Apache Flink, Kafka Streams). Experience with geospatial data processing libraries (e.g., GDAL, Shapely, Fiona). Background in defense, national security, or environmental monitoring applications is a plus. Compensation and Benefits More ❯
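As a sketch of the geospatial tooling this listing names (GDAL/OGR via Fiona for file access, Shapely for geometry), the snippet below performs a simple point-in-polygon lookup. The `zones.geojson` file and the lon/lat (WGS84) coordinate system are assumptions for illustration.

```python
import fiona                           # reads Shapefiles/GeoJSON via GDAL/OGR
from shapely.geometry import Point, shape

# Hypothetical layer: polygon zones of interest (Shapefile or GeoJSON both work).
with fiona.open("zones.geojson") as src:
    zones = [(feat["properties"], shape(feat["geometry"])) for feat in src]

# Point-in-polygon lookup, assuming the layer stores lon/lat coordinates.
pt = Point(-0.1276, 51.5072)           # London, as (lon, lat)
matches = [props for props, geom in zones if geom.contains(pt)]
print(matches)
```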
plus Experience with Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems Technologies we use (experience not required): AWS serverless architectures Kubernetes Spark Flink Databricks Parquet, Iceberg, Delta Lake, Paimon Terraform GitHub including GitHub Actions Java PostgreSQL About Chainalysis Blockchain technology is powering a growing wave of innovation. Businesses and governments around the More ❯
Craft: Data, Analytics & Strategy Job Description: Activision Blizzard Media is the gateway for brands to the cross-platform gaming company in the western world, with hundreds of millions of players across over 190 countries. Our legendary portfolio includes iconic mobile More ❯
including Java, SQL Server/Snowflake databases, Python and C#. We are in the process of migrating more of our data to Snowflake, leveraging technologies like AWS Batch, Apache Flink and AWS Step Functions for orchestration and Docker containers. These new systems will respond in real-time to events such as position and price changes, trades and reference data … as complex stored procedures and patterns, preferably in SQL Server. Snowflake Database experience can be valuable and would help the team in the data migration process. Knowledge of Apache Flink or Kafka highly desirable or similar technologies (e.g. Apache Spark) Skills in C# WPF or Javascript GUI development beneficial, but not essential. Excellent communication skills. Mathematical. Finance industry experience More ❯
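Listings like this one describe systems that react in real time to price and position events. One hedged sketch of that pattern, using the kafka-python client purely as an illustration (the topic name, broker address, and revaluation logic are hypothetical), is a consumer with manual offset commits so an event is only marked done after it has been handled:

```python
import json

from kafka import KafkaConsumer  # kafka-python client

# Hypothetical topic and broker address.
consumer = KafkaConsumer(
    "price-updates",
    bootstrap_servers="localhost:9092",
    group_id="position-revaluer",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    enable_auto_commit=False,        # commit offsets only after processing
)

for msg in consumer:
    event = msg.value
    # Placeholder logic: re-price the affected position on each event.
    print(f"revaluing {event.get('symbol')} at {event.get('price')}")
    consumer.commit()                # at-least-once delivery on restart
```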
the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like Python, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres and Flink on AWS. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. As a team, we're driven by a relentless focus on delivering real value … Knowledge of building and maintaining CI/CD pipelines for efficient software delivery. Nice to have: Coding skills in Python Knowledge of other areas of our tech stack (GitLab, Flink, Helm, FluxCD etc.) Knowledge of enterprise security best practices Proven experience in leading successful technical projects with an infrastructure/platform focus. Ability to effectively communicate technical concepts to More ❯
in data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical reporting systems … on-call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure. Reliability & Monitoring Design, implement, and maintain robust observability for data … batch environments. Architecture & Automation Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot & Maintain Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink jobs, and migration scripts. Performance More ❯
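The CI/CD hardening described here centres on automated schema tests. A minimal pytest-style sketch follows, assuming a pandas-readable extract produced by the pipeline under test; the file name, columns, and dtypes are hypothetical stand-ins for a real data contract.

```python
import pandas as pd

# Hypothetical contract for a reporting extract: column names and dtypes.
EXPECTED_SCHEMA = {
    "account_id": "int64",
    "amount": "float64",
    "ts": "datetime64[ns]",
}

def test_report_extract_schema():
    df = pd.read_parquet("report_extract.parquet")  # hypothetical CI fixture
    # Fail fast if a column is missing or its dtype has drifted.
    for column, dtype in EXPECTED_SCHEMA.items():
        assert column in df.columns, f"missing column: {column}"
        assert str(df[column].dtype) == dtype, (
            f"{column}: expected {dtype}, got {df[column].dtype}"
        )
    # Basic quality gate: the key column must be non-null.
    assert df["account_id"].notna().all()
```

Wired into the pipeline's CI, a test like this blocks a deploy whenever an upstream change breaks the schema the reporting jobs depend on.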
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
a trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture, and performance. … various other departments and teams to architect scalable, fault-tolerant data solutions The Person: *Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on AWS experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response Reference More ❯
City of London, London, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate … will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture … various other departments and teams to architect scalable, fault-tolerant data solutions The Person: *Experience in a data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on AWS experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response Reference More ❯
of the biggest names in the insurance industry. We are developing a modern real-time ML platform using technologies like FastAPI, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres, Flink on AWS, React & Typescript. We operate a fully Python stack except for frontend and infrastructure code. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. … with Helm and Flux) for managing services GitLab for CI/CD and version control AWS as our infrastructure platform PostgreSQL for application data and event sourcing architecture Apache Flink for real-time service interactions and state management Responsibilities Collaborate with Engineers, Product Managers, and the Engagement team to understand problem spaces, contribute to solution design, and support the … environments (e.g. AWS). Solid RDBMS experience, preferably with PostgreSQL Experience building RESTful APIs (e.g. FastAPI) and real-time data processing pipelines Bonus points for experience with Kubernetes, Apache Flink, Flux or Infrastructure-as-Code frameworks (e.g. Terraform). Experience of maintaining your own code in a production environment. A good foundational understanding of modern software development lifecycles, including More ❯
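This stack centres on Python REST services built with FastAPI in front of the ML platform. A minimal hedged sketch of such a service follows; the `/score` endpoint and placeholder scoring logic are invented for illustration, not taken from the listing.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    features: list[float]   # request body is validated by pydantic

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Placeholder scoring -- a real service would call the PyTorch/Ray model here.
    return {"score": sum(req.features)}
```

Served locally with `uvicorn app:app`, the endpoint accepts a JSON body like `{"features": [0.2, 1.4]}` and returns a score, which is the basic shape of a real-time model-serving API.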