an industry standard RDBMS, NoSQL. Experience in designing and building highly scalable, fault-tolerant distributed systems and services for large web sites. Experience with big data technologies like Spark, Flink, and Kafka. 5+ years of experience in building large-scale, distributed web platforms/APIs, with 2+ years as a lead developer responsible for specific functional areas. Experience in …
/ML platforms or other advanced analytics infrastructure. Familiarity with infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with modern data engineering technologies (e.g., Kafka, Spark, Flink). Benefits: Why join YouLend? Award-Winning Workplace: YouLend has been recognised as one of the Best Places to Work 2024 by the Sunday Times for being a …
Experience with MACH architecture (Microservices, API-first, Cloud-native, Headless) to ensure efficient, scalable and future-proof solutions. Experience integrating a variety of APIs and SaaS services. A 'Flink og Flittig' (kind and diligent) workplace: at Novicell we have replaced long employee handbooks and old-fashioned rules with dialogue, responsibility and trust. We believe that social relationships create an even better working environment … ourselves on having a good time while we deliver the best possible results; otherwise it would not make sense to spend so many waking hours together. Novicell's motto is 'flink og flittig', which means we treat each other well while providing the best possible service to our customers. In concrete terms, this means we offer: an informal …
the ground up. Familiarity with AWS services like S3, EMR, and technologies like Terraform and Docker. Know the ins and outs of current big data frameworks like Spark or Flink, but this is not an absolute requirement - you’re a quick learner! This role is open to individuals based in or willing to relocate to London.
Head of Data & Analytics Architecture and AI. Location: Chiswick Park. Time type: Full time. Posted: 30+ days ago. Job requisition ID: JR19765. Want to help us bring …
Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools. Experience in data platforms on-premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery). Expertise in building data …
diverse sources, transform it into usable formats, and load it into data warehouses, data lakes or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services …
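A minimal sketch of the kind of pipeline described above, using PySpark Structured Streaming to read events from Kafka, apply a light transformation, and land them in a data lake. The broker, topic, schema, and lake paths are illustrative placeholders rather than details from the listing.

```python
# Hypothetical streaming ETL sketch: Kafka -> transform -> data lake (Parquet).
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Illustrative event schema, not taken from the listing.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "orders")                      # hypothetical topic
       .load())

orders = (raw.select(from_json(col("value").cast("string"), schema).alias("o"))
          .select("o.*")
          .filter(col("amount") > 0))  # example transformation: drop invalid rows

query = (orders.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/orders/")  # hypothetical lake location
         .option("checkpointLocation", "s3a://example-lake/_checkpoints/orders/")
         .start())
```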
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
WorksHub
us achieve our objectives. So each team leverages the technology that fits its needs best. You'll see us working with data processing/streaming technologies like Kinesis, Spark and Flink; application technologies like PostgreSQL, Redis & DynamoDB; and breaking things using in-house chaos principles and tools such as Gatling to drive load, all deployed and hosted on AWS. Our …
London, South East England, United Kingdom Hybrid / WFH Options
BGL Group
non-technical stakeholders
• A background in software engineering, MLOps, or data engineering with production ML experience
Nice to have:
• Familiarity with streaming or event-driven ML architectures (e.g. Kafka, Flink, Spark Structured Streaming)
• Experience working in regulated domains such as insurance, finance, or healthcare
• Exposure to large language models (LLMs), vector databases, or RAG pipelines
• Experience building or managing …
London, South East England, United Kingdom Hybrid / WFH Options
Citigroup Inc
to support real-time decision-making in the FX market. Building such systems is highly challenging and provides opportunities to work with modern technologies like NoSQL databases, Kafka, Apache Flink, and more. It offers significant opportunities for growth, leadership, and innovation, as well as direct interaction with clients and business teams to deliver impactful solutions in the FX market. …
Experience working in environments with AI/ML components or interest in learning data workflows for ML applications. Bonus if you have exposure to Kafka, Spark, or Flink. Experience with data compliance regulations (GDPR). What you can expect from us: Salary 65-75k, opportunity for annual bonuses, medical insurance, cycle to work scheme, work …
with demonstrated ability to solve complex distributed systems problems independently. Experience building infrastructure for large-scale data processing pipelines (both batch and streaming) using tools like Spark, Kafka, Apache Flink, Apache Beam, and with proprietary solutions like Nebius. Experience designing and implementing large-scale data storage systems (feature stores, time-series DBs) for ML use cases, with strong familiarity with …
in data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical reporting systems … on-call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure. Reliability & Monitoring: Design, implement, and maintain robust observability for data … batch environments. Architecture & Automation: Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot & maintain Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink jobs, and migration scripts. Performance …
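A hypothetical sketch of the automated schema testing mentioned for hardening CI/CD around data jobs: validate fixture records against an expected schema before a versioned Flink job is promoted. The field names, schema, and sample events are illustrative assumptions, not taken from the listing.

```python
# Illustrative CI check: detect schema drift in sample pipeline records.
from jsonschema import validate, ValidationError  # pip install jsonschema

# Hypothetical contract for one event type handled by the pipeline.
EVENT_SCHEMA = {
    "type": "object",
    "required": ["event_id", "ts", "amount"],
    "properties": {
        "event_id": {"type": "string"},
        "ts": {"type": "string"},
        "amount": {"type": "number", "minimum": 0},
    },
    "additionalProperties": False,
}

def test_sample_events_match_schema():
    # In CI this would load fixture records checked in alongside the Flink job definition.
    sample_events = [
        {"event_id": "e-1", "ts": "2024-01-01T00:00:00Z", "amount": 12.5},
        {"event_id": "e-2", "ts": "2024-01-01T00:01:00Z", "amount": 0},
    ]
    for event in sample_events:
        try:
            validate(instance=event, schema=EVENT_SCHEMA)
        except ValidationError as err:
            raise AssertionError(f"schema drift detected: {err.message}")
```

Run under pytest (or any test runner) as one gate in the deployment pipeline, alongside tests for migration scripts and job configuration.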
Vortexa is a fast-growing international technology business founded to solve the immense information gap that exists in the energy industry. By using massive amounts of new satellite data and pioneering work in artificial intelligence, Vortexa creates an unprecedented view …
Grow with us. We are looking for a Machine Learning Engineer to work across the end-to-end ML lifecycle, alongside our existing Product & Engineering team. About Trudenty: The Trudenty Trust Network provides personalised consumer fraud risk intelligence for fraud …
with big data technologies (e.g., Spark, Hadoop). Background in time-series analysis and forecasting. Experience with data governance and security best practices. Real-time data streaming is a plus (Kafka, Beam, Flink). Experience with Kubernetes is a plus. Energy/maritime domain knowledge is a plus. What We Offer: Competitive salary commensurate with experience and comprehensive benefits package (medical, dental, vision). Significant …
to cross-functional teams, ensuring best practices in data architecture, security and cloud computing. Proficiency in data modelling, ETL processes, data warehousing, distributed systems and metadata systems. Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data. Ensure all data solutions comply with industry standards and government regulations … not limited to EC2, S3, RDS, Lambda and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus. In-depth knowledge and hands-on experience with Apache Flink for real-time data processing. Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment. Strong ability to engage with …
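A minimal PyFlink sketch of the kind of real-time processing described above: filter and lightly transform a stream of events. A bounded in-memory collection stands in for a real high-throughput source such as a Kafka connector; the event shape, names, and thresholds are illustrative assumptions.

```python
# Hypothetical real-time processing sketch with the PyFlink DataStream API.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)  # keep the sketch small and deterministic; real jobs scale this out

# (event_id, amount) tuples standing in for messages that would arrive from Kafka.
events = env.from_collection([("e-1", 120.0), ("e-2", -5.0), ("e-3", 42.0)])

cleaned = (events
           .filter(lambda e: e[1] > 0)               # drop malformed/negative amounts
           .map(lambda e: (e[0], round(e[1], 2))))   # light normalisation step

cleaned.print()
env.execute("realtime-processing-sketch")
```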