Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Smart DCC
real-time data processing pipelines using platforms like Apache Kafka or cloud-native tools. Optimize batch processing workflows with tools like Apache Spark and Flink for scalable performance. Infrastructure Automation: Implement Infrastructure as Code (IaC) using tools like Terraform and Ansible. Leverage cloud-native services (AWS, Azure) to streamline …
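The real-time pipeline work this role describes (Kafka ingestion feeding Spark/Flink processing) usually boils down to keyed, windowed aggregation over a stream of events. The sketch below shows that idea engine-free in plain Python; the function and field names are illustrative, not from any Kafka or Flink API.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed-size windows and count per key.

    This mirrors the aggregation a Kafka/Flink streaming job would run;
    names and arguments are illustrative, not a real streaming API.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # assign each event to the window containing its timestamp
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# hypothetical meter readings: (timestamp_seconds, source_id)
events = [(0, "meter_a"), (30, "meter_a"), (61, "meter_b"), (90, "meter_a")]
result = tumbling_window_counts(events)
```

A real deployment would replace the in-memory list with a Kafka consumer and let the engine manage window state and late events; the shape of the computation stays the same.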
Arlington, Virginia, United States Hybrid / WFH Options
Full Visibility LLC
orchestration (e.g., Apache Airflow, Luigi, Prefect) Strong programming skills in Python, SQL, or Scala Experience with open-source data processing tools (e.g., Kafka, Spark, Flink, Hadoop) Familiarity with database technologies (PostgreSQL, MySQL, or NoSQL solutions) Ability to work in a fast-paced environment with large-scale datasets Preferred: • Experience …
a track record of managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Programming skills in Python …
a track record of managing real-time data pipelines across multiple initiatives. Expertise in developing data backbones using distributed streaming platforms (Kafka, Spark Streaming, Flink, etc.). Experience working with cloud platforms such as AWS, GCP, or Azure for real-time data ingestion and storage. Ability to optimise and …
Chantilly, Virginia, United States Hybrid / WFH Options
Aerospace Corporation
teams toward software development best practices Experience in SQL, NoSQL, Cypher, and other big data query languages Experience with big data frameworks (Hadoop, Spark, Flink, etc.) Experience with ML lifecycle management tools (MLflow, Kubeflow, etc.) Familiarity with data pipelining and streaming technologies (Apache Kafka, Apache NiFi, etc.) Demonstrated contributions …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Yelp USA
solutions. Exposure to some of the following technologies: Python, AWS Redshift, AWS Athena/Apache Presto, Big Data technologies (e.g. S3, Hadoop, Hive, Spark, Flink, Kafka, etc.), NoSQL systems like Cassandra; DBT is nice to have. What you'll get: Full responsibility for projects from day one, a collaborative …
efficient integration into Feast feature store. Requirements Good knowledge of programming languages such as Python or Java. Strong experience with streaming technologies (Spark, PySpark, Flink, KSQL, or similar) for developing data transformation pipelines. Solid understanding and practical experience with SQL and relational databases (PostgreSQL preferred). Proficiency with AWS …
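The "data transformation pipelines" feeding a feature store typically compute per-entity aggregates such as rolling statistics. Here is a minimal plain-Python sketch of that pattern, assuming a hypothetical stream of (entity_id, value) records; a production job would express the same logic in Spark/PySpark or Flink and materialise the output into Feast.

```python
from collections import defaultdict, deque

def rolling_mean_features(records, window=3):
    """Compute a per-entity rolling mean over the last `window` values.

    A toy stand-in for the transformation a Spark/Flink job would
    materialise into a feature store; all names are illustrative.
    """
    history = defaultdict(deque)
    features = []
    for entity_id, value in records:
        h = history[entity_id]
        h.append(value)
        if len(h) > window:
            h.popleft()  # keep only the most recent `window` values
        features.append((entity_id, sum(h) / len(h)))
    return features

# hypothetical event values keyed by user
rows = [("user_1", 10.0), ("user_1", 20.0), ("user_2", 5.0), ("user_1", 30.0)]
feats = rolling_mean_features(rows)
```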
to some of the following technologies (or equivalent): Apache Spark, AWS Redshift, AWS S3, Cassandra (and other NoSQL systems), AWS Athena, Apache Kafka, Apache Flink, AWS, and service-oriented architecture. What you'll get: Full responsibility for projects from day one, a collaborative team, and a dynamic work environment. …
Columbia, Maryland, United States Hybrid / WFH Options
HII Mission Technologies
the Risk Management Framework (RMF) process Proficiency in system design and meticulous documentation Experience in streaming and/or batch analytics (e.g. Kafka, Spark, Flink, Storm, MapReduce, Hadoop) Experience in distributed databases, NoSQL databases, full-text search engines (e.g. Elasticsearch, MongoDB, Solr) Experience in designing enterprise APIs. Experience in …
Columbia, Maryland, United States Hybrid / WFH Options
HII Mission Technologies
related field; or High School Diploma or equivalent and 9 years of relevant experience. Experience in streaming and/or batch analytics (e.g. Kafka, Spark, Flink, Storm, MapReduce, Hadoop) Experience in distributed databases, NoSQL databases, full-text search engines (e.g. Elasticsearch, MongoDB, Solr) Experience in designing enterprise APIs. Experience in …
Responsibilities: Design and implement highly scalable and reactive backend systems in Java. Utilize reactive programming frameworks like Kafka Streams, Akka, Eclipse Vert.x, or Apache Flink. Leverage Java concurrency features including multithreading and Executor Services to ensure optimal performance. Apply functional programming paradigms in Java to write clean, efficient …
london, south east england, united kingdom Hybrid / WFH Options
Solytics Partners
Responsibilities: Design and implement highly scalable and reactive backend systems in Java. Utilize reactive programming frameworks like Kafka Streams, Akka, Eclipse Vert.x, or Apache Flink. Leverage Java concurrency features including multithreading and Executor Services to ensure optimal performance. Apply functional programming paradigms in Java to write clean, efficient …
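The Executor Service pattern these roles mention is language-agnostic: submit independent units of work to a bounded thread pool and collect results as futures. A minimal analogue using Python's `concurrent.futures` (the counterpart of Java's `ExecutorService`) looks like this; `handle_request` is a hypothetical stand-in for real backend work.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Simulate one unit of backend work (stand-in for real processing)."""
    return payload.upper()

# Thread-pool offload, analogous to Java's ExecutorService:
# submit tasks, keep the futures, then join on their results.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, p) for p in ["a", "b", "c"]]
    results = [f.result() for f in futures]
```

The same submit/collect shape carries over to Java's `ExecutorService.submit` returning `Future<T>`; the pool size bounds concurrency in both cases.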
AND EXPERIENCE The successful Senior Data Platform Engineer will have the following skills and experience: Java Spring Boot Cloud (ideally AWS) Kafka (Kinesis or Flink will be considered) BENEFITS The successful Senior Data Platform Engineer will receive the following benefits: Salary up to £85,000, depending on experience Semi …
with Docker, Kubernetes, or other container orchestration tools. Familiarity with observability tools (e.g., New Relic) for tracking usage and service health. Experience with Kafka, Flink, or IoT streaming technologies. Background in financial services or other regulated industries. What can you expect? Competitive salary: £120,000 per annum, reflective of …
Grow with us. We are looking for a Machine Learning Engineer to work across the end-to-end ML lifecycle, alongside our existing Product & Engineering team. About Trudenty: The Trudenty Trust Network provides personalised consumer fraud risk intelligence for fraud …
Product Development Work with engineering teams to develop an AI-driven observability and automation platform, leveraging: Telemetry ingestion (Kafka, OpenTelemetry, Fluentd). Streaming analytics (Flink, Spark, CEP engines). AI-driven anomaly detection & automation (AutoGPT, LangChain, MLflow, TensorFlow). Define technical requirements and architecture priorities for engineering teams. Partner …
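A common starting point for the anomaly detection such an observability platform runs on telemetry is a rolling z-score over a sliding baseline window. The sketch below is a minimal version under assumed, illustrative parameters; production systems would run this per-metric inside a streaming engine rather than over an in-memory list.

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose z-score against the preceding `window` points
    exceeds `threshold`.

    A minimal stand-in for streaming anomaly detection on telemetry;
    the window and threshold values are illustrative, not tuned.
    """
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # skip flat baselines to avoid division by zero
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# hypothetical request-latency samples with one spike
latency_ms = [100, 102, 98, 101, 99, 100, 480, 101]
spikes = zscore_anomalies(latency_ms)
```

Note the trade-off: once a spike enters the baseline window it inflates `sigma`, masking nearby points, which is why streaming detectors often use robust statistics or exponentially weighted baselines instead.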
Java 11+ (ESSENTIAL) Kotlin (Legacy) Messaging Stacks: Kafka (ESSENTIAL) Pulsar (slowly being migrated back to Kafka) Deployment Environment: Kubernetes (EKS) Frameworks: Spring (ESSENTIAL) Apache Flink Kafka Streams Apache Storm (mostly legacy) Repositories and CI/CD: GitLab GitLab CI/CD Data Stores: MongoDB About you: Expertise in one of the major real-time data processing frameworks, such as Flink or Kafka Streams. Experience of building event-driven and/or streaming data services; IoT domain would be great but not essential. Strong programming experience in Java (11+) and a sense of ownership and pride in your …
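The frameworks this stack centres on (Kafka Streams, Flink) share one core idea: fold an unbounded event stream into keyed state, emitting an update per event. A tiny engine-free sketch of that keyed-state pattern, with illustrative names:

```python
def keyed_running_counts(event_stream):
    """Fold an event stream into per-key running counts, yielding each
    state update — the keyed-state pattern behind Kafka Streams and
    Flink jobs. A plain-Python sketch, not a real streaming API.
    """
    state = {}
    for key in event_stream:
        state[key] = state.get(key, 0) + 1
        yield key, state[key]

# hypothetical topic names as event keys
updates = list(keyed_running_counts(["orders", "orders", "payments"]))
```

In Kafka Streams the equivalent is a `groupByKey().count()` over a KStream with state kept in a fault-tolerant store; the generator above only keeps it in a dict.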
insurance industry. We are developing a modern real-time ML platform using technologies like Python, PyTorch, Ray, k8s (helm + flux), Terraform, Postgres and Flink on AWS. We are very big fans of Infrastructure-as-Code and enjoy Agile practices. As a team, we're driven by a relentless … CI/CD pipelines for efficient software delivery. Nice to have: Coding skills in Python Knowledge of other areas of our tech stack (GitLab, Flink, Helm, FluxCD, etc.) Knowledge of enterprise security best practices Proven experience in leading successful technical projects with an infrastructure/platform focus. Ability to …
for great people, not just those who simply check off all the boxes. What you'll do: Work with technologies like Apache Lucene, Apache Flink, Apache Beam, and Kubernetes to build core components of Yelp's search infrastructure. Design, build, and maintain scalable real-time data ingestion and indexing … complexity analysis. Comprehensive understanding of systems and application design, including operational and reliability trade-offs. Experience with distributed data processing frameworks such as Apache Flink or Apache Beam. Familiarity with search technologies like Apache Lucene or Elasticsearch is a plus. Experience working with containerized environments and orchestration tools like …
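The indexing work named above rests on one data structure: the inverted index, which maps each token to the documents containing it. This is the core of Lucene and Elasticsearch; the sketch below is a toy illustration in Python, not the real API, and omits everything real engines add (analysis chains, postings compression, scoring).

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each token to the set of document ids containing it.

    A toy version of the structure behind Lucene/Elasticsearch;
    tokenisation here is naive whitespace splitting.
    """
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

# hypothetical documents keyed by id
docs = {1: "cheap pizza downtown", 2: "best pizza in town"}
idx = build_inverted_index(docs)
```

A conjunctive query then becomes a set intersection over the postings, e.g. `idx["cheap"] & idx["pizza"]` returns the ids matching both terms.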
Chicago, Illinois, United States Hybrid / WFH Options
Expedia Group
excellence; apply industry standards and innovative technologies to improve efficiency, quality, and system performance Technologies we use: Java, Scala, AWS Cloud Services, GraphQL, Kafka, Flink, Spark, Cassandra, MongoDB, DataDog, Splunk Experience and qualifications: Bachelor's or Master's degree in Computer Science or a related technical field, or equivalent … 6+ years of experience in developing data and software solutions, including ETLs using cloud architectures Proficiency with technologies like Apache Spark, Apache Kafka, Apache Flink, Splunk, Datadog, NoSQL and SQL databases Experience and exposure to ML/AI technologies Proven leadership in executing vaguely defined, complex projects. Preferred: Experience …