Comfortable using Git; an awareness of CI/CD practices and tools such as GitHub Actions or Azure DevOps. Nice to have: experience of working with Apache Spark/Flink/Kafka; familiarity with object storage, e.g. AWS S3; knowledge of containerised development workflows using, e.g., VS Code; basic understanding of cloud platforms like AWS or GCP; experience contributing to …
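Several of the listings here mention object storage such as AWS S3. As a rough illustration only, a minimal boto3 sketch; the bucket name, key, and file paths are placeholders, not taken from any listing:

```python
# Minimal sketch: upload and retrieve a pipeline artifact with boto3.
# Bucket, key, and local paths are illustrative placeholders.
import boto3

s3 = boto3.client("s3")  # credentials resolved from the environment / IAM role

# Upload a local file produced by a pipeline run
s3.upload_file("output/report.csv", "my-data-bucket", "reports/2024/report.csv")

# Download it back for inspection
s3.download_file("my-data-bucket", "reports/2024/report.csv", "/tmp/report.csv")
```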
Sheffield, South Yorkshire, England, United Kingdom Hybrid / WFH Options
Vivedia Ltd
pipelines, data modeling, and data warehousing. Experience with cloud platforms (AWS, Azure, GCP) and tools like Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (Kafka, Spark Streaming, Flink) is a plus. Tools & Frameworks: Airflow, dbt, Prefect, CI/CD pipelines, Terraform. Mindset: Curious, data-obsessed, and driven to create meaningful business impact. Soft Skills: Excellent communication and …
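For the orchestration tooling named above (Airflow, dbt, Prefect), a minimal sketch of an Airflow 2.x DAG with two dependent tasks; the DAG id, schedule, and task bodies are illustrative only:

```python
# Minimal Airflow 2.x DAG sketch: one extract task feeding one load task.
# DAG id, schedule, and the task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling source data")


def load():
    print("writing to the warehouse")


with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # 'schedule' assumes Airflow >= 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```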
or similar quantitative field, or equivalent experience. 5+ years of professional programming in Scala, Python, etc. 3+ years of big data development experience with technical stacks such as Spark, Flink, Airflow, SingleStore, Kafka, and AWS big data technologies. Deep understanding of data modeling, distributed systems, and performance optimization. Knowledge of system and application design and architecture. Experience of building industry …
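For the Spark-plus-Kafka stack mentioned above, a minimal PySpark Structured Streaming sketch; it assumes the spark-sql-kafka connector package is available, and the broker address and topic name are placeholders:

```python
# Minimal PySpark Structured Streaming sketch: read a Kafka topic and count
# events per key. Requires the spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-counts").getOrCreate()

# Kafka source exposes binary 'key' and 'value' columns
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

counts = (
    events.select(F.col("key").cast("string"))
    .groupBy("key")
    .count()
)

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```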
Israel. Willingness and ability to travel abroad. Bonus Points: Knowledge of and hands-on experience with Office 365 - a big advantage. Experience in Kafka, and preferably some exposure to Apache Flink, is a plus. Why Join Semperis? You'll be part of a global team on the front lines of cybersecurity innovation. At Semperis, we celebrate curiosity, integrity, and people …
delivering under tight deadlines without compromising quality. Your Qualifications: 12+ years of software engineering experience, ideally in platform, infrastructure, or data-centric product development. Expertise in Apache Kafka, Apache Flink, and/or Apache Pulsar. Deep understanding of event-driven architectures, data lakes, and streaming pipelines. Strong experience integrating AI/ML models into production systems, including prompt engineering …
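For the event-driven Kafka work described above, a minimal consume-transform-produce sketch using kafka-python; the topic names, broker address, and enrichment step are illustrative assumptions:

```python
# Minimal event-driven sketch with kafka-python: consume events from one topic
# and republish enriched results to another. Topics and broker are placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "raw-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    event["processed"] = True  # stand-in for real enrichment or model scoring
    producer.send("enriched-events", event)
```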
Liverpool, Merseyside, England, United Kingdom Hybrid / WFH Options
red recruitment
to manage secure, high-performance database environments. Excellent communication and cross-functional collaboration skills. A passion for continuous learning and innovation. Desirable: Azure Synapse / Sharedo / Databricks; Python / Flink / Kafka technologies. If you are interested in this Senior Database Developer position and have the relevant skills and experience required, please apply now! Red Recruitment (Agency …
required in the role; we are happy to support your learning on the job, but prior experience is a plus: Experience with large-scale data processing frameworks (e.g., Spark, Flink). Experience with time series analysis, anomaly detection, or graph analytics in a security context. Proficiency in data visualization tools and techniques to effectively communicate complex findings. A basic …
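For the time series anomaly detection mentioned above, a minimal rolling z-score sketch in pandas; the window size, threshold, and column name are illustrative choices, not from the listing:

```python
# Minimal rolling z-score anomaly detection over a time series.
# Window, threshold, and column names are illustrative.
import pandas as pd


def flag_anomalies(series: pd.Series, window: int = 60, threshold: float = 3.0) -> pd.Series:
    rolling_mean = series.rolling(window).mean()
    rolling_std = series.rolling(window).std()
    z_scores = (series - rolling_mean) / rolling_std
    return z_scores.abs() > threshold


# Example usage on a DataFrame with a hypothetical 'bytes_sent' metric:
# df["anomaly"] = flag_anomalies(df["bytes_sent"])
```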
enterprise-scale Business Planning Software solutions. Your Impact: Design, build, and operate platform capabilities supporting batch, streaming, and AI-driven workloads. Develop resilient and scalable systems using Apache Kafka, Flink, Pulsar, and cloud-native technologies. Collaborate with AI/ML teams to deploy models and enable generative AI use cases. Implement integrations with data lakes and event stores to … 8+ years of hands-on experience in software engineering, especially in platform/backend systems. Expert-level skills in Java and strong proficiency in Python. Experience with Apache Kafka, Flink, and Pulsar for building distributed data pipelines. Familiarity with scalable data storage and data lake integrations. Proven ability to integrate AI/ML models and work with prompt-based …
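For the Flink side of the streaming stack above, a minimal PyFlink DataStream sketch; a production job would read from Kafka or Pulsar connectors rather than an in-memory collection, and the job name and events are placeholders:

```python
# Minimal PyFlink DataStream sketch: uppercase a small stream of strings.
# Illustrates the API shape only; real jobs would use Kafka/Pulsar connectors.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

stream = env.from_collection(["page_view", "click", "purchase"])
stream.map(lambda event: event.upper()).print()

env.execute("example-flink-job")
```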
the platform. Your Impact: Build and maintain core platform capabilities that support high-throughput batch, streaming, and AI-powered workloads. Develop resilient, observable, and scalable systems using Apache Kafka, Flink, Pulsar, and cloud-native tools. Collaborate with AI/ML engineers to operationalize models and enable generative AI use cases such as prompt-based insights or automation. Deliver reliable … experience (or equivalent) with deep experience in platform/backend systems. Expert-level skills in Java, with strong proficiency in Python. Experience building distributed data pipelines using Apache Kafka, Flink, and Pulsar. Familiarity with data lakes and scalable data storage patterns. Demonstrated experience integrating with AI/ML models, including LLMs and prompt-based applications. Proven capability in full-stack …
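For the LLM and prompt-based integration mentioned above, a hedged sketch using the OpenAI Python client; the provider, model name, and prompt are assumptions, as the listing names no specific vendor:

```python
# Hedged sketch of a prompt-based integration: send a short operational summary
# prompt to an LLM endpoint. Provider, model name, and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You summarise operational metrics."},
        {"role": "user", "content": "Summarise: error rate 0.4%, p99 latency 210 ms."},
    ],
)
print(response.choices[0].message.content)
```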