diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services
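The extract-transform-load flow described in the listing above can be sketched minimally in Python. This is an illustrative sketch only: the CSV string, table name, and filter threshold are invented, and an in-memory SQLite database stands in for a real warehouse such as Redshift or Synapse.

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (an in-memory string
# stands in for a real file, API export, or database dump).
RAW = "id,amount\n1,10.5\n2,3.2\n3,7.0\n"
rows = list(csv.DictReader(io.StringIO(RAW)))

# Transform: cast types and filter out low-value records.
cleaned = [(int(r["id"]), float(r["amount"]))
           for r in rows if float(r["amount"]) > 5.0]

# Load: write the transformed rows into a warehouse table
# (in-memory SQLite here; the pattern is the same for a real warehouse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
conn.commit()

total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 17.5
```

At scale, the same three stages map onto Spark jobs (transform) and Kafka topics (extract), but the shape of the pipeline is unchanged.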
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
Excellent stakeholder management and documentation skills Team leadership experience with the ability to mentor and develop engineering talent Nice to haves: Knowledge of data streaming platforms such as Kafka or Flink Exposure to graph databases or vector database technologies Professional certifications in Azure or AWS cloud platforms If you're ready to take the lead on transformative data engineering projects
LLM-based code assistants, retrieval-augmented generation). Experience with observability tools (Prometheus, Grafana, ELK, OpenTelemetry) and applying AI for intelligent alerting. Knowledge of big data frameworks (Kafka, Spark, Flink) for data-driven AI use cases. Background in finance, justice, or enterprise-scale digital transformation projects. What We Offer Opportunity to lead high-impact engineering teams delivering innovative Java
quality, security, and best practices Collaborate with cross-functional teams Implement and manage MLOps capabilities Essential Skills: Advanced Python programming skills Expertise in data engineering tools and frameworks (Apache Flink) Hands-on AWS experience (Serverless, CloudFormation, CDK) Strong understanding of containerization, CI/CD, and DevOps Modern data storage knowledge Sound like you? Please get your CV over ASAP.
leading data and ML platform infrastructure, balancing maintenance with exciting greenfield projects. Develop and maintain our real-time model serving infrastructure, utilising technologies such as Kafka, Python, Docker, Apache Flink, Airflow, and Databricks. Actively assist in model development and debugging using tools like PyTorch, Scikit-learn, MLFlow, and Pandas, working with models from gradient boosting classifiers to custom GPT
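The serving pattern named in the listing above — events streamed off a broker and scored by a model — can be sketched in plain Python. All names here are illustrative: a `queue.Queue` stands in for a Kafka topic, and a fixed linear scorer stands in for a trained PyTorch or Scikit-learn model.

```python
import queue

# A plain queue stands in for a Kafka topic; in production a broker
# consumer or a Flink job would read these events instead.
events = queue.Queue()
for features in ([0.2, 0.9], [0.8, 0.1], [0.5, 0.5]):
    events.put(features)

def score(features):
    # Stub for a real model (gradient boosting, custom GPT, ...):
    # a fixed linear scorer keeps the example self-contained.
    weights = [0.6, 0.4]
    return sum(w * x for w, x in zip(weights, features))

# Serving loop: pull each event, score it, emit the prediction.
predictions = []
while not events.empty():
    predictions.append(round(score(events.get()), 2))

print(predictions)  # [0.48, 0.52, 0.5]
```

The real infrastructure adds the hard parts around this loop — consumer groups, model loading via MLFlow, retries — but the consume-score-emit core is the same.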
Israel. Willingness and ability to travel abroad. Bonus Points: Knowledge of and hands-on experience with Office 365 is a big advantage. Experience in Kafka, and preferably some exposure to Apache Flink, is a plus. Why Join Semperis? You'll be part of a global team on the front lines of cybersecurity innovation. At Semperis, we celebrate curiosity, integrity, and people
delivering under tight deadlines without compromising quality. Your Qualifications 12+ years of software engineering experience, ideally in platform, infrastructure, or data-centric product development. Expertise in Apache Kafka, Apache Flink, and/or Apache Pulsar. Deep understanding of event-driven architectures, data lakes, and streaming pipelines. Strong experience integrating AI/ML models into production systems, including prompt engineering
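The event-driven architectures this listing asks for boil down to publishers emitting events on topics and decoupled subscribers reacting. A minimal in-process sketch, with invented topic and handler names — in a real deployment, Kafka or Pulsar topics play the role of this dictionary across services:

```python
from collections import defaultdict

# Minimal in-process event bus illustrating the publish/subscribe pattern.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    # Register a handler to be called for every event on the topic.
    subscribers[topic].append(handler)

def publish(topic, event):
    # Fan the event out to every registered handler, in order.
    for handler in subscribers[topic]:
        handler(event)

# Two independent consumers of the same "orders" topic.
audit_log = []
subscribe("orders", lambda e: audit_log.append(("audited", e["id"])))
subscribe("orders", lambda e: audit_log.append(("shipped", e["id"])))

publish("orders", {"id": 42})
print(audit_log)  # [('audited', 42), ('shipped', 42)]
```

The key property is that the publisher knows nothing about its consumers, which is what lets new services subscribe to a stream without touching existing code.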
to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes, and systems to ensure team alignment and knowledge sharing. Your Qualifications Experience: Professional experience … pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge: Exposure to frontend frameworks like React
of the platform Your Qualifications 12+ years of software engineering experience in enterprise-scale, data-centric, or platform environments Deep expertise in distributed data technologies such as Apache Kafka, Flink, and/or Pulsar Strong background in event-driven architectures, streaming pipelines, and data lakes Hands-on experience with AI/ML production systems, including prompt-based LLM integrations
with a view to becoming an expert BS degree in Computer Science or meaningful relevant work experience Preferred Qualifications Experience with large scale data platform infrastructure such as Spark, Flink, HDFS, AWS/S3, Parquet, Kubernetes is a plus
Liverpool, Merseyside, United Kingdom Hybrid / WFH Options
Red Recruitment
to manage secure, high-performance database environments. Excellent communication and cross-functional collaboration skills. A passion for continuous learning and innovation. Desirable: Azure Synapse/Sharedo/Databricks Python/Flink/Kafka technology If you are interested in this Senior Database Developer position and have the relevant skills and experience required, please apply now! Red Recruitment (Agency
plus Experience with Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems Technologies we use (experience not required): AWS serverless architectures Kubernetes Spark Flink Databricks Parquet, Iceberg, Delta Lake, Paimon Terraform GitHub, including GitHub Actions Java PostgreSQL About Chainalysis Blockchain technology is powering a growing wave of innovation. Businesses and governments around the
. Collaborate with a supportive team that values innovation, creativity, and growth. Qualifications Experience with SQL development, SSIS, Azure Data Factory; Git knowledge is essential. Bonus: experience with Confluent, Flink, Kafka, or AWS. Someone innovative, team-oriented, and ready to push boundaries. Why you'll love it Salary up to £60K plus pension, healthcare, holidays, and bonuses.
to cross-functional teams, ensuring best practices in data architecture, security and cloud computing Proficiency in data modelling, ETL processes, data warehousing, distributed systems and metadata systems Utilise Apache Flink and other streaming technologies to build real-time data processing systems that handle large-scale, high-throughput data Ensure all data solutions comply with industry standards and government regulations … not limited to EC2, S3, RDS, Lambda and Redshift. Experience with other cloud providers (e.g., Azure, GCP) is a plus In-depth knowledge and hands-on experience with Apache Flink for real-time data processing Proven experience in mentoring and managing teams, with a focus on developing talent and fostering a collaborative work environment Strong ability to engage with
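The real-time processing this listing describes typically means windowed aggregation over a stream, which is Flink's core abstraction. A pure-Python sketch of a tumbling event-time window, with invented timestamps and a 5-second window size; in a Flink job the events would arrive on a DataStream rather than a list:

```python
from collections import defaultdict

# Timestamped events: (event_time_seconds, value). A static list keeps
# the sketch self-contained; a stream processor sees these incrementally.
events = [(1, 10), (4, 20), (6, 5), (9, 15), (11, 30)]

WINDOW = 5  # tumbling window size in seconds

# Assign each event to the window containing its timestamp, then sum
# the values per window (a tumbling-window aggregation).
windows = defaultdict(int)
for ts, value in events:
    window_start = (ts // WINDOW) * WINDOW
    windows[window_start] += value

print(dict(sorted(windows.items())))  # {0: 30, 5: 20, 10: 30}
```

What Flink adds on top of this arithmetic is the hard distributed-systems part: watermarks for late data, state checkpointing, and parallel execution across a cluster.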
the platform. Your Impact Build and maintain core platform capabilities that support high-throughput batch, streaming, and AI-powered workloads. Develop resilient, observable, and scalable systems using Apache Kafka, Flink, Pulsar, and cloud-native tools. Collaborate with AI/ML engineers to operationalize models and enable generative AI use cases such as prompt-based insights or automation. Deliver reliable … experience (or equivalent) with deep experience in platform/backend systems. Expert-level skills in Java, with strong proficiency in Python. Experience building distributed data pipelines using Apache Kafka, Flink, and Pulsar. Familiarity with data lakes and scalable data storage patterns. Demonstrated experience integrating with AI/ML models, including LLMs and prompt-based applications. Proven capability in fullstack