collaboration, and high performance. Hold team members accountable for their performance and professional growth. Technical Expertise: Bring a deep understanding of data technologies, including batch processing, stream processing, storage, analytics, machine learning, and AI. Collaborate with Engineering teams to ensure the data platform meets technical requirements and … data-focused environment. Strong technical background and a degree in Computer Science, Engineering, or a related field. Hands-on experience with data technologies, including batch processing, stream processing, data warehousing, and analytics platforms. Proven track record of successfully leading product strategies that drive significant business outcomes. Experience … experience in mentoring and developing Product Managers. Ability to inspire and lead cross-functional teams toward common goals. Deep understanding of data architectures, data processing frameworks, and data modelling. Familiarity with technologies such as AWS and other cloud-based data services. Knowledge of data privacy laws and compliance requirements More ❯
built with .NET (C#) on the backend and React on the frontend, hosted in AWS and handling workflows that include external integrations, data pipelines, batch processing, and partner APIs. We're transitioning from a fully offshore model to a hybrid delivery team. As part of that, we're … What You'll Be Doing Developing backend services in .NET (C#) for a live, production web application Designing and maintaining APIs, business logic, and batch processes Supporting internal data pipelines and working with third-party data sources Writing clean, well-tested code and participating in structured peer reviews Working … experience delivering production systems using .NET Core Solid SQL skills and a strong grasp of API design and back-end architecture Experience with data processing, batch jobs, or similar workload-heavy systems Comfortable working independently in a remote-first environment Experience working with distributed or offshore teams Familiar More ❯
Watford, Hertfordshire, United Kingdom Hybrid / WFH Options
Digital Gaming Corp
data from sources like Facebook, Google Analytics, and payment providers. Using tools like AWS Redshift, S3, and Kafka, you'll optimize data models for batch and real-time processing. Collaborating with stakeholders, you'll deliver actionable insights on player behavior and gaming analytics, enhancing experiences and driving revenue with … robust ETL pipelines to integrate data from diverse sources, including APIs like Facebook, Google Analytics, and payment providers. Develop and optimize data models for batch processing and real-time streaming using tools like AWS Redshift, S3, and Kafka. Lead efforts in acquiring, storing, processing, and provisioning data More ❯
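The ETL cycle this listing describes can be sketched in miniature. Everything below is illustrative: the payload shape, field names, and in-memory "table" are stand-ins I have invented, not the company's actual stack (a real pipeline of this kind would typically stage files to S3 and COPY them into Redshift):

```python
# Minimal ETL sketch: pull rows out of a provider payload, normalise them,
# and append them to a target table. All names and shapes are illustrative.

def extract(provider_payload):
    """Return the raw event rows from a (hypothetical) API response."""
    return provider_payload["events"]

def transform(rows):
    """Normalise field names and store monetary amounts as integer pence."""
    return [
        {"player_id": r["uid"], "event": r["type"],
         "amount_pence": int(round(r["amount"] * 100))}
        for r in rows
    ]

def load(rows, table):
    """Append transformed rows to the target table; return rows loaded."""
    table.extend(rows)
    return len(rows)

payload = {"events": [{"uid": "p1", "type": "deposit", "amount": 9.99}]}
table = []
loaded = load(transform(extract(payload)), table)
# loaded == 1; table[0]["amount_pence"] == 999
```

Keeping extract, transform and load as separate pure functions is what makes a pipeline like this testable stage by stage.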
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Smart DCC
the data environment. What will you be doing? Design and implement efficient ETL processes for data extraction, transformation, and loading. Build real-time data processing pipelines using platforms like Apache Kafka or cloud-native tools. Optimize batch processing workflows with tools like Apache Spark and Flink for … What are we looking for? Advanced proficiency with databases (SQL Server, Oracle, MySQL, PostgreSQL). Expertise in building and managing data pipelines and data processing workflows. Strong understanding of data warehousing concepts, schema design, and data modelling. Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) for scalable More ❯
and implement streaming data pipelines using AWS EMR and PySpark to generate real-time (fast-moving) features for the feature store. Develop and maintain batch processing pipelines using DBT and BigQuery to generate batch (slow-moving) features, ensuring data quality, consistency and reliability. Work with Feast feature … recruiting and related purposes. Our Privacy Notice explains what personal information we will process, where we will process your personal information, its purposes for processing your personal information, and the rights you can exercise over our use of your personal information. More ❯
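The "slow-moving" batch features this posting describes can be illustrated with a plain-Python stand-in for the kind of aggregate a DBT model over BigQuery might materialise. The feature itself (mean transaction amount per user) and the field names are hypothetical examples, not taken from the posting:

```python
# Illustrative batch-feature computation: one aggregate value per entity,
# recomputed on a schedule rather than streamed. Names are hypothetical.
from collections import defaultdict

def batch_avg_amount(events):
    """events: iterable of (user_id, amount) pairs.
    Returns {user_id: mean amount} - a 'slow-moving' feature table."""
    totals, counts = defaultdict(float), defaultdict(int)
    for user, amount in events:
        totals[user] += amount
        counts[user] += 1
    return {u: totals[u] / counts[u] for u in totals}

features = batch_avg_amount([("u1", 10.0), ("u1", 20.0), ("u2", 5.0)])
# features == {"u1": 15.0, "u2": 5.0}
```

In practice this aggregate would live as SQL in a DBT model and the result would be registered with the feature store rather than held in a dict.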
the LLM and MLOps space Main Purpose of Role LLM/NLP Production Engineering Build and maintain scalable, production-ready pipelines for Natural Language Processing and Large Language Model (LLM) features. Package and deploy inference services for ML models and prompt-based LLM workflows using containerised services. Ensure reliable … model integration across real-time APIs and batch processing systems. Pipeline Automation & MLOps Use Apache Airflow (or similar) to orchestrate ETL and ML workflows. Leverage MLflow or other MLOps tools to manage model lifecycle tracking, reproducibility, and deployment. Create and manage robust CI/CD pipelines tailored for More ❯
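Airflow's core job as described here, running dependent tasks in order, can be sketched without the framework itself. The toy runner and task names below are illustrative only; a real Airflow DAG adds scheduling, retries, cycle detection, and distributed execution on top of this idea:

```python
# Minimal stand-in for what an orchestrator like Airflow does: execute
# tasks in dependency order. Task names are illustrative.

def run_dag(tasks, deps):
    """tasks: {name: callable}; deps: {name: [upstream names]}.
    Runs each task after its upstreams; returns the execution order.
    (No cycle detection - a real orchestrator validates the graph first.)"""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "extract": lambda: log.append("extract"),
    "train":   lambda: log.append("train"),
    "deploy":  lambda: log.append("deploy"),
}
deps = {"train": ["extract"], "deploy": ["train"]}
order = run_dag(tasks, deps)
# order == ["extract", "train", "deploy"]
```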
london, south east england, United Kingdom Hybrid / WFH Options
Intec Select
robust ETL pipelines to integrate data from diverse sources, including APIs like Facebook, Google Analytics, and payment providers. Develop and optimize data models for batch processing and real-time streaming using tools like AWS Redshift, S3, and Kafka. Lead efforts in acquiring, storing, processing, and provisioning data More ❯
business area. Comfortable making presentations covering business, technical or sales. Detailed knowledge of: project phasing and reporting; on-site delivery; systems support; operational procedures; batch processing; system sizing and capacity planning; system backup, contingency and disaster recovery; regional roll-outs. Education & Preferred Qualifications Third level qualification essential ideally More ❯
for end-to-end ETL, ELT & reporting solutions using key components like Spark SQL & Spark Streaming. Strong knowledge of multi-threading and high-volume batch processing. Proficiency in performance tuning for Java/Python and Spark. Deep knowledge of AWS products/services and Kubernetes/container technologies and More ❯
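Multi-threaded, high-volume batch processing of the kind this listing asks for can be sketched with Python's standard thread pool. The chunk size, worker count, and per-chunk work below are arbitrary choices for illustration:

```python
# Sketch of chunked, multi-threaded batch processing: split the workload
# into chunks and process them concurrently with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def chunks(seq, size):
    """Yield consecutive slices of seq, each at most `size` long."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def process_chunk(chunk):
    """Stand-in for per-record work (parsing, enrichment, writes)."""
    return sum(x * 2 for x in chunk)

records = list(range(100))
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks(records, 25)))
total = sum(partials)
# total == 2 * sum(range(100)) == 9900
```

For CPU-bound work in CPython a process pool (or Spark, as the listing mentions) is usually the better fit; threads shine when each chunk waits on I/O.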
chunks and deliver them with high quality. Key Responsibilities Distributed Systems Development: Design and build scalable distributed systems using Java-based microservices and Python batch processing to support our ML models, evaluation, and observability. Model Lifecycle: Create and maintain robust model deployment pipelines using PySpark and Databricks, ensuring … team members. Complementary skills for this role Technical Expertise: Extensive experience with distributed systems engineering, including designing and implementing Java-based microservices and Python batch jobs. Observability Knowledge: Deep understanding of observability principles, including monitoring, logging, and real-time system insights Data Engineering Skills: Proficiency in building data pipelines More ❯
to support LLM-based querying, semantic search, and metadata retrieval. Integrate structured (SQL-based) and unstructured (documents, reports) data sources for real-time and batch processing. Maintain and troubleshoot Airflow pipelines for embedding extraction and document processing. Ensure data governance, security, and compliance across all applications. Manage Vector Database More ❯
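A common first step behind the embedding extraction and semantic search this listing mentions is chunking documents into overlapping windows before they are embedded. The window and overlap sizes below are illustrative defaults, not taken from the posting:

```python
# Sketch of document chunking for embedding pipelines: split text into
# overlapping fixed-size word windows. Sizes are illustrative; real
# pipelines often chunk by tokens and respect sentence boundaries.

def chunk_words(text, size=50, overlap=10):
    """Split text into word windows of `size` words, overlapping by
    `overlap` words (assumes size > overlap)."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

doc = " ".join(f"w{i}" for i in range(120))
pieces = chunk_words(doc, size=50, overlap=10)
# 120 words with step 40 -> windows starting at 0, 40, 80 -> 3 chunks
```

The overlap keeps context that straddles a boundary retrievable from both neighbouring chunks.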
in Java, Spark, Scala (or Java). Production-scale hands-on experience writing data pipelines using Spark or any other distributed real-time/batch processing framework. Strong skill set in SQL/databases. Strong understanding of messaging tech like Kafka, Solace, MQ etc. Writing production-scale applications to use More ❯
Lake, Synapse, Data Factory) and AWS (e.g., Redshift, Glue, S3, Lambda). Strong background in data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of More ❯
newcastle-upon-tyne, tyne and wear, north east england, United Kingdom
Apexon
Lake, Synapse, Data Factory) and AWS (e.g., Redshift, Glue, S3, Lambda). Strong background in data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of More ❯
in a BI leadership role in a global or matrixed organisation. Proven expertise in modern BI architecture (Data Lake, EDW, Streaming, APIs, Real-Time & Batch Processing). Demonstrated experience delivering cloud-based analytics platforms (Azure, AWS, GCP). Strong knowledge of data governance, cataloguing, security, automation, and self More ❯
the machine learning lifecycle is an advantage. Key Responsibilities Distributed Systems Development: Design and build scalable distributed systems using Java-based microservices and Python batch processing to support AI guardrails, evaluation, and observability. Data Pipelines: Create and maintain robust data pipelines using PySpark and Databricks, ensuring efficient and … growth among team members. What You Bring Technical Expertise: Extensive experience with distributed systems engineering, including designing and implementing Java-based microservices and Python batch jobs. Data Engineering Skills: Proficiency in building data pipelines using PySpark and Databricks, with a strong understanding of data flow and processing. Cloud Vendor More ❯
SQL (Postgres, SQL Server, Databricks); Linux (via WSL), Bash; AWS & Azure, Infrastructure as Code; large-scale structured & unstructured cyber risk data, real-time and batch processing; Agile, CI/CD, test automation, pairing culture. Strong experience as a senior or lead software engineer in a data-rich environment. More ❯
and debugging code-related performance issues, SQL tuning, etc. Experience with AWS services such as S3, RDS, Aurora, NoSQL, MSK (Kafka). Experience with batch processing/ETL using Glue jobs, AWS Lambda, and Step Functions. Experience with designing bespoke & tailored front-end solutions (GUI based) using open More ❯
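The Lambda piece of a Glue/Step Functions ETL flow like the one described can be sketched as a plain handler function. The event shape and field names below are assumptions for illustration, not the actual integration contract:

```python
# Hypothetical Lambda-style handler in an ETL flow: receive an event
# carrying JSON-encoded records and return normalised rows for the next
# step. The event shape here is invented for illustration.
import json

def handler(event, context=None):
    """event: assumed shape {"records": [{"body": "<json string>"}]}."""
    rows = []
    for rec in event["records"]:
        payload = json.loads(rec["body"])
        rows.append({"id": payload["id"], "status": "processed"})
    return {"count": len(rows), "rows": rows}

out = handler({"records": [{"body": json.dumps({"id": 1})}]})
# out["count"] == 1
```

In a real deployment, Step Functions would pass this handler's return value as input to the next state in the state machine.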
City, Edinburgh, United Kingdom Hybrid / WFH Options
ENGINEERINGUK
and several other benefits. The individual selected for this position will be responsible for business-critical compute workloads, real-time/interactive processing, data transfer services, application and new technology onboarding, upgrades, and recovery procedures. The international team is split into 4 global regions to provide … implementation and weekend checkouts; aid in incident management and root cause analysis. Provide ongoing operational support for the Aladdin infrastructure. Support and fix both batch processing and interactive user applications to ensure the high availability of the Aladdin Environment. Use various tools to conduct analysis on system performance More ❯
London Based Investment Bank You will: Design and build foundation components that will underpin our data mesh ecosystem. Build enterprise class real-time and batch solutions that support mission critical processes. Build solutions in line with our Digital Principles. Partner with our Product team(s) to create sustainable and … hands-on engineering in large scale complex Enterprise(s), ideally in the banking/financial industry. Worked with modern tech - data streaming, real-time & batch processing and compute clusters. Working knowledge of relational and NoSQL databases, designing and implementing scalable solutions. Experience of working in continuous architecture environment More ❯
and compliance with enterprise architecture standards. Key Responsibilities: Design integration architecture between MSS 6.0 and all upstream/downstream systems, including real-time APIs, batch processes, and middleware (e.g., Mulesoft, JBOSS, MQ). Define interface specifications , data contracts , and error handling strategies for all key integration touchpoints. Ensure alignment More ❯
/knowledge/experience: Experience in Snowflake, DBT, Glue, Airflow. Proficiency in architecting and building solutions to support highly available and scalable websites, batch processes, services and APIs. Understanding of IAM policies and statements. Experience of writing and using Terraform or AWS CDK at scale. Postgres SQL More ❯