collaboration, and high performance. Hold team members accountable for their performance and professional growth. Technical Expertise: Bring a deep understanding of data technologies, including batch processing, stream processing, storage, analytics, machine learning, and AI. Collaborate with Engineering teams to ensure the data platform meets technical requirements and … data-focused environment. Strong technical background and a degree in Computer Science, Engineering, or a related field. Hands-on experience with data technologies, including batch processing, stream processing, data warehousing, and analytics platforms. Proven track record of successfully leading product strategies that drive significant business outcomes. Experience … experience in mentoring and developing Product Managers. Ability to inspire and lead cross-functional teams toward common goals. Deep understanding of data architectures, data processing frameworks, and data modelling. Familiarity with technologies such as AWS and other cloud-based data services. Knowledge of data privacy laws and compliance requirements …
of-concept to beta-product phase. You will have the opportunity to learn new techniques and algorithms at the cutting edge of Natural Language Processing (NLP), specifically in compensating for Large Language Model limitations. You will also have the opportunity to innovate and contribute to algorithm development. Dr G.A. McHale … team is led by someone with significant AI experience in bio-inspired architectures, reinforcement learning, expert systems, scheduling, meta-heuristics, robotics and natural language processing (including LLMs). We have recruited an experienced scientific computing developer with a strong mathematics background in theoretical physics, whose responsibilities relate to distributed … work includes hybrid-LLM system optimisation, and possible pre-filtering algorithms for reduced computational loads. Success amounts to significant reductions in batch processing and inference costs. About You: This is a hands-on programming role. It is essential that you have some expertise in LLMs …
Watford, Hertfordshire, United Kingdom Hybrid / WFH Options
Digital Gaming Corp
data from sources like Facebook, Google Analytics, and payment providers. Using tools like AWS Redshift, S3, and Kafka, you'll optimize data models for batch and real-time processing. Collaborating with stakeholders, you'll deliver actionable insights on player behavior and gaming analytics, enhancing experiences and driving revenue with … robust ETL pipelines to integrate data from diverse sources, including APIs like Facebook, Google Analytics, and payment providers. Develop and optimize data models for batch processing and real-time streaming using tools like AWS Redshift, S3, and Kafka. Lead efforts in acquiring, storing, processing, and provisioning data …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Smart DCC
the data environment. What will you be doing? Design and implement efficient ETL processes for data extraction, transformation, and loading. Build real-time data processing pipelines using platforms like Apache Kafka or cloud-native tools. Optimize batch processing workflows with tools like Apache Spark and Flink for … What are we looking for? Advanced proficiency with databases (SQL Server, Oracle, MySQL, PostgreSQL). Expertise in building and managing data pipelines and data processing workflows. Strong understanding of data warehousing concepts, schema design, and data modelling. Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) for scalable …
systems development, e.g. in a hedge fund or investment bank. Expertise in building distributed systems with large data warehouses and both online and batch processing. Experience of web-based development and visualisation technology for portraying large and complex data sets and relationships. Substantial quant development engineering experience …
for end-to-end ETL, ELT & reporting solutions using key components like Spark SQL & Spark Streaming. Strong knowledge of multi-threading and high-volume batch processing. Proficiency in performance tuning for Java/Python and Spark. Deep knowledge of AWS products/services and Kubernetes/container technologies and …
the LLM and MLOps space. Main Purpose of Role: LLM/NLP Production Engineering. Build and maintain scalable, production-ready pipelines for Natural Language Processing and Large Language Model (LLM) features. Package and deploy inference services for ML models and prompt-based LLM workflows using containerised services. Ensure reliable … model integration across real-time APIs and batch processing systems. Pipeline Automation & MLOps: Use Apache Airflow (or similar) to orchestrate ETL and ML workflows. Leverage MLflow or other MLOps tools to manage model lifecycle tracking, reproducibility, and deployment. Create and manage robust CI/CD pipelines tailored for …
London, South East England, United Kingdom Hybrid / WFH Options
Intec Select
robust ETL pipelines to integrate data from diverse sources, including APIs like Facebook, Google Analytics, and payment providers. Develop and optimize data models for batch processing and real-time streaming using tools like AWS Redshift, S3, and Kafka. Lead efforts in acquiring, storing, processing, and provisioning data …
in Java, Spark, Scala (or Java). Production-scale hands-on experience writing data pipelines using Spark or any other distributed real-time/batch processing framework. Strong skill set in SQL/databases. Strong understanding of messaging technologies like Kafka, Solace, MQ etc. Writing production-scale applications to use …
Lake, Synapse, Data Factory) and AWS (e.g., Redshift, Glue, S3, Lambda). Strong background in data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of …
business area. Comfortable making presentations covering business, technical or sales topics. Detailed knowledge of: project phasing and reporting; on-site delivery; systems support; operational procedures; batch processing; system sizing and capacity planning; system backup, contingency and disaster recovery; regional roll-outs. Education & Preferred Qualifications: Third-level qualification essential, ideally …
in a BI leadership role in a global or matrixed organisation. Proven expertise in modern BI architecture (Data Lake, EDW, Streaming, APIs, Real-Time & Batch Processing). Demonstrated experience delivering cloud-based analytics platforms (Azure, AWS, GCP). Strong knowledge of data governance, cataloguing, security, automation, and self …
SQL (Postgres, SQL Server, Databricks), Linux (via WSL), Bash. AWS & Azure, Infrastructure as Code. Large-scale structured & unstructured cyber risk data, real-time and batch processing. Agile, CI/CD, test automation, pairing culture. Strong experience as a senior or lead software engineer in a data-rich environment. …
and debugging code-related performance issues, SQL tuning, etc. Experience with AWS services such as S3, RDS, Aurora, NoSQL, MSK (Kafka). Experience with batch processing/ETL using Glue jobs, AWS Lambda, and Step Functions. Experience with designing bespoke & tailored front-end solutions (GUI-based) using open …
London-based investment bank. You will: Design and build foundation components that will underpin our data mesh ecosystem. Build enterprise-class real-time and batch solutions that support mission-critical processes. Build solutions in line with our Digital Principles. Partner with our Product team(s) to create sustainable and … hands-on engineering in large-scale complex enterprises, ideally in the banking/financial industry. Worked with modern tech - data streaming, real-time & batch processing and compute clusters. Working knowledge of relational and NoSQL databases, designing and implementing scalable solutions. Experience of working in a continuous architecture environment …
modelling for trading and financial systems – from market data to trade flows and risk metrics. Build and optimise pipelines for near real-time and batch processing using Databricks and related tech. Define and implement data standards, lineage, governance, and quality metrics across systems. Collaborate closely with tech, data …
projects, topical workshops, and lead implementation projects. These professional services engagements will focus on key customer solutions such as web applications, enterprise applications, HPC, batch processing, big data, archiving, disaster recovery, education, and government. Professional Services engage in a wide variety of projects for customers and partners, providing …
and compliance with enterprise architecture standards. Key Responsibilities: Design integration architecture between MSS 6.0 and all upstream/downstream systems, including real-time APIs, batch processes, and middleware (e.g., MuleSoft, JBoss, MQ). Define interface specifications, data contracts, and error-handling strategies for all key integration touchpoints. Ensure alignment …
/knowledge/experience: Experience in Snowflake, DBT, Glue, Airflow. Proficiency in architecture and building solutions to support highly available and scalable websites, batch processes, services and APIs. Understanding of IAM policies and statements. Experience of writing and using Terraform or AWS CDK at scale. Postgres SQL …