Slough, South East England, United Kingdom Hybrid / WFH Options
Hlx Technology
data infrastructure or data platforms, with proven ability to solve complex distributed systems challenges independently Expertise in large-scale data processing pipelines (batch and streaming) using technologies such as Spark, Kafka, Flink, or Beam Experience designing and implementing large-scale data storage systems (feature stores, time-series databases, warehouses, or object stores) Strong distributed systems and infrastructure skills (Kubernetes, Terraform More ❯
London (City of London), South East England, United Kingdom Hybrid / WFH Options
Hlx Technology
data infrastructure or data platforms, with proven ability to solve complex distributed systems challenges independently Expertise in large-scale data processing pipelines (batch and streaming) using technologies such as Spark, Kafka, Flink, or Beam Experience designing and implementing large-scale data storage systems (feature stores, time-series databases, warehouses, or object stores) Strong distributed systems and infrastructure skills (Kubernetes, Terraform More ❯
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
version control, e.g., Git. Knowledge of OO programming, software design, e.g., SOLID principles, and testing practices. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with Databricks, Spark, geospatial data/modelling Exposure to MLOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow, k8s, FastAPI etc. are a plus. Additional Information What’s More ❯
data quality, or other areas directly relevant to data engineering responsibilities and tasks Proven project experience developing and maintaining data warehouses in big data solutions (Snowflake) Expert knowledge in Apache technologies such as Kafka, Airflow, and Spark to build scalable and efficient data pipelines Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize More ❯
on technically/40% hands-off leadership and strategy Proven experience designing scalable data architectures and pipelines Strong Python, SQL, and experience with tools such as Airflow, dbt, and Spark Cloud expertise (AWS preferred), with Docker/Terraform A track record of delivering in fast-paced, scale-up environments Nice to have: Experience with streaming pipelines, MLOps, or modern More ❯
Employment Type: Full-Time
Salary: £110,000 - £120,000 per annum, Inc benefits
engineering and model evaluation. Deployment experience using Docker or other containerisation tools. Exposure to GPU-based environments for large-scale model training and tuning. Experience with big data tools (Spark, Hadoop) and cloud platforms (AWS, GCP, Azure) is a plus. Strong analytical mindset with the ability to translate data into actionable insights. If you or someone you know of More ❯
to refine and monitor data collection systems using Scala and Java. Apply sound engineering principles such as test-driven development and modular design. Preferred Background Hands-on experience with Spark and Scala in commercial environments. Familiarity with Java and Python. Exposure to distributed data systems and cloud storage platforms. Experience designing data schemas and analytical databases. Use of AI More ❯
Pydantic) for document processing, summarization, and clinical Q&A systems. Develop and optimize predictive models using scikit-learn, PyTorch, TensorFlow, and XGBoost. Design robust data pipelines using tools like Spark and Kafka for real-time and batch processing. Manage ML lifecycle with tools such as Databricks , MLflow , and cloud-native platforms (Azure preferred). Collaborate with engineering teams to More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
technical background (5+ years) in building scalable data platforms Excellent communication and stakeholder management skills Hands-on experience with modern data tools and technologies — Python, SQL, Snowflake, Airflow, dbt, Spark, AWS, Terraform A collaborative mindset and a passion for mentoring and developing others Comfortable balancing technical decisions with business needs Nice to have: experience with Data Mesh, real-time More ❯
Ability to translate complex technical problems into business solutions. 🌟 It’s a Bonus If You Have: Experience in SaaS, fintech, or software product companies. Knowledge of big data frameworks (Spark, Hadoop) or cloud platforms (AWS, GCP, Azure). Experience building and deploying models into production. A strong interest in AI, automation, and software innovation. 🎁 What’s in It for More ❯
Research/Statistics or other quantitative fields. Experience in NLP, image processing and/or recommendation systems. Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop. Experience in data science for e-commerce and/or OTA. We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance More ❯
for new and existing diseases, and a pattern of continuous learning and development is mandatory. Key Responsibilities Build data pipelines using modern data engineering tools on Google Cloud: Python, Spark, SQL, BigQuery, Cloud Storage. Ensure data pipelines meet the specific scientific needs of data consuming applications. Responsible for high quality software implementations according to best practices, including automated test More ❯
London, South East England, United Kingdom Hybrid / WFH Options
Hunter Bond
in Python for data pipelines, transformation, and orchestration. Deep understanding of the Azure ecosystem (e.g., Data Factory, Blob Storage, Synapse, etc.) Proficiency in Databricks (or strong equivalent experience with Apache Spark). Proven ability to work within enterprise-level environments with a focus on clean, scalable, and secure data solutions. If you are the right fit - contact me More ❯
London (City of London), South East England, United Kingdom Hybrid / WFH Options
Hunter Bond
in Python for data pipelines, transformation, and orchestration. Deep understanding of the Azure ecosystem (e.g., Data Factory, Blob Storage, Synapse, etc.) Proficiency in Databricks (or strong equivalent experience with Apache Spark). Proven ability to work within enterprise-level environments with a focus on clean, scalable, and secure data solutions. If you are the right fit - contact me More ❯
Slough, South East England, United Kingdom Hybrid / WFH Options
Hunter Bond
in Python for data pipelines, transformation, and orchestration. Deep understanding of the Azure ecosystem (e.g., Data Factory, Blob Storage, Synapse, etc.) Proficiency in Databricks (or strong equivalent experience with Apache Spark). Proven ability to work within enterprise-level environments with a focus on clean, scalable, and secure data solutions. If you are the right fit - contact me More ❯
using RDBMS, NoSQL and Big Data technologies. Data visualization – tools like Tableau Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive Data processing frameworks – Spark & Spark Streaming More ❯
London (City of London), South East England, United Kingdom
HCLTech
using RDBMS, NoSQL and Big Data technologies. Data visualization – tools like Tableau Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive Data processing frameworks – Spark & Spark Streaming More ❯
modeling, machine-learning, clustering and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau More ❯
London (City of London), South East England, United Kingdom
BettingJobs
modeling, machine-learning, clustering and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
a plus). Experience with model lifecycle management (MLOps), including monitoring, retraining, and model versioning. Ability to work across data infrastructure, from SQL to large-scale distributed data tools (Spark, etc.). Strong written and verbal communication skills, especially in cross-functional contexts. Bonus Experience (Nice to Have) Exposure to large language models (LLMs) or foundational model adaptation. Previous More ❯
and listed on the London Stock Exchange. With 3,000 employees and 32 offices in 12 countries we're a business with lots of opportunity for people with talent, spark and lots of ambition. If you want to build a great career with a company that prioritises strong values - such as integrity and courage - where our people always pull More ❯