technologies to create and maintain data assets and reports for business insights. Assist in engineering and managing data models and pipelines within a cloud environment, utilizing technologies like Databricks, Spark, Delta Lake, and SQL. Contribute to the maintenance and enhancement of our progressive tech stack, which includes Python, PySpark, Logic Apps, Azure Functions, ADLS, Django, and ReactJS. Support the … environment and various platforms, including Azure and SQL Server. Experience with NoSQL databases is good to have. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Experience with DataOps practices and tools, including CI/CD for data pipelines. Experience with medallion data architecture and similar data modelling approaches. Experience with data More ❯
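For context on the medallion architecture this listing references, below is a minimal, illustrative bronze-to-silver step in PySpark with Delta Lake. The Spark/Delta configuration, table paths, and column names are assumptions made for the sketch, not details taken from the role.

```python
# Minimal medallion (bronze -> silver) sketch in PySpark with Delta Lake.
# Assumes a Spark runtime with the Delta Lake package available;
# table paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("medallion-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Bronze: raw ingested records, stored as-is.
bronze = spark.read.format("delta").load("/lake/bronze/orders")

# Silver: cleaned, deduplicated, conformed records.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_ts").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")
```

A gold layer would typically follow the same pattern, aggregating silver tables into reporting-ready datasets.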
data ecosystem (e.g., Pandas, NumPy) and deep expertise in SQL for building robust data extraction, transformation, and analysis pipelines. Hands-on experience with big data processing frameworks such as Apache Spark, Databricks, or Snowflake, with a focus on scalability and performance optimization. Familiarity with graph databases (e.g., Neo4j, Memgraph) or search platforms (e.g., Elasticsearch, OpenSearch) to support complex More ❯
teams. Preferred Skills: High-Performance Computing (HPC) and AI workloads for large-scale enterprise solutions. NVIDIA CUDA, cuDNN, TensorRT experience for deep learning acceleration. Big Data platforms (Hadoop, Spark) for AI-driven analytics in professional services. Please share your CV at payal.c@hcltech.com More ❯
South East London, England, United Kingdom Hybrid / WFH Options
Signify Technology
data loads, and data pipeline monitoring. Develop and optimise data pipelines for integrating structured and unstructured data from various internal and external sources. Leverage big data technologies such as Apache Spark, Kafka, and Scala to build robust and scalable data processing systems. Write clean, maintainable code in Python or Scala to support data transformation, orchestration, and integration tasks. … issues and optimise system performance. Qualifications: Proficient in handling multiple data sources and integrating data across different systems. 4+ years' experience as a Data Engineer. Hands-on expertise in Spark, Kafka, and other distributed data processing frameworks. Solid programming skills in Python. Strong familiarity with cloud data ecosystems, especially AWS. Strong knowledge of DBT and Snowflake. Strong problem-solving More ❯
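As an illustration of the Spark-plus-Kafka pipeline work this role describes, here is a minimal Structured Streaming ingestion sketch in Python. The broker address, topic name, schema, and output paths are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Minimal Kafka -> Spark Structured Streaming ingestion sketch (PySpark).
# Broker, topic, schema, and paths are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest-demo").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; parse the JSON payload into typed columns.
parsed = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

# Land the parsed events as files, with a checkpoint for exactly-once progress tracking.
query = (
    parsed.writeStream.format("parquet")
    .option("checkpointLocation", "/chk/events")
    .start("/lake/raw/events")
)
query.awaitTermination()
```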
Tech stack: Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow), SQL (Redshift, Snowflake or similar), AWS SageMaker → Azure ML migration, with Docker, Git, Terraform, Airflow/ADF. Optional extras: Spark, Databricks, Kubernetes. What you'll bring: 3-5+ years building optimisation or recommendation systems at scale. Strong grasp of mathematical optimisation (e.g., linear/integer programming, meta-heuristics … Production mindset: containerise models, deploy via Airflow/ADF, monitor drift, automate retraining. Soft skills: clear comms, concise docs, and a collaborative approach with DS, Eng & Product. Bonus extras: Spark/Databricks, Kubernetes, big-data panel or ad-tech experience. More ❯
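As one concrete illustration of the linear/integer programming skills mentioned, here is a tiny linear-programming sketch using SciPy. The cost vector and constraints are made-up numbers for the example, not anything from the role itself.

```python
# Tiny linear-programming example with SciPy's linprog.
# Minimise cost = 2*x1 + 3*x2, subject to x1 + x2 >= 10 and 0 <= x1, x2 <= 8.
from scipy.optimize import linprog

c = [2, 3]                 # objective coefficients (costs)
A_ub = [[-1, -1]]          # x1 + x2 >= 10 rewritten as -x1 - x2 <= -10
b_ub = [-10]
bounds = [(0, 8), (0, 8)]  # capacity limits on each decision variable

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(result.x, result.fun)  # optimal allocation and its total cost
```

Integer programming and meta-heuristics follow the same shape of problem but need dedicated solvers (e.g., branch-and-bound based tooling) rather than plain linprog.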
experience leading data or platform teams in a production environment. Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines. Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, DBT or similar. Familiarity with data warehousing, ETL/ELT processes, and analytics engineering. Programming proficiency in Python, Scala or Java. Experience operating in a More ❯
years of experience in software or data engineering, with at least 3+ years in a leadership role. Proven track record building and scaling data platforms using tools such as Spark, Kafka, Airflow, Snowflake, Databricks, or similar. Strong grasp of data architecture, ETL/ELT, data modeling, and the broader big data ecosystem. Excellent leadership, communication, and stakeholder management skills. More ❯
Snowflake platform solutions ● Strong understanding of end-to-end data architecture, including ETL/ELT, data modeling, and business intelligence tools ● Experience with SQL, Python, Java, and/or Spark in data engineering or analytics contexts ● Familiarity with large-scale data warehouse technologies (e.g., Snowflake, Teradata, Greenplum, Netezza, etc.) ● Expertise in data security design and access control, particularly within More ❯
East London, London, United Kingdom Hybrid / WFH Options
McGregor Boyall Associates Limited
Master's/PhD in Computer Science, Data Science, Mathematics, or related field. 5+ years of experience in ML modeling, ranking, or recommendation systems. Proficiency in Python, SQL, Spark, PySpark, TensorFlow. Strong knowledge of LLM algorithms and training techniques. Experience deploying models in production environments. Nice to Have: Experience in GenAI/LLMs. Familiarity with distributed … computing tools (Hadoop, Hive, Spark). Background in banking, risk management, or capital markets. Why Join? This is a unique opportunity to work at the forefront of AI innovation in financial services. If you're ready to apply cutting-edge ML & GenAI techniques to complex business challenges, we'd love to hear from you! McGregor Boyall is More ❯
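As a small illustration of the recommendation-systems experience this role asks for, here is a minimal collaborative-filtering sketch using Spark MLlib's ALS. The ratings data and column names are placeholder assumptions, not anything specific to the employer.

```python
# Minimal collaborative-filtering sketch with Spark MLlib's ALS.
# The ratings data is a tiny in-memory placeholder.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-demo").getOrCreate()

ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 2.0)],
    ["user_id", "item_id", "rating"],
)

als = ALS(
    userCol="user_id",
    itemCol="item_id",
    ratingCol="rating",
    rank=8,
    coldStartStrategy="drop",  # drop users/items unseen at training time
)
model = als.fit(ratings)

# Top-3 item recommendations per user.
model.recommendForAllUsers(3).show(truncate=False)
```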
South East London, England, United Kingdom Hybrid / WFH Options
Anson McCade
Looker. • Knowledge of CI/CD processes and infrastructure-as-code. • Eligible for SC clearance (active clearance highly advantageous). Desirable Skills • Exposure to large data processing frameworks (e.g., Spark, Hadoop). • Experience deploying data via APIs and microservices. • AWS certifications (Solution Architect Associate, Data Analytics Speciality, etc.). • Experience in public sector programmes or government frameworks. Package & Benefits More ❯
concepts in ML, data science and MLOps. Nice-to-Have: Built agentic workflows/LLM tool-use. Experience with MLflow, WandB, LangFuse, or other MLOps tools. Experience with Airflow, Spark, Kafka or similar. Why Plexe? Hard problems: we're automating the entire ML/AI lifecycle from data engineering to insights. High ownership: first 5 engineers write the culture More ❯
Substantial experience using tools for statistical modelling of large data sets. Some familiarity with data workflow management tools such as Airflow, as well as big data technologies such as Apache Spark or other caching and analytics technologies. Expertise in model training, statistics, model evaluation, deployment and optimisation, including RAG-based architectures. More ❯
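For the RAG-based architectures mentioned, below is an illustrative sketch of just the retrieval step: ranking candidate passages by cosine similarity against a query embedding. The embedding model itself is out of scope here, so the vectors are treated as assumed inputs.

```python
# Illustrative retrieval step for a RAG-style setup: rank candidate passage
# embeddings by cosine similarity to a query embedding. Producing the
# embeddings (query_vec, passage_vecs) is assumed to happen elsewhere.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, passage_vecs: list, k: int = 3) -> list:
    scores = [cosine_sim(query_vec, p) for p in passage_vecs]
    # Indices of the k most similar passages, best first; these passages
    # would then be inserted into the prompt sent to the generator model.
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```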
Building reliable, scalable, and flexible systems. Influence opinion and decision-making across AI and ML. Skills: Python; SQL/Pandas/Snowflake/Elasticsearch; Docker/Kubernetes; Airflow/Spark; familiarity with GenAI models/libraries. Requirements: 4+ years of relevant software engineering experience post-graduation. A degree (ideally a Master’s) in Computer Science, Physics, Mathematics, or any More ❯
We are seeking a skilled Machine Learning Developer with expertise in Spark ML, predictive modeling, and deploying training and inference pipelines on distributed systems such as Hadoop. The ideal candidate will design, implement, and optimize machine learning solutions for large-scale data processing and predictive analytics. London. Long Term Contract. Rate: 550-600 pd. Hybrid - Max Three Days in … the office. Responsibilities: Develop and implement machine learning models using Spark ML for predictive analytics. Design and optimize training and inference pipelines for distributed systems (e.g., Hadoop). Process and analyze large-scale datasets to extract meaningful insights and features. Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines. Evaluate model performance and fine … and batch inference. Monitor and troubleshoot deployed models to ensure reliability and performance. Stay updated with advancements in machine learning frameworks and distributed computing technologies. Required Skills: Proficiency in Apache Spark and Spark MLlib for machine learning tasks. Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering). Experience with distributed systems like Hadoop for data
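To illustrate the kind of Spark ML work this role describes, here is a minimal predictive-modelling pipeline sketch in PySpark. The toy data, feature names, and choice of logistic regression are assumptions for the example rather than details of the actual engagement.

```python
# Minimal Spark ML pipeline sketch: assemble features, fit a classifier,
# and score predictions. Data, columns, and model choice are illustrative
# placeholders; a real project would use a proper train/test split.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("spark-ml-demo").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 0.2, 0), (0.3, 0.9, 1), (1.2, 0.1, 0), (0.2, 1.1, 1)],
    ["f1", "f2", "label"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])

model = pipeline.fit(df)
predictions = model.transform(df)

# Area under the ROC curve on the (toy) data used for fitting.
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(predictions)
print(f"AUC: {auc:.3f}")
```

The same fitted PipelineModel can be persisted and reloaded for batch or real-time inference, which is the deployment pattern the responsibilities above point to.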
on cutting-edge blockchain data infrastructure in a highly dynamic environment. What We're Looking For: 5+ years of engineering experience. Strong expertise in data engineering: proficient in Python and Spark, with advanced SQL skills. Proven experience with data ingestion pipelines. Hands-on with cloud platforms – ideally AWS. Solid experience working with Big Data and time series databases. You’ll be More ❯
South East London, England, United Kingdom Hybrid / WFH Options
Humand Talent
under pressure Nice to Have (But Not Dealbreakers) If you’ve worked with any of the following, that’s a bonus—but not a requirement: Big data tools (e.g., Spark, Databricks, Elasticsearch) Salesforce Marketing Cloud or campaign analytics Google Analytics (and ideally BigQuery) Alteryx or similar BI/automation tools A previous role in digital entertainment, sports, or consumer More ❯
Engineer £85,000 Remote (travel into London a few times a year) My client is looking for an individual with the following (practical working experience needed): Python (PySpark), Apache Spark, Kafka (event-driven architecture). **Please note, my client is only looking for candidates based within the UK** Get in touch now More ❯
an excellent understanding of data analytics. The Client would also like to see experience of managing and leading a team of Data Scientists. Should have experience of Scala/Spark and Hadoop. Initially this is a 3-month contract assignment in Canary Wharf, with the likelihood that it will go on beyond that point. Location is Canary Wharf, London. Please More ❯
Security, Computer Science, or related fields. Possess both an offensive and a defensive security mindset, with an understanding of mainstream security threats and defense strategies. Proficient in data analysis tools such as SQL, Spark, and Python. Excellent project management skills; experience in large-scale internet operations, microservice architecture, big data analysis, and security compliance is a plus. More ❯