London (City of London), South East England, United Kingdom
Morela Solutions
on experience or strong interest in working with Foundry as a core platform Forward-Deployed Engineering – delivering real-time solutions alongside users and stakeholders Broader Skillsets of Interest: Python & PySpark – for data engineering and workflow automation Platform Engineering – building and maintaining scalable, resilient infrastructure Cloud (AWS preferred) – deploying and managing services in secure environments Security Engineering & Access Control – designing More ❯
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Vermelo RPO
with deploying services in Docker and Kubernetes Experience in creating production-grade code and applying SOLID programming principles, including test-driven development (TDD) approaches Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL) Experience in source-control software, e.g. GitHub Proficient at communicating results concisely, both verbally and in writing Experience in data and model monitoring is More ❯
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
with deploying services in Docker and Kubernetes Experience in creating production-grade code and applying SOLID programming principles, including test-driven development (TDD) approaches Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL) Experience in source-control software, e.g. GitHub Proficient at communicating results concisely, both verbally and in writing Experience in data and model monitoring is More ❯
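Several of the listings above ask for test-driven development (TDD) in Python. A minimal sketch of that workflow, using a hypothetical pricing function not drawn from any of the ads: the test is written first and drives the implementation.

```python
def apply_discount(price: float, pct: float) -> float:
    """Return price reduced by pct percent; pct must be in [0, 100]."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

def test_apply_discount():
    # In TDD these assertions exist before apply_discount does; the
    # implementation is the minimum code that makes them pass.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for pct > 100")

test_apply_discount()
```

The red-green-refactor loop then repeats: add a failing assertion for the next behaviour, extend the function until it passes, and tidy up.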
environment. Experience in line management or team leadership, with a track record of developing and supporting engineers. Strong proficiency in: AWS (Lambda, Glue, ECS, S3, etc.). Python and PySpark (data pipelines, APIs, automation). TypeScript and React (frontend development). Excellent communication and stakeholder management skills. Demonstrated expertise in technical design and architecture of distributed systems. Familiarity with More ❯
data architecture, data modelling, and big data platforms. Proven expertise in Lakehouse Architecture, particularly with Databricks. Hands-on experience with tools such as Azure Data Factory, Unity Catalog, Synapse, PySpark, Power BI, SQL Server, Cosmos DB, and Python. In-depth knowledge of data governance frameworks and best practices. Solid understanding of cloud-native architectures and microservices in data environments. More ❯
Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid / WFH Options
Littlefish
roles. Ability to manage multiple projects and priorities in a fast-paced environment. Experience with SSRS/SSAS (Tabular with DAX & OLAP with MDX)/SSIS. Experience with Databricks, PySpark, and other data science tools. Experience of using Azure DevOps/Git. Microsoft Certified on Data (e.g. Fabric) This is a client-facing role, so the following are essential More ❯
understanding of AI/ML/DL and Statistics, as well as coding proficiency using related open source libraries and frameworks. Significant proficiency in SQL and languages like Python, PySpark and/or Scala. Able to lead, work independently, and play a key role in a team. Good communication and interpersonal skills for working in a multicultural work More ❯
individuals across 100 countries and has a reach of 600 million users, is recruiting an MLOps Engineer who has Chatbot (Voice) integration project experience using Python, PyTorch, PySpark and AWS LLM/Generative AI. Our client is paying £400 PD Outside IR35 to start ASAP for an initial 6-month contract on a hybrid basis based near Stratford More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Sanderson
current Solution - Identify where possible and if possible, the solution works for Argos & Nectar Products. Experience Required: MTA (Multi-Touch Attribution) and MMM (Marketing Mix Modelling) Experience Python SQL PySpark Cloud Technology - Ideally AWS or Azure Machine Learning Experience Accumetric Metrics Experience of working with multi-functional teams Sanderson is committed to barrier-free and inclusive recruitment. We are More ❯
Manchester, North West, United Kingdom Hybrid / WFH Options
Gerrard White
Knowledge of the technical differences between different packages for some of these model types would be an advantage. Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL) A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science) Experience of WTW's Radar software is preferred Proficient at communicating results in a concise More ❯
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
in building scalable data solutions that empower market readiness. 3 months initial contract Remote working ( UK based) Inside IR35 Responsibilities Design, develop, and maintain data pipelines using Palantir Foundry, PySpark, and TypeScript. Collaborate with cross-functional teams to integrate data sources and ensure data quality and consistency. Implement robust integration and unit testing strategies to validate data workflows. Engage More ❯
Spark - Must Have Scala - Must Have, hands-on coding Hive & SQL - Must Have Note: Please screen the profile before interview. The candidate must know the Scala coding language; a PySpark profile will not help here. The interview includes a coding test. Job Description: Scala/Spark Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL, Linux More ❯
predictive modelling techniques; Logistic Regression, GBMs, Elastic Net GLMs, GAMs, Decision Trees, Random Forests, Neural Nets and Clustering Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL) A good quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science) Experience of WTW's Radar software is preferred Proficient at communicating results in a concise More ❯
Bracknell, Berkshire, United Kingdom Hybrid / WFH Options
Teksystems
Provide hands-on support for production environments, ensuring the stability and performance of data workflows. Troubleshoot and resolve issues related to data pipelines and integrations built using Palantir Foundry, PySpark, and TypeScript. Collaborate with engineering and business teams to understand requirements and deliver timely solutions. Support and improve continuous integration (CI) processes to streamline deployment and reduce downtime. Communicate More ❯
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala (minimum of 2). Extensive Data Engineering hands-on experience (coding, configuration, automation, delivery, monitoring, security). ETL Tools such as Azure Data Factory (ADF) and Databricks or … UK, and you MUST have the Right to Work in the UK long-term without the need for Company Sponsorship. KEYWORDS Senior Data Engineer, Coding Skills, Spark, Java, Python, PySpark, Scala, ETL Tools, Azure Data Factory (ADF), Databricks, HDFS, Hadoop, Big Data, Cloudera, Data Lakes, Azure Data, Delta Lake, Data Lake, Databricks Lakehouse, Data Analytics, SQL, Geospatial Data, FME More ❯
role for you. Key Responsibilities: Adapt and deploy a cutting-edge platform to meet customer needs Design scalable generative AI workflows (e.g., using Palantir) Execute complex data integrations using PySpark and similar tools Collaborate directly with clients to understand their priorities and deliver impact Why Join? Be part of a mission-driven startup redefining how industrial companies operate Work More ❯
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant AWS or Azure … the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Factory, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake Please note that due to a high level More ❯
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2) Extensive Data Engineering and Data Analytics hands-on experience Significant AWS hands-on experience Technical Delivery Manager skills Geospatial Data experience (including QGIS … support your well-being and career growth. KEYWORDS Principal Geospatial Data Engineer, Geospatial, GIS, QGIS, FME, AWS, On-Prem Services, Software Engineering, Data Engineering, Data Analytics, Spark, Java, Python, PySpark, Scala, ETL Tools, AWS Glue. Please note, to be considered for this role you MUST reside/live in the UK, and you MUST have the Right to Work More ❯
Employment Type: Temporary
Salary: £80000 - £500000/annum Pension, Good Holiday, Insurances
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, possess a clear understanding of Databricks, as well as a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and No … Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data More ❯
Central London, London, United Kingdom Hybrid / WFH Options
iDPP
someone who enjoys building scalable data solutions while staying close to business impact. The Role As a Data Analytics Engineer, you'll design, build, and maintain reliable data pipelines, primarily using PySpark, SQL, and Python, to ensure business teams (analysts, product managers, finance, operations) have access to well-modeled, actionable data. You'll work closely with stakeholders to translate business needs into … spend more time coding, managing data infrastructure, and ensuring pipeline reliability. Who We're Looking For Data Analytics: Analysts who have strong experience building and maintaining data pipelines (particularly in PySpark/SQL) and want to work on production-grade infrastructure. Data Engineering: Engineers who want to work more closely with business stakeholders and enable analytics-ready data solutions. Analytics … Professionals who already operate in this hybrid space, with proven expertise across big data environments, data modeling, and business-facing delivery. Key Skills & Experience Strong hands-on experience with PySpark, SQL, and Python Proven track record of building and maintaining data pipelines Ability to translate business requirements into robust data models and solutions Experience with data validation, quality checks More ❯
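The "data validation, quality checks" responsibility mentioned above can be illustrated in plain Python; the column names and rules here are hypothetical, not taken from the listing.

```python
def check_quality(rows):
    """Return a list of (row_index, problem) tuples for rows failing checks."""
    problems = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Completeness and uniqueness checks on the key column.
        if row.get("id") is None:
            problems.append((i, "missing id"))
        elif row["id"] in seen_ids:
            problems.append((i, "duplicate id"))
        else:
            seen_ids.add(row["id"])
        # Type and range check on a numeric column.
        if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            problems.append((i, "invalid amount"))
    return problems

rows = [
    {"id": 1, "amount": 10.5},
    {"id": 1, "amount": 3.0},   # duplicate id
    {"id": 2, "amount": -4},    # negative amount
    {"id": None, "amount": 7},  # missing id
]
print(check_quality(rows))
# → [(1, 'duplicate id'), (2, 'invalid amount'), (3, 'missing id')]
```

In a production pipeline the same rules would typically run as PySpark column expressions or a dedicated data-quality framework, with failures routed to monitoring rather than printed.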
business challenges by adapting and deploying the STRATOS platform to their needs Design and implement scalable generative AI workflows using platforms like Palantir AIP Execute complex data integration using PySpark and other distributed technologies Collaborate directly with clients to understand context, priorities, and key outcomes Requirements Minimum 2-4 years of solid experience in data engineering or analytics. Experience … scaling tech company, alternatively experience at a top-tier consultancy. The ability to translate complex and sometimes ambiguous business requirements into clean and maintainable data pipelines Excellent knowledge of PySpark, Python and SQL fundamentals The ability to get to grips with new technologies quickly What's Nice to Have Experience in dashboarding tools, TypeScript and API development Familiarity with More ❯
with a focus on performance, scalability, and reliability. Responsibilities Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks Develop scalable ETL processes using PySpark and Python Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation Ensure data quality, governance, and performance throughout the migration lifecycle Document technical processes … and support knowledge transfer to internal teams Required Skills Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL Proven track record in delivering data migration projects within Azure environments Ability to work independently and communicate effectively with technical and non-technical stakeholders Previous experience in consultancy or client-facing roles is advantageous More ❯
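The legacy-to-target "mapping and transformation" step this migration role describes can be sketched in plain Python; the legacy field names and target schema below are hypothetical, and in practice the same logic would run as PySpark transformations inside Data Factory or Databricks.

```python
# Hypothetical mapping from legacy column names to the target schema.
LEGACY_TO_TARGET = {
    "cust_nm": "customer_name",
    "acct_no": "account_id",
    "bal_amt": "balance",
}

def migrate_row(legacy_row: dict) -> dict:
    """Map a legacy record onto the target schema, coercing types."""
    target = {new: legacy_row.get(old) for old, new in LEGACY_TO_TARGET.items()}
    # Example transformation: balances arrive as strings in the legacy extract.
    if target["balance"] is not None:
        target["balance"] = float(target["balance"])
    return target

print(migrate_row({"cust_nm": "Acme Ltd", "acct_no": "A-17", "bal_amt": "120.50"}))
# → {'customer_name': 'Acme Ltd', 'account_id': 'A-17', 'balance': 120.5}
```

Keeping the mapping in one declarative table makes the stakeholder review of legacy-to-target correspondences (mentioned in the responsibilities) straightforward, and the per-row function is easy to wrap in a PySpark UDF or `Row`-level transform later.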