Nursling, Southampton, Hampshire, England, United Kingdom Hybrid / WFH Options
Ordnance Survey
lead. Support the Ordnance Survey Testing Community with common standards such as metrics and use of test tools. Here is a snapshot of the technologies that we use: Scala, Apache Spark, Databricks, Apache Parquet, YAML, Azure Cloud Platform, Azure DevOps (Test Plans, Backlogs, Pipelines), Git, GeoJSON. What we're looking for: Highly skilled in creating, maintaining and …
in data engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands …
South East London, London, United Kingdom Hybrid / WFH Options
TEN10 SOLUTIONS LIMITED
Driven Projects: Collaborate on exciting and complex data pipelines, platforms, and automation solutions across industries including finance, retail, and government. Cutting-Edge Tech: Work hands-on with tools like Spark, Databricks, Azure Deequ, and more. Career Development: We invest in your growth with dedicated training, mentoring, and support for certifications. Collaborative Culture: Join a diverse, inclusive team that thrives … you'll do: Maintain test automation frameworks tailored for data-intensive environments. Implement validation tests for data pipelines, data quality, and data transformation logic. Use tools like Azure Deequ, Spark, and Databricks to ensure data accuracy and completeness. Write robust, scalable test scripts in Scala, Python, and Java. Integrate testing into CI/CD pipelines and support infrastructure … and data validation techniques. Experience using test automation frameworks for data pipelines and ETL workflows. Strong communication and stakeholder management skills. Nice-to-Have: Hands-on experience with Databricks, Apache Spark, and Azure Deequ. Familiarity with Big Data tools and distributed data processing. Experience with data observability and data quality monitoring. Proficiency with CI/CD tools …
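The validation tests described above — completeness, uniqueness, and range checks of the kind Deequ automates over Spark DataFrames — can be sketched in plain Python. This is a minimal illustration, not the listing's actual framework; the dataset, column names, and thresholds are invented for the example:

```python
# Minimal sketch of pipeline data-quality checks (completeness,
# uniqueness, value range) of the kind a Deequ/Spark job automates.
# Dataset, columns, and rule names are illustrative assumptions.

def check_completeness(rows, column):
    """Fraction of rows where `column` is present and non-null."""
    non_null = sum(1 for r in rows if r.get(column) is not None)
    return non_null / len(rows) if rows else 0.0

def check_uniqueness(rows, column):
    """True if every non-null value of `column` is distinct."""
    values = [r[column] for r in rows if r.get(column) is not None]
    return len(values) == len(set(values))

def check_range(rows, column, lo, hi):
    """True if every non-null value of `column` lies in [lo, hi]."""
    return all(lo <= r[column] <= hi
               for r in rows if r.get(column) is not None)

# Toy records standing in for one partition of a pipeline output table.
rows = [
    {"id": 1, "price": 9.99},
    {"id": 2, "price": 12.50},
    {"id": 3, "price": None},   # incomplete row the checks should flag
]

report = {
    "id_complete": check_completeness(rows, "id"),        # 1.0
    "id_unique": check_uniqueness(rows, "id"),            # True
    "price_complete": check_completeness(rows, "price"),  # 2/3
    "price_in_range": check_range(rows, "price", 0, 100), # True
}
```

In a real Databricks/Deequ setup these rules would be declared as constraints and evaluated distributed across the cluster; the point here is only the shape of the checks a data-quality test suite asserts.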
in Computer Science, Data Science, Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., MySQL, PostgreSQL, MongoDB …
Catalog). Familiarity with Data Mesh, Data Fabric, and product-led data strategies. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Technical Skills: Proficiency in big data tools (Apache Spark, Hadoop). Programming knowledge (Python, R, Java) is a plus. Understanding of ETL/ELT, SQL, NoSQL, and data visualisation tools. Awareness of ML/AI integration …
Agile projects Skills & Experience: Proven experience as a Lead Data Solution Architect in consulting environments Expertise in cloud platforms (AWS, Azure, GCP, Snowflake) Strong knowledge of big data technologies (Spark, Hadoop), ETL/ELT, and data modelling Familiarity with Python, R, Java, SQL, NoSQL, and data visualisation tools Understanding of machine learning and AI integration in data architecture Experience …
ve built scalable backend systems and APIs (RESTful/GraphQL), with solid experience in microservices and databases (SQL/NoSQL). You know your way around big data tools (Spark, Dask) and orchestration (Airflow, DBT). You understand NLP and have experience working with Large Language Models. You're cloud-savvy (AWS, GCP, or Azure) and comfortable with containerization …
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Henderson Drake
e.g., Azure Data Factory, Synapse, Databricks, Fabric) Data warehousing and lakehouse design ETL/ELT pipelines SQL, Python for data manipulation and machine learning Big Data frameworks (e.g., Hadoop, Spark) Data visualisation (e.g., Power BI) Understanding of statistical analysis and predictive modelling Experience: 5+ years working with Microsoft data platforms 5+ years in a customer-facing consulting or professional …
management and monitoring. Hands-on experience with AWS. Have a good grasp of IaC (Infrastructure-as-Code) tools like Terraform and CloudFormation. Previous exposure to additional technologies like Python, Spark, Docker, Kubernetes is desirable. Ability to develop across a diverse technology stack and willingness and ability to take on new technologies. Demonstrated experience participating on cross-functional teams in …
Bexhill-on-sea, Sussex, United Kingdom Hybrid / WFH Options
Hastings Direct
drive projects from initiation to completion. Keen interest in emerging Machine Learning techniques. Desirable: Experience with Cloud deployments (e.g. Azure/AWS/GCP) and data processing frameworks (e.g. Apache Spark). The interview process: Our interview process involves the below: Recruiter screening call; 1st stage interview - intro with Hiring Leader; 2nd interview - case study round with hiring …
looking for a Senior Data Scientist to help drive innovation and the usage of machine learning technology to support key business use cases, leveraging the Azure platform, Python, Spark, GenAI/LLMs, Databricks, SQL, DevOps and geospatial data knowledge to create end-to-end products. Ideally, you'll have experience from within the energy industry. …
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
time, taking project briefs and refining them to strong results. Exposure to the Python data science stack. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with Databricks, Spark and geospatial data/modelling are a plus. Interview process (subject to change): 1) 15-minute conversation with Talent Team; 2) 30-minute interview with hiring manager; 3) You'll be …
London, South East, England, United Kingdom Hybrid / WFH Options
Method Resourcing
a plus). Experience with model lifecycle management (MLOps), including monitoring, retraining, and model versioning. Ability to work across data infrastructure, from SQL to large-scale distributed data tools (Spark, etc.). Strong written and verbal communication skills, especially in cross-functional contexts. Bonus Experience (Nice to Have): Exposure to large language models (LLMs) or foundational model adaptation. Previous …
Pfizer Ltd, Walton Oaks, Dorking Road, Tadworth, Surrey, England
Cogent Ssc Limited
to apply for other positions within the business: Integrated Insights & Strategy Manager role; Data Science Manager role. Apprenticeship Standard: Artificial intelligence (AI) data specialist (level 7). Training Provider: CAMBRIDGE SPARK LIMITED. Working Week: Monday to Thursday, 9am - 5.25pm with 45-minute lunch break; Fridays, 9am - 4.05pm with 45-minute lunch break. Expected Duration: 1 Year 9 Months. Positions Available …
Terraform. Experience with observability stacks (Grafana, Prometheus, OpenTelemetry). Familiarity with Postgres. Interest in data-privacy, AdTech/MarTech or large-scale data processing. Familiarity with Kafka, gRPC or Apache Spark. As well as working as part of an amazing, engaging and collaborative team, we offer our staff a wide range of benefits to motivate them to be the …
equivalent performance management tools. Experience with one or more of the following data tools: Tableau, Foresight, GCP or SQL. The other stuff we are looking for: Shell scripting, Python, Spark, Hive, NiFi, Hortonworks/Cloudera DataFlow, HDFS. What's in it for you: Our goal is to celebrate our people, their lives and everything in-between. We aim to …
join on a contract basis to support major digital transformation projects with Tier 1 banks. You'll help design and build scalable, cloud-based data solutions using Databricks, Python, Spark, and Kafka, working on both greenfield initiatives and enhancing high-traffic financial applications. Key Skills & Experience: Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and …
East Horsley, Surrey, United Kingdom Hybrid / WFH Options
Actica Consulting Limited
scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet the … Collaborative, team-based development; Cloud analytics platforms, e.g. relevant AWS and Azure platform services; Data tools: hands-on experience with Palantir (essential); Data science approaches and tooling, e.g. Hadoop, Spark; Data engineering approaches; Database management, e.g. MySQL, Postgres; Software development methods and techniques, e.g. Agile methods such as Scrum; Software change management, notably familiarity with Git; Public sector best …
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
ability to explain technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant … you MUST have the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS: Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Fabric, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake. Please note that due to …
Employment Type: Permanent
Salary: £75000 - £80000/annum Pension, Good Holiday, Healthcare
. Manage and monitor the cost, efficiency, and speed of data processing. Our Data Tech Stack: Azure Cloud (SQL Server, Databricks, Cosmos DB, Blob Storage); ETL/ELT (Python, Spark, SQL); Messaging (Service Bus, Event Hub); DevOps (Azure DevOps, GitHub Actions, Terraform). Who you are: A driven, ambitious individual who’s looking to build their career at an exciting … building and maintaining robust and scalable data pipelines. Proficiency in ELT and ETL processes and tools. Ability to write efficient code for data extraction, transformation, and loading (e.g. Python, Spark and SQL). Proficiency with cloud platforms (particularly Azure Databricks and SQL Server). Ability to work independently. Ability to communicate complex technical concepts clearly to both technical and non-technical …
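As a rough illustration of the extraction/transformation/loading work this stack describes, here is a minimal ETL pass in plain Python. It is a sketch only — in the role this would run as Spark on Azure Databricks against SQL Server and Event Hub; the record shape, cleaning rules, and in-memory "warehouse" are invented for the example:

```python
# Minimal ETL sketch: extract raw records, transform (clean, validate,
# normalise), load into a keyed store. All shapes and rules here are
# illustrative assumptions, not the employer's actual pipeline.

raw_events = [  # "extract": stand-in for a Blob Storage / Event Hub read
    {"user": " alice ", "amount": "12.50", "currency": "GBP"},
    {"user": "bob",     "amount": "3.20",  "currency": "gbp"},
    {"user": "",        "amount": "x",     "currency": "GBP"},  # bad row
]

def transform(record):
    """Normalise one raw record; return None if it fails validation."""
    user = record["user"].strip()
    try:
        amount = float(record["amount"])
    except ValueError:
        return None          # unparseable amount: drop the record
    if not user:
        return None          # missing user: drop the record
    return {"user": user, "amount": amount,
            "currency": record["currency"].upper()}

def load(records, table):
    """Append cleaned records to an in-memory 'table' keyed by user."""
    for rec in records:
        table.setdefault(rec["user"], []).append(rec)
    return table

cleaned = [t for r in raw_events if (t := transform(r)) is not None]
warehouse = load(cleaned, {})
# The malformed third record is dropped; "alice" and "bob" are loaded
# with trimmed names, numeric amounts, and upper-cased currency codes.
```

The same extract/transform/load split maps directly onto a Spark job: the raw list becomes a DataFrame read, `transform` becomes column expressions or a UDF, and `load` becomes a write to Delta or SQL Server.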
cross-functional teams to manage complex ETL processes, implement best practices in code management, and ensure seamless data flow across platforms. Projects may include connecting SharePoint to Databricks, optimising Spark jobs, and managing GitHub-based code promotion workflows. This is a hybrid role based in London, with 1-2 days per week in the office. What You'll Need … to Succeed: You'll bring 5+ years of data engineering experience, with expert-level skills in Python and/or Scala, SQL, and Apache Spark. You're highly proficient with Databricks and Databricks Asset Bundles, and have a strong understanding of data transformation best practices. Experience with GitHub, DBT, and handling large structured and unstructured datasets is essential. You …