teams as part of a wider trading project. The initial work on the project will involve abstracting code from these product teams into a shared, common Python library leveraging PySpark/DataFrames. You will then serve as an extension of these product teams, building microservices and libraries to solve the common needs. Skills: • Experience with Unit Testing • Preferably More ❯
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Vermelo RPO
ideally with a focus on Motor. Experience and detailed technical knowledge of GLMs/Elastic Nets, GBMs, GAMs, Random Forests, and clustering techniques Experience in programming languages (e.g. Python, PySpark, R, SAS, SQL) Proficient at communicating results in a concise manner, both verbally and in writing Behaviours: Motivated by technical excellence Team player Self-motivated with a drive to learn More ❯
London, South East, England, United Kingdom Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
Proven experience as a Programme or Delivery Manager on data-centric programmes Solid understanding of data ingestion processes and Snowflake data warehousing Familiarity with AWS Glue, S3, DBT, SnapLogic, PySpark (not hands-on, but able to converse technically) Strong governance and delivery background in a data/tech environment Excellent communication and stakeholder management skills (must be assertive) Pharma More ❯
across various platforms. This position is essential for ensuring the integrity, reliability, and accessibility of our data, which supports critical business decisions and drives insights. **Required Skills** - **Proficiency in PySpark and AWS:** You should have a strong command of both PySpark for data processing and AWS (Amazon Web Services) for cloud-based solutions. - **ETL Pipeline Development:** Demonstrated experience … ETL (Extract, Transform, Load) pipelines is crucial. You will be responsible for moving and transforming data from various sources to data warehouses. - **Programming Expertise:** A solid understanding of Python, PySpark, and SQL is required to manipulate and analyze data efficiently. - **Knowledge of Spark and Airflow:** In-depth knowledge of Apache Spark for big data processing and Apache Airflow for More ❯
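For illustration, below is a minimal sketch of the kind of PySpark ETL step a role like this involves: reading raw data from S3, cleansing it, and writing a curated layer. The bucket names, paths and column names are hypothetical, and S3 access is assumed to be configured.

```python
# Minimal ETL sketch in PySpark; all buckets, paths and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON landed in an S3 bucket
raw = spark.read.json("s3a://example-raw-bucket/orders/")

# Transform: deduplicate, enforce types, drop bad rows
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: partitioned Parquet for the warehouse layer
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-curated-bucket/orders/"
)
```

An orchestrator such as Airflow would typically schedule a job like this as one task in a larger DAG.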
Birmingham, West Midlands, England, United Kingdom Hybrid / WFH Options
Client Server Ltd
Data Software Engineer (Python/PySpark) Remote UK to £95k Are you a data-savvy Software Engineer with strong Python coding skills? You could be progressing your career in a senior, hands-on Data Software Engineer role as part of a friendly and supportive international team at a growing and hugely successful European car insurance tech company as they expand … on your location/preferences. About you: You are degree-educated in a relevant discipline, e.g. Computer Science, Mathematics You have a software engineering background with advanced Python and PySpark coding skills You have experience in batch, distributed data processing and near real-time streaming data pipelines with technologies such as Kafka You have experience of Big Data Analytics More ❯
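As a rough illustration of the near real-time streaming requirement above, here is a minimal PySpark Structured Streaming read from Kafka. The broker address and topic are placeholders, and the Kafka connector package is assumed to be available on the cluster.

```python
# Illustrative Structured Streaming job: Kafka topic -> console sink.
# Broker and topic are placeholders; the spark-sql-kafka-0-10
# connector package is assumed to be on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "quotes")
          .load())

# Kafka delivers keys/values as bytes; cast the value to a string
parsed = events.select(F.col("value").cast("string").alias("json_value"))

query = (parsed.writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()
```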
London, South East, England, United Kingdom Hybrid / WFH Options
Involved Solutions
Key Responsibilities - Azure Data Engineer: Design, build and maintain scalable and secure data pipelines on the Azure platform. Develop and deploy data ingestion processes using Azure Data Factory, Databricks (PySpark), and Azure Synapse Analytics. Optimise ETL/ELT processes to improve performance, reliability and efficiency. Integrate multiple data sources including Azure Data Lake (Gen2), SQL-based systems and APIs. … GDPR and ISO standards). Required Skills & Experience - Azure Data Engineer: Proven commercial experience as a Data Engineer delivering enterprise-scale solutions in Azure Azure Data Factory Azure Databricks (PySpark) Azure Synapse Analytics Azure Data Lake Storage (Gen2) SQL & Python Understanding of CI/CD in a data environment, ideally with tools like Azure DevOps. Experience working within consultancy More ❯
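As a sketch of the kind of ingestion step described above, the following Databricks (PySpark) snippet lands CSV files from Azure Data Lake Gen2 into a Delta table. The storage account, container names and schema are assumptions, and `spark` is the session Databricks provides.

```python
# Illustrative Databricks ingestion: ADLS Gen2 CSV -> Delta table.
# Storage account, containers and columns are hypothetical.
from pyspark.sql import functions as F

source_path = "abfss://landing@examplestorage.dfs.core.windows.net/sales/"
target_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales_delta/"

df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(source_path))

(df.withColumn("ingested_at", F.current_timestamp())
   .write
   .format("delta")
   .mode("append")
   .save(target_path))
```

In a pipeline like the one described, Azure Data Factory would typically trigger this as a Databricks notebook activity on a schedule.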
In detail, the position encompasses duties and responsibilities as follows: An experienced Data Engineer is required for the Surveillance IT team to develop ingestion pipelines and frameworks across the application portfolio, supporting Trade Surveillance analysts with strategy and decision-making. More ❯
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant AWS or Azure … the Right to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS Lead Data Engineer, Senior Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, Cloud, On-Prem, ETL, Azure Data Fabric, ADF, Hadoop, HDFS, Azure Data, Delta Lake, Data Lake Please note that due to a high level More ❯
Employment Type: Permanent
Salary: £75000 - £80000/annum Pension, Good Holiday, Healthcare
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
technical concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2) Extensive Data Engineering and Data Analytics hands-on experience Significant AWS hands-on experience Technical Delivery Manager skills Geospatial Data experience (including QGIS … support your well-being and career growth. KEYWORDS Principal Geospatial Data Engineer, Geospatial, GIS, QGIS, FME, AWS, On-Prem Services, Software Engineering, Data Engineering, Data Analytics, Spark, Java, Python, PySpark, Scala, ETL Tools, AWS Glue. Please note, to be considered for this role you MUST reside/live in the UK, and you MUST have the Right to Work More ❯
The Software Engineer will run, build and work on enterprise-grade software systems using a modern tech stack including PySpark with Databricks for data engineering tasks, infrastructure as code with AWS CDK and GraphQL. As a Software Engineer, you are expected to work with architects to design clean, decoupled solutions; create automated tests in support of continuous delivery; adopt … scientific degree or equivalent professional experience. Some professional working experience (more if no relevant degree). OO and functional programming experience, design patterns, SOLID principles. Experience in Python, PySpark and/or SQL is preferred. Experience with scrum, TDD, BDD, Pairing, Pull Requests, Continuous Integration & Delivery. Continuous Integration tools - Github, Azure DevOps, Jenkins or similar. Infrastructure as code More ❯
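To illustrate the test-first style this role describes (TDD with automated tests supporting continuous delivery), here is a minimal pytest sketch around a hypothetical PySpark transformation; the function, columns and business rule are invented for the example.

```python
# Test-first sketch for a PySpark transformation; names are hypothetical.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_vat(df, rate=0.2):
    """Append a gross amount column; the rule here is illustrative only."""
    return df.withColumn("gross", F.round(F.col("net") * (1 + rate), 2))

@pytest.fixture(scope="session")
def spark():
    # Local session so the suite runs without a cluster
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_vat(spark):
    df = spark.createDataFrame([(100.0,)], ["net"])
    result = add_vat(df).collect()[0]
    assert result["gross"] == 120.0
```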
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our … modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, a clear understanding of Databricks, and a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and No … Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data More ❯
Cardiff, South Glamorgan, Wales, United Kingdom Hybrid / WFH Options
Octad Recruitment Consultants (Octad Ltd)
and clients Required Skills & Experience Must-Haves: 3+ years of hands-on Azure engineering experience (IaaS → PaaS), including Infra as Code. Strong SQL skills and proficiency in Python or PySpark. Built or maintained data lakes/warehouses using Synapse, Fabric, Databricks, Snowflake, or Redshift. Experience hardening cloud environments (NSGs, identity, Defender). Demonstrated automation of backups, CI … their Azure data lake using Synapse, Fabric, or an alternative strategy. Ingest data from core platforms: NetSuite, HubSpot, and client RFP datasets. Automate data pipelines using ADF, Fabric Dataflows, PySpark, or SQL. Publish governed datasets with Power BI, enabling row-level security (RLS). By Year-End: Deliver a production-ready lakehouse powering BI and ready for AI More ❯
Data Engineer - AWS, Databricks & PySpark Contract Role - Data Engineer Location: Hybrid (1 day per month onsite in Harrow, London) Rate: £350 per day (Outside IR35) Duration: 6 months A client of mine is looking for a Data Engineer to help maintain and enhance their existing cloud-based data platform. The core migration to a Databricks Delta Lakehouse on AWS … with analysts, data scientists, and business stakeholders to deliver clean, usable datasets - Contribute to good data governance, CI/CD workflows, and engineering standards - Continue developing your skills in PySpark, Databricks, and AWS-based tools Tech Stack Includes: - Databricks (Delta Lake, PySpark) - AWS - CI/CD tooling (Git, DevOps pipelines) - Cloud-based data warehousing and analytics tools If … you're a mid-to-senior-level Data Engineer, feel free to apply or send your CV. Data Engineer - AWS, Databricks & PySpark More ❯
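For context, a minimal sketch of the sort of Delta Lakehouse maintenance work this role involves: an idempotent upsert of staged records into a Delta table using MERGE. The table, path and key names are assumptions, and `spark` is the Databricks-provided session.

```python
# Hypothetical upsert of staged records into a Delta table on Databricks.
from delta.tables import DeltaTable

# Staged changes landed in S3 (path is illustrative)
updates = spark.read.parquet("s3a://example-bucket/staging/customers/")

target = DeltaTable.forName(spark, "lakehouse.customers")

(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()      # refresh existing customers
       .whenNotMatchedInsertAll()   # add new ones
       .execute())
```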
Data Engineer (Databricks) - Leeds (Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer) Our client is a global innovator and world leader with one of the most recognisable names within technology. They are looking for … Data Engineers with significant Databricks experience to join an exceptional Agile engineering team. We are seeking a Data Engineer with strong Python, PySpark and SQL experience, a clear understanding of Databricks, and a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and NoSQL - Aurora, MS SQL Server, MySQL is … top performers. Location: Leeds Salary: £40k - £50k + Pension + Benefits To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer More ❯
Azure Data Engineer - 1/2 days onsite Summary: Join a team building a modern Azure-based data platform. This hands-on engineering role involves designing and developing scalable, automated data pipelines using tools like Data Factory, Databricks, Synapse, and More ❯
make your mark in global asset management and make an impact on the world's investors What you will be responsible for Design, develop and maintain applications using Python, PySpark, and SQL, leveraging cloud-native architecture principles and technologies. Lead the design and development of scalable and robust software application architecture, ensuring alignment with business goals and industry best … tools and technologies to improve our technology stack. Develop oneself into a Subject Matter Expert (SME) on Technical and Functional domain areas. What we value Demonstrated experience in Python, PySpark and SQL (AWS Redshift, Postgres, Oracle). Demonstrated experience building data pipelines with PySpark and AWS. Application development experience in financial services with hands-on designing, developing, and More ❯
Oracle and integrated with Databricks Spark on AWS. Write efficient, production-quality SQL and PL/SQL queries for data extraction and transformation in Oracle. Leverage Databricks Spark and PySpark to process large datasets and build machine learning models in a distributed environment. Collaborate closely with business stakeholders to understand data requirements and translate them into technical solutions. Ensure … data security). Working knowledge of AWS core services, including S3, EC2/EMR, IAM, Athena, Glue or Redshift. Hands-on experience with Databricks Spark on large datasets, using PySpark, Scala, or SQL. Familiarity with Delta Lake, Unity Catalog or similar data lakehouse technologies. Proficient in Linux environments, including experience with shell scripting, basic system operations, and navigating file More ❯
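As a hedged sketch of the Oracle-to-Databricks integration described above, the snippet below reads an Oracle table over JDBC into a Spark DataFrame and lands it in a bronze table. The connection URL, credentials, table and target schema are placeholders, and the Oracle JDBC driver is assumed to be installed on the cluster.

```python
# Illustrative JDBC read from Oracle into Spark on Databricks.
# Connection details, credentials and table names are placeholders.
oracle_df = (spark.read
    .format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB")
    .option("dbtable", "SALES.TRANSACTIONS")
    .option("user", "etl_user")
    .option("password", "******")
    .option("driver", "oracle.jdbc.OracleDriver")
    .option("fetchsize", "10000")   # larger fetches cut round trips on big reads
    .load())

# Persist to a Delta table (assumes a "bronze" schema already exists)
oracle_df.write.format("delta").mode("overwrite").saveAsTable("bronze.transactions")
```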
real urgency, and real interest in doing this properly - not endless meetings and PowerPoints. What you'll be doing: Designing, building, and optimising Azure-based data pipelines using Databricks, PySpark, ADF, and Delta Lake Implementing a medallion architecture - from raw to curated Collaborating with analysts to make data business-ready Applying CI/CD and DevOps best practices (Git … time logistics datasets What they're looking for: A strong communicator - someone who can build relationships and help connect silos Experience building pipelines in Azure using Databricks, ADF, and PySpark Strong SQL and Python skills Bonus points if you've worked with Power BI, Azure Purview, or streaming tools You're versatile - happy to support analysts and wear multiple More ❯
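A minimal sketch of the medallion (raw-to-curated) flow this role mentions, with illustrative paths and logistics-flavoured columns invented for the example; `spark` is the Databricks-provided session.

```python
# Medallion-style flow: bronze (raw) -> silver (validated) -> gold (business-ready).
# All paths and column names are hypothetical.
from pyspark.sql import functions as F

# Bronze: raw events as received
bronze = spark.read.format("delta").load("/mnt/lake/bronze/shipments")

# Silver: deduplicated, validated records
silver = (bronze.dropDuplicates(["shipment_id"])
                .filter(F.col("status").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/shipments")

# Gold: aggregate ready for analysts and Power BI
gold = (silver.groupBy("route")
              .agg(F.avg("transit_hours").alias("avg_transit_hours")))
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/route_performance")
```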
Media help partners understand the changing advertising landscape. Specialising in audience measurement, consumer targeting and in-depth intelligence into paid, owned and More ❯
our machine learning and analytics workloads to support the company's growth. Our data stack: We work with a modern data stack built on Databricks and AWS with Python and PySpark as our primary tools. In this role, you'll get to: Own business-critical components and perform meaningful work with an impact on our company and our customers Design … expand your skillset About you We believe that no one is the finished article; however, some experience in the following is important for this role: Proficient with Python and PySpark Experience working with a modern data stack is beneficial but not required Experience with AWS is beneficial but not required You enjoy learning new technologies and are passionate about More ❯
profiling, ingestion, collation and storage of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and working in an open-source stack (PySpark/PySQL). Quality engineering professionals utilise Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a Data Engineering Manager, you will … Project lead and other team members to provide regular progress updates and raise any risks/concerns/issues Qualification Core skills we're working with include: Palantir, Python, PySpark/PySQL, AWS or GCP What's in it for you At Accenture, in addition to a competitive basic salary, you will also have an extensive benefits package which More ❯