London, South East, England, United Kingdom Hybrid/Remote Options
Hays Specialist Recruitment Limited
…as a Data Engineer with Active Security Clearance (SC).
- Strong Python skills with modular, test-driven design
- Experience with Behave for unit and BDD testing (mocking, patching)
- Proficiency in PySpark and distributed data processing
- Solid understanding of Delta Lake (design and maintenance)
- Hands-on with Docker for development and deployment
- Familiarity with Azure services: Functions, Key Vault, Blob Storage…
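For illustration of the Behave mocking/patching requirement above: a minimal sketch of a BDD step file that stubs out blob access with unittest.mock. The pipeline.ingest module and its functions are hypothetical stand-ins, not anything named in the listing.

```python
# features/steps/ingest_steps.py - minimal Behave sketch.
# pipeline.ingest, download_blob and load_document are invented examples.
from unittest.mock import patch

from behave import given, when, then


@given('a blob named "{name}" containing "{payload}"')
def step_given_blob(context, name, payload):
    context.blob_name, context.payload = name, payload


@when("the ingest step runs")
def step_run(context):
    # Patch the (hypothetical) storage helper so no real Azure call happens.
    with patch("pipeline.ingest.download_blob", return_value=context.payload):
        from pipeline.ingest import load_document  # hypothetical module under test
        context.result = load_document(context.blob_name)


@then('the loaded document equals "{expected}"')
def step_check(context, expected):
    assert context.result == expected
```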
United Kingdom, Wolstanton, Staffordshire Hybrid/Remote Options
Uniting Ambition
…talent in this space. The role: building AI applications based on LLMs and models such as GPT and BERT. You'll make use of Python programming, PySpark, TensorFlow, HuggingFace, LangChain, and RAG techniques, interfacing with diverse data sets, cloud data platforms, and a diverse set of tools for AI app deployment. The opportunity: work at the forefront…
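As a small, hedged illustration of the BERT-style tooling this ad names: loading a fine-tuned BERT-family model through the Hugging Face transformers pipeline API. The model choice is an assumption for the sketch, not one from the listing.

```python
# Minimal sketch: a BERT-family model via the transformers pipeline API.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",  # downloads the named fine-tuned checkpoint
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Building LLM applications is rewarding."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```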
Manchester, Lancashire, England, United Kingdom Hybrid/Remote Options
Lorien
…a blend of the following:
- Strong knowledge of AWS data services (Glue, S3, Lambda, Redshift, etc.)
- Solid understanding of ETL processes and data pipeline management
- Proficiency in Python and PySpark
- Experience working with SQL-based platforms
- Previous involvement in migrating on-premise solutions to the cloud (highly desirable)
- Excellent collaboration skills and the ability to mentor others
The Benefits: Salary…
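To ground the AWS/PySpark ETL pattern listed above: a minimal sketch of a batch job reading raw CSV from S3 and publishing curated Parquet. Bucket names and columns are invented; on AWS Glue this logic would typically sit inside a Glue job script.

```python
# Minimal PySpark ETL sketch of the S3 -> transform -> S3 pattern.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Hypothetical raw zone: header CSV files landed by upstream systems.
orders = spark.read.option("header", True).csv("s3://example-raw/orders/")

daily = (
    orders.withColumn("amount", F.col("amount").cast("double"))
          .groupBy("order_date")
          .agg(F.sum("amount").alias("total_amount"))
)

# Partitioned Parquet for downstream Redshift Spectrum / Athena queries.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/daily_orders/"
)
```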
…high-impact systems. Line management or mentoring experience, with a genuine commitment to team growth and wellbeing. Strong hands-on skills in:
- AWS (or equivalent cloud platforms)
- Python/PySpark for data engineering and automation
- TypeScript, Node.js, React.js for full-stack development
Solid grasp of distributed systems design, secure coding, and data privacy principles. Familiarity with fraud detection models…
Stevenage, Hertfordshire, England, United Kingdom Hybrid/Remote Options
Akkodis
…and NoSQL to AWS cloud. Strong knowledge of ETL processes is essential, including experience with tools such as Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi, or scripting languages like Python, PySpark, and SQL. A solid understanding of data warehousing and modelling techniques, including Star and Snowflake schemas, is required. Ideally, you will also have comprehensive knowledge of AWS Glue.
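For readers unfamiliar with the Star-schema modelling this ad requires: a hedged sketch of a minimal fact/dimension layout and a typical star join, expressed as SQL run through Spark. All table and column names are invented for illustration.

```python
# Minimal star-schema sketch (invented tables) via Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

spark.sql("""
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key BIGINT,      -- surrogate key
    customer_id  STRING,      -- natural/business key
    region       STRING
) USING parquet
""")

spark.sql("""
CREATE TABLE IF NOT EXISTS fact_sales (
    customer_key BIGINT,      -- FK to dim_customer
    date_key     INT,         -- FK to a date dimension
    amount       DECIMAL(12,2)
) USING parquet
""")

# Typical star join: facts aggregated through a dimension attribute.
spark.sql("""
SELECT d.region, SUM(f.amount) AS revenue
FROM fact_sales f
JOIN dim_customer d ON f.customer_key = d.customer_key
GROUP BY d.region
""").show()
```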
London, South East, England, United Kingdom Hybrid/Remote Options
Step 2 Recruitment LTD
…PowerPoint presentations/reports and presenting to clients or colleagues
- Industry experience in the retail banking or wider financial services sector
- Additional technical experience in any of the following: PySpark, Microsoft Azure, VBA, HTML/CSS, JavaScript, jQuery, SQL, PHP, Power Automate, Power BI
What we offer:
- A highly competitive salary
- A genuinely compelling profit share scheme, with the…
…positive change through data. It would be great if you had:
- Experience in the energy or retail sector
- Background in pricing, commercial modelling, credit risk, or debt
- Exposure to PySpark or other big data tools
- Experience with NLP, Generative AI, or advanced predictive modelling
Atherstone, Warwickshire, England, United Kingdom Hybrid/Remote Options
Big Red Recruitment
- Leading the design and implementation of a new Databricks-based data warehousing solution
- Designing and developing data models, ETL pipelines, and data integration processes
- Large-scale data processing using PySpark
- Monitoring, tuning, and optimising data platforms for reliability and performance
- Upskilling the wider team in Databricks best practices, including modern architecture patterns
Location: Atherstone (Hybrid – 3 days office/…
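A hedged sketch of the Databricks/PySpark pattern this role describes: batch-transform a source table and publish it as a Delta table. Table and column names are invented, and a Delta-enabled Spark environment (e.g. Databricks) is assumed.

```python
# Minimal PySpark -> Delta publish step; raw.events is a hypothetical source.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

events = spark.read.table("raw.events")

cleaned = (
    events.dropDuplicates(["event_id"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .filter(F.col("event_ts").isNotNull())
)

# Delta provides ACID writes plus time travel for the warehouse layer.
cleaned.write.format("delta").mode("overwrite").saveAsTable("warehouse.events_clean")
```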
Warwickshire, United Kingdom Hybrid/Remote Options
Big Red Recruitment Midlands Limited
- Leading the design and implementation of a new Databricks-based data warehousing solution
- Designing and developing data models, ETL pipelines, and data integration processes
- Large-scale data processing using PySpark
- Monitoring, tuning, and optimising data platforms for reliability and performance
- Upskilling the wider team in Databricks best practices, including modern architecture patterns
Location: Atherstone (Hybrid – 3 days office/…
Essential Skills & Experience:
- 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks
- Good proficiency in Python and Spark (PySpark) or Scala
- Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns
- Extensive experience with A…
WC2H 0AA, Leicester Square, Greater London, United Kingdom Hybrid/Remote Options
Youngs Employment Services
…from a "fail fast" approach to a more stable and controlled iteration management process. To be considered for the post you'll need all the essential criteria. Essential:
- SQL
- PySpark/Python
- >6 months of practical Fabric experience in an Enterprise setting
- Power BI/Fabric Semantic Models
- Ability to work with/alongside stakeholders with their own operational…
…for senior leadership as needed. 2. Technical Leadership. AWS Expertise: hands-on experience with AWS services, scalable data solutions, and pipeline design. Strong coding skills in Python, SQL, and PySpark. Optimize data platforms and enhance operational efficiency through innovative solutions. Nice to Have: background in software delivery, with a solid grasp of CI/CD pipelines and DataOps…
…platform to solve their most complex operational challenges. You'll design and implement scalable generative AI workflows, often using technologies like Palantir AIP, while building robust data pipelines with PySpark, Python, and SQL. A key responsibility is executing sophisticated data integrations across distributed systems and enterprise environments, including ERPs and CRMs. You'll collaborate closely with client stakeholders to…
…recommendation systems. Proven leadership experience with the ability to guide projects from conception to deployment. Advanced proficiency in PyTorch or TensorFlow. Strong proficiency in SQL, Python, and distributed processing (PySpark). Strong communication skills with the ability to translate complex data insights into actionable business strategies. Preferred: experience in AdTech; familiarity with MLflow, Databricks, and Azure DevOps.
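Since this listing names MLflow alongside Databricks: a minimal sketch of experiment tracking with the MLflow API. The run name, parameter, and metric values are illustrative only.

```python
# Minimal MLflow tracking sketch; values are invented for illustration.
import mlflow

with mlflow.start_run(run_name="recsys-baseline-sketch"):
    mlflow.log_param("model", "two_tower")
    mlflow.log_metric("val_auc", 0.87)
```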
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
…requirements.
* Ensure best practices in data governance, security, and compliance.
Key Skills & Experience
* Proven experience as an Azure Data Engineer.
* Strong hands-on expertise with Databricks: 5+ years' experience (PySpark, notebooks, clusters, Delta Lake).
* Solid knowledge of Azure services (Data Lake, Synapse, Data Factory, Event Hub).
* Experience working with DevOps teams and CI/CD pipelines.
* Ability…
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
…and lakehouse architectures on Azure, enabling advanced analytics and data-driven decision making across the business.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake.
- Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS).
- Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows…
- …using tools like Terraform, GitHub Actions, or Azure DevOps
Required Skills & Experience
- 3+ years' experience as a Data Engineer working in Azure environments.
- Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling).
- Solid knowledge of Azure cloud services including: Azure Data Lake Storage, Azure Data Factory, Azure Synapse/SQL Pools, Azure Key Vault…
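One hedged sketch of an ADLS-to-Delta ingestion hop of the kind this role describes, using Databricks Auto Loader (the cloudFiles source). Storage paths are invented, and the `spark` session is assumed to be the one Databricks provides.

```python
# Incremental ADLS ingestion with Auto Loader; paths are illustrative.
from pyspark.sql import functions as F

bronze = (
    spark.readStream.format("cloudFiles")          # Databricks Auto Loader
         .option("cloudFiles.format", "json")
         .load("abfss://landing@examplelake.dfs.core.windows.net/events/")
)

silver = bronze.withColumn("ingested_at", F.current_timestamp())

(silver.writeStream
       .format("delta")
       .option("checkpointLocation",
               "abfss://silver@examplelake.dfs.core.windows.net/_chk/events/")
       .trigger(availableNow=True)                 # batch-style incremental run
       .start("abfss://silver@examplelake.dfs.core.windows.net/events/"))
```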
Newton-le-Willows, North West England, United Kingdom Hybrid/Remote Options
Linaker
…including complex formulas & data visualization. Knowledge of Data Warehouse/Data Lake architectures and technologies. Strong working knowledge of a language for data analysis and scripting, such as Python, PySpark, R, Java, or Scala. Experience with any of the following would be desirable but not essential: Microsoft's Fabric data platform; experience with ADF, such as managing pipelines, API…
…company for the duration of the contract. You must have several years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. You will also have a number of years' hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines. ETL process expertise is essential.
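A minimal sketch of the Pandas/NumPy pipeline work this contract describes: deduplicate, coerce types, derive a column, and persist. File and column names are invented for illustration.

```python
# Small Pandas/NumPy cleaning step; transactions.csv is a hypothetical input.
import numpy as np
import pandas as pd

raw = pd.read_csv("transactions.csv", parse_dates=["posted_at"])

clean = (
    raw.drop_duplicates(subset="txn_id")
       .assign(amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"))
       .dropna(subset=["amount"])
)

# NumPy for a vectorised derived column.
clean["log_amount"] = np.log1p(clean["amount"].clip(lower=0))

clean.to_parquet("transactions_clean.parquet", index=False)
```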
…in a quantitative discipline (computer science, maths, engineering), with an expected graduation date of Spring 2027 or later
- Strong proficiency in SQL and Python (with an emphasis on Pandas, PySpark, etc.)
- Excel skills and visualisation skills are preferred
- Excellent attention to detail, organisation, and project management skills
- Strong verbal and written communication skills
- Commitment to the highest ethical standards…
…profiling, ingestion, collation and storage of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and of working in an open-source stack (PySpark/PySql). Quality engineering professionals utilise Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a Data Engineer, you will: Digest…
…profiling, ingestion, collation and storage of data for critical client projects. How to develop and enhance your knowledge of agile ways of working and of working in an open-source stack (PySpark/PySql). Quality engineering professionals utilise Accenture delivery assets to plan and implement quality initiatives to ensure solution quality throughout delivery. As a Data Engineering Manager, you will…
…on-premises databases and cloud).
- Maintain/implement data warehousing solutions and manage large-scale data storage systems (e.g. Microsoft Fabric)
- Build and optimise SQL queries, stored procedures, PySpark notebooks and database objects to ensure data performance and reliability
- Migrate and modernise legacy databases to cloud-based architectures
Database Administration
- Administer, monitor, and optimise database systems (e.g. SQL…
…level SQL and database design (normalisation, indexing, query optimisation)
- Strong experience with ETL/ELT tools, e.g. Azure Data Factory, Databricks, Synapse Pipelines, SSIS
- Experience with Python, PySpark, or Scala for data processing
- Familiarity with CI/CD practices
- Experience with data lake, data warehouse, and Medallion architectures
- Understanding of API integrations and streaming technologies (Event Hub…
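To illustrate the indexing/query-optimisation work this role lists: a hedged sketch using pyodbc against SQL Server. The connection string, table, and index are all invented; a real job would tune against actual query plans.

```python
# Hedged indexing sketch with pyodbc; dbo.orders is a hypothetical table.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=example;DATABASE=dw;"
    "Trusted_Connection=yes;"
)
cur = conn.cursor()

# Covering index so the common lookup below can avoid a full table scan.
cur.execute("""
CREATE INDEX IX_orders_customer_date
ON dbo.orders (customer_id, order_date)
INCLUDE (amount);
""")
conn.commit()

cur.execute(
    "SELECT order_date, SUM(amount) FROM dbo.orders "
    "WHERE customer_id = ? GROUP BY order_date;",
    ("C-1001",),
)
rows = cur.fetchall()
```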
Job Title: Data Quality Engineer
Work Location: Cardiff, UK (twice a month)
The Role: Data Quality Engineer
Responsibilities: As part of a multi-discipline team challenged with building a cloud data platform, you will be responsible for ensuring the quality…
…money laundering, and financial crime across global platforms. The role includes direct line management of 5 engineers. We are looking for: strong full-stack development skills in Python and PySpark, TypeScript (ideally also Node.js/React.js) and AWS (or another cloud provider) as a Technical Lead or Senior Engineer. Line management experience. The client are looking to offer up…
…a best-in-class Lakehouse from scratch, this is the one.
What You'll Be Doing
Lakehouse Engineering (Azure + Databricks)
- Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).
- Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using…
…growing data function.
Tech Stack You'll Work With
- Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses
- Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints
- Languages: PySpark, Spark SQL, Python, Git
- DevOps: Azure DevOps Repos, Pipelines, CI/CD
- Analytics: Power BI, Fabric
What We're Looking For
Experience
- 5-8+ years of Data Engineering… with 2-3+ years delivering production workloads on Azure + Databricks.
- Strong PySpark/Spark SQL and distributed data processing expertise.
- Proven Medallion/Lakehouse delivery experience using Delta Lake.
- Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies.
- Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.
Mindset
- Strong grounding…
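Since this listing calls out SCD type 2 and merge strategies: a hedged sketch of one common SCD2 shape with the Delta Lake Python API, split into a MERGE that closes out current rows and an append of the new versions. All table, column, and key names are invented; production implementations vary.

```python
# Simplified SCD2 sketch with Delta Lake; gold.dim_customer and
# silver.customer_changes are hypothetical tables. `spark` is the active
# session on a Delta-enabled cluster (e.g. Databricks).
from delta.tables import DeltaTable
from pyspark.sql import functions as F

target = DeltaTable.forName(spark, "gold.dim_customer")
updates = spark.read.table("silver.customer_changes")

# Step 1: close out current rows whose tracked attributes changed.
(target.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.hash_diff <> s.hash_diff",   # attributes actually changed
        set={"is_current": "false", "valid_to": "s.effective_date"},
    )
    .execute())

# Step 2: append the incoming rows as the new current version.
(updates.withColumn("is_current", F.lit(True))
        .withColumn("valid_to", F.lit(None).cast("date"))
        .write.format("delta").mode("append").saveAsTable("gold.dim_customer"))
```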