Proven experience with ETL/ELT, including lakehouse architecture, pipeline design, and batch/stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, and Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in data governance, including Unity Catalog, metadata management, data lineage …
System Integration, Application Development or Data-Warehouse projects, across technologies used in the enterprise space. Software development experience using object-oriented languages (e.g., Python, PySpark) and frameworks. Stakeholder management. Expertise in relational and dimensional modelling, including big data technologies. Exposure across the full SDLC, including testing and deployment. …
machine learning, with a strong portfolio of relevant projects. Proficiency in Python with libraries like TensorFlow, PyTorch, or scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL …
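The pipeline orchestration this listing asks for can be illustrated with a minimal Apache Airflow DAG. This is only a sketch assuming Airflow 2.x; the DAG id, task functions, and schedule are hypothetical, not taken from the listing.

```python
# Hypothetical minimal Airflow 2.x DAG: two dependent pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull raw data from a source API")

def transform() -> None:
    print("clean and reshape the extracted data")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # on older Airflow 2.x use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```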
Slough, England, United Kingdom (Hybrid / WFH Options)
JR United Kingdom
data warehouses and data lakes. Expertise in GCP data services including BigQuery, Composer, Dataform, Dataproc, and Pub/Sub. Strong programming experience with Python, PySpark, and SQL. Hands-on experience with data modelling, ETL processes, and data quality frameworks. Proficiency with BI/reporting tools such as Looker or …
Slough, England, United Kingdom (Hybrid / WFH Options)
JR United Kingdom
team solving real-world trading challenges? What we're looking for: 8+ years of professional experience in Python application development; solid knowledge of Pandas, PySpark, and modern testing (pytest); strong background in Azure cloud services (Databricks, ADF, Key Vaults, etc.); familiarity with DevOps, CI/CD pipelines, and Agile …
Work closely with data scientists and stakeholders. Follow CI/CD and code best practices (Git, testing, reviews). Tech stack & experience: strong Python (Pandas), PySpark, and SQL skills; cloud data tools (Azure Data Factory, Synapse, Databricks, etc.); data integration experience across formats and platforms; strong communication and data literacy …
South East London, England, United Kingdom (Hybrid / WFH Options)
Recruit with Purpose
their data. Overview of responsibilities in the role: Design and maintain scalable, high-performance data pipelines using Azure Data Platform tools such as Databricks (PySpark), Data Factory, and Data Lake Gen2. Develop curated data layers (bronze, silver, gold) optimised for analytics, reporting, and AI/ML, ensuring they meet …
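The bronze/silver/gold layering named in this listing is commonly called the medallion architecture: raw data lands unchanged in bronze, is cleaned and conformed in silver, and is aggregated into analytics-ready gold tables. A minimal PySpark sketch of the pattern follows; it assumes a Databricks-style environment with Delta Lake available, and all paths, table names, and columns are hypothetical.

```python
# Minimal medallion-architecture sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: ingest raw source data as-is, so it can always be replayed.
bronze = spark.read.json("/lake/raw/orders/")
bronze.write.format("delta").mode("append").save("/lake/bronze/orders")

# Silver: clean and conform (deduplicate, fix types, drop bad rows).
silver = (
    spark.read.format("delta").load("/lake/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")

# Gold: aggregate into an analytics-ready, reporting-friendly table.
gold = silver.groupBy("customer_id").agg(
    F.sum("amount").alias("lifetime_value"),
    F.count("order_id").alias("order_count"),
)
gold.write.format("delta").mode("overwrite").save("/lake/gold/customer_ltv")
```

In practice each layer is usually registered as a catalog table (or built with Delta Live Tables) rather than written to bare paths, but the bronze-to-silver-to-gold flow is the same.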
Horsham, England, United Kingdom (Hybrid / WFH Options)
SEGA
valuable insights. Desirable: experience working with enterprise data warehouse or data lake platforms; experience working with a cloud platform such as AWS; experience using PySpark for data manipulation; previous exposure to game or IoT telemetry events and how such data is generated; knowledge of best practices involving data governance …
South East London, England, United Kingdom (Hybrid / WFH Options)
un:hurd music
data pipelines by collecting high-quality, consistent data from external APIs and ensuring seamless incorporation into existing systems. Big Data Management and Storage: Utilize PySpark for scalable processing of large datasets, implementing best practices for distributed computing. Optimize data storage and querying within a data lake environment to enhance …
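Where this listing talks about optimizing storage and querying in a data lake, one standard technique is partitioning large datasets by a frequently filtered column so that queries prune irrelevant files. Below is a minimal, illustrative PySpark sketch under that assumption; the dataset, paths, and column names are hypothetical.

```python
# Hypothetical example: partition event data by date so date-filtered
# queries scan only the matching files (partition pruning).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake-partitioning-sketch").getOrCreate()

events = spark.read.parquet("/lake/raw/events/")

# Derive a partition key and write partitioned Parquet.
(
    events.withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")  # cluster rows by the partition key
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("/lake/curated/events/")
)

# A reader filtering on the partition column touches only one partition.
daily = spark.read.parquet("/lake/curated/events/")
print(daily.filter(F.col("event_date") == "2024-01-01").count())
```

On Delta Lake the same goal is often met with OPTIMIZE plus Z-ordering or liquid clustering instead of (or alongside) directory partitioning.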
coding practices. Required technical skills: proven experience in data warehouse architecture and implementation; expertise in designing and configuring Azure-based deployment pipelines; SQL, Python, PySpark; Azure Data Lake + Databricks; a traditional ETL tool. This is an excellent opportunity for a talented Senior Data Engineer to join a business who are …
global energy trading client in London for a Principal Data Engineer who can offer demonstrable experience in: *Technologies* - Databricks (DLT, performance tuning, cost optimization), PySpark, Python, SQL, ADF. *Capabilities* - leading a data engineering team, being technically hands-on and driving a project to completion, and experience running a scrum team. …
relational databases. Experience of Azure Fabric and its use in data engineering and data management. A high degree of proficiency with tools like Terraform, PySpark, and Databricks. Understanding of data migration concepts, including mapping, transformation, cleansing, and validation. Strong attention to detail and problem-solving ability. Must be comfortable …
South East London, England, United Kingdom (Hybrid / WFH Options)
La Fosse
Collaborate across technical and non-technical teams. Troubleshoot issues and support wider team adoption of the platform. What you'll bring: proficiency in Python, PySpark, Spark SQL or Java; experience with cloud tools (Lambda, S3, EKS, IAM); knowledge of Docker, Terraform, GitHub Actions; understanding of data quality frameworks; strong …
and reliable operations. Required: 6-10 years of experience in software development with a focus on production-grade code. Proficiency in Java, Python, and PySpark; experience with C++ is a plus. Deep expertise in Azure services, including Azure Storage, and familiarity with AWS S3. Strong understanding of data security …
Slough, England, United Kingdom (Hybrid / WFH Options)
JR United Kingdom
modify code, and update monitoring setups. Communicate outages to end users. What we are looking for: proficiency in reading and writing code in Python, PySpark, and Java; a basic understanding of Spark; the ability to navigate pipeline development tools. What's in it for you? Base salary of …
sector. Experience leading technical projects. Skills & Technologies required: Proficiency in cloud-based data engineering tools (ADF, Synapse Analytics, S3, Lambda). Proficiency in using PySpark notebooks for ELT processes. Ability to foster and cultivate a culture of best practices. Strong analytical and problem-solving skills. Ability to work independently …
Southampton, Hampshire, United Kingdom (Hybrid / WFH Options)
Zenergi
of the data team. Skills, knowledge and expertise: proficiency in cloud-based data engineering tools (ADF, Synapse Analytics, S3, Lambda); proficiency in using PySpark notebooks for ELT; fostering and cultivating a culture of best practices; strong analytical and problem-solving skills; ability to work independently and as part …
Analyst: Must have experience with data analytics techniques, ideally applied to real-life objects. Must have a year's professional experience using Python/PySpark/Pandas. Experience with processing data and working with databases/data lakes (SQL). Strong understanding of data manipulation, analysis and processing. Ability to …
2. Technical Leadership. AWS expertise: hands-on experience with AWS services, scalable data solutions, and pipeline design. Strong coding skills in Python, SQL, and PySpark. Optimize data platforms and enhance operational efficiency through innovative solutions. Nice to have: a background in software delivery, with a solid grasp of CI …
Reading, England, United Kingdom (Hybrid / WFH Options)
Proventeq
Git & GitHub (or similar). Production: serving via APIs, e.g. Streamlit, Dash; production skills, particularly MLOps/LLMOps. Good-to-have skills: Fabric Copilot, Copilot Studio; PySpark, R, Julia; Databricks; CRM familiarity, e.g. Salesforce. Able to learn new skills on the fly, staying topical in a rapidly evolving field. Innovating new …
in Computer Science, Statistics, Data Science, or a related field. Extensive experience in clinical data programming and data visualization. Strong experience with Python programming (PySpark, Databricks, pandas, NumPy) is mandatory. Strong SAS/R programming and SQL expertise. Proficiency in CDISC SDTM standards. Excellent …
travel into London a few times a year). My client are looking for an individual that has the following (practical working experience needed): Python (PySpark), Apache Spark, Kafka (event-driven architecture). **Please note, my client are only looking for candidates based within the UK.** Get in touch now …
Databricks Engineer. Job type: Hybrid. Job location: Windsor, UK. Job description: Good hands-on experience with Databricks, ADF, PySpark, and Azure Data Lake services. Create functional design specifications, Azure reference architectures, and assist with other project deliverables as needed. Assess and develop application modernization architecture towards re-platforming to …