DevOps and Infrastructure-as-Code (ideally using Terraform). Take ownership of system observability, stability, and documentation. Requirements: • Strong experience in Python (especially Pandas and PySpark) and SQL • Proven expertise in building data pipelines and working with Databricks and Lakehouse environments • Deep understanding of Azure (or similar cloud platforms), including …
London, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
data platforms using Google Cloud Platform. • Hands-on experience with GCP tools: BigQuery, Dataform, Dataproc, Composer, Pub/Sub • Strong programming skills in Python, PySpark, and SQL • Deep understanding of data engineering concepts, including ETL, data warehousing, and cloud storage • Strong communication skills with the ability to collaborate across …
London, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
data warehouses and data lakes. Expertise in GCP data services including BigQuery, Composer, Dataform, DataProc, and Pub/Sub. Strong programming experience with Python, PySpark, and SQL. Hands-on experience with data modelling, ETL processes, and data quality frameworks. Proficiency with BI/reporting tools such as Looker or …
as Data Factory, Databricks, Synapse (DWH), Azure Functions, and other data analytics tools, including streaming. Experience with Airflow and Kubernetes. Programming skills in Python (PySpark) and scripting languages like Bash. Knowledge of Git, CI/CD operations, and Docker. Basic Power BI knowledge is a plus. Experience deploying cloud infrastructure …
Data Factory, Databricks, Synapse (DWH), Azure Functions, Logic Apps, and other data analytics services, including streaming. Experience with Airflow and Kubernetes. Programming languages: Python (PySpark) and scripting languages like Bash. Knowledge of Git, CI/CD operations, and Docker. Basic knowledge of Power BI is a plus. Experience deploying cloud infrastructure …
added flexibility for diverse migration and integration projects. Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting languages (Python, PySpark, SQL) for data extraction and transformation. Prior experience with data warehousing and data modelling (Star Schema or Snowflake Schema). Skilled in security frameworks …
Penryn, England, United Kingdom Hybrid / WFH Options
Aspia Space
availability and integrity. •3+ years of experience in data engineering, data architecture, or similar roles. •Expert proficiency in Python, including popular data libraries (Pandas, PySpark, NumPy, etc.). •Strong experience with AWS services, specifically S3, Redshift, and Glue (Athena a plus). •Solid understanding of applied statistics. •Hands-on experience …
within the organization. Essential Technical Skills: • Experience in designing and developing data warehouse solutions • Building and configuring multistage Azure deployment pipelines • Advanced SQL, Python, PySpark • Azure Data Lake, Data Factory, Databricks, and Functions • Azure Data Lake Storage Gen2 • Traditional ETL (Informatica, SSIS, etc.) • Azure PowerShell/CLI • Azure Key …
environments and priorities, maintaining effectiveness in dynamic situations. · Proficiency using SQL Server in a highly transactional environment. · Experience in either C# or Python/PySpark for data engineering or development tasks. · Strong understanding of DevOps principles and experience with relevant tools, e.g. Azure DevOps, Git, Terraform for CI/…
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
A A A grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark, and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience with Azure and Databricks. …
London, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
team solving real-world trading challenges. What We’re Looking For: • 8+ years of professional experience in Python application development • Solid knowledge of Pandas, PySpark, and modern testing (PyTest) • Strong background in Azure cloud services (Databricks, ADF, Key Vault, etc.) • Familiarity with DevOps, CI/CD pipelines, and Agile …
City of London, London, United Kingdom Hybrid / WFH Options
Mars
on a multi-year digital transformation journey where your work will unlock real impact. 🌟 What you'll do: • Build robust data pipelines using Python, PySpark, and cloud-native tools • Engineer scalable data models with Databricks, Delta Lake, and Azure tech • Collaborate with analysts, scientists, and fellow engineers to deliver …
patent data. Work with stakeholders across teams to identify key areas for AI-driven innovation and enhancement in data products. Use Python, SQL, PySpark, and related technologies to develop scalable solutions, focusing on large-scale data processing. Qualifications: Demonstrate 3+ years of experience in data science, with a …
CI/CD tools, and full software development lifecycle. Proficiency with Azure Databricks, Data Factory, Storage, Key Vault, Git, SQL (Spark SQL), and Python (PySpark). Certifications: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900). Curiosity about technology, adaptability, agility, and a collaborative mindset. Willingness to mentor …
issues. Contribute to new Finance Engineering initiatives. What We’re Looking For: • Excellent programming skills in Python and strong working knowledge of the Pandas and PySpark libraries • Excellent knowledge of relational databases and SQL • Experience with integration of multiple platforms • Experience with ETL processes • Experience providing first-line support for …
houses. Advanced understanding of and experience with file storage layer management in a data lake environment, including Parquet and Delta file formats. Solid experience with Spark (PySpark) and data processing techniques. Solid understanding of and experience with Azure Synapse tools and services. Some knowledge of Python preferred. Strong analytic skills …
GenAI models and building agent AI systems. Our technology stack: • Python and associated ML/DS libraries (scikit-learn, NumPy, LightGBM, Pandas, TensorFlow, etc.) • PySpark • AWS cloud infrastructure: EMR, ECS, S3, Athena, etc. • MLOps: Terraform, Docker, Airflow, MLflow, Jenkins. More Information: Enjoy fantastic perks like private healthcare & dental insurance …
London, England, United Kingdom Hybrid / WFH Options
Our Future Health UK
Trusted Research Environments (TREs). We’re looking for candidates with strong experience in Python and cloud-native data processing and storage technologies, especially PySpark/Databricks, K8s, Postgres, Dagster, and Azure. If you have solid experience in similar technologies and are looking to widen your knowledge and experience …
architectures with a focus on automation, performance tuning, cost optimisation, and system reliability. Proven proficiency in programming languages such as Python, T-SQL, and PySpark, with practical knowledge of test-driven development. Demonstrated capability in building secure, scalable data solutions on Azure with an in-depth understanding of data …