experience in Data Management, Data Integration, Data Quality, Data Monitoring, and Analytics. Experience leading technologist teams and managing global stakeholders. Proficiency in Python and PySpark for data engineering. Experience building cloud-native applications on platforms such as AWS, Azure, and GCP, leveraging cloud services for data storage, processing, and analytics. …
Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL (advanced query optimization). Experience building scalable ETL pipelines and data transformations. Knowledge of data quality frameworks and monitoring. Experience with Git …
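Several listings here lean on the Lakehouse/medallion pattern on Databricks. As a rough, hedged sketch of what a bronze-to-silver promotion looks like in PySpark with Delta Lake (the table names, columns, and layer names are hypothetical, and a Databricks-style Delta runtime is assumed):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events landed as-is from the source system (hypothetical table).
bronze = spark.read.table("bronze.raw_orders")

# Silver: deduplicated, typed, and filtered for downstream consumers.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```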
Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
/CD) automation, rigorous code reviews, documentation as communication. Preferred Qualifications: Familiarity with data manipulation and experience with Python libraries like Flask, FastAPI, Pandas, PySpark, and PyTorch, to name a few. Proficiency in statistics and/or machine learning libraries like NumPy, matplotlib, seaborn, and scikit-learn. Experience in building …
Bristol, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
experience as a Senior Data Engineer, with some experience mentoring others. Excellent Python and SQL skills, with hands-on experience building pipelines in Spark (PySpark preferred). Experience with cloud platforms (AWS/Azure). Solid understanding of data architecture, modelling, and ETL/ELT pipelines. Experience using tools like Databricks …
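As a flavour of the "building pipelines in Spark" requirement, a minimal hedged sketch of a common PySpark pattern: keeping only each customer's latest record via a window function (the paths and column names are invented):

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Hypothetical input: one row per customer transaction.
txns = spark.read.parquet("s3://example-bucket/transactions/")

# Rank each customer's transactions by date and keep the most recent one,
# a common deduplication / latest-record pattern in ETL pipelines.
latest = Window.partitionBy("customer_id").orderBy(F.col("txn_date").desc())
result = (
    txns
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

result.write.mode("overwrite").parquet("s3://example-bucket/latest-transactions/")
```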
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
Ibstock, England, United Kingdom Hybrid / WFH Options
Ibstock Plc
data platform. Knowledge, Skills and Experience: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
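Both Ibstock listings single out Delta Lake optimisation. A hedged sketch of the routine maintenance this usually involves (the table name is hypothetical, and OPTIMIZE/ZORDER/VACUUM assume a Databricks or compatible Delta runtime):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a frequently filtered column.
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")

# Remove files no longer referenced by the table, keeping 7 days of history
# so time travel and concurrent readers keep working.
spark.sql("VACUUM silver.orders RETAIN 168 HOURS")
```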
London, England, United Kingdom Hybrid / WFH Options
Noir
Data Engineer - Leading Energy Company - London (Tech Stack: Data Engineer, Databricks, Python, PySpark, Power BI, AWS QuickSight, AWS, TSQL, ETL, Agile Methodologies) Company Overview: Join a dynamic team, a leading player in the energy sector, committed to …
SnowPro certification (Core/Advanced). Experience with GCP (Dataflow and BigQuery). Experience with Azure services (Synapse, Data Factory, Logic Apps). Familiarity with PySpark for distributed data processing. Experience creating CI/CD pipelines using tools such as GitHub Actions. Knowledge of Terraform for Infrastructure as Code. Experience …
modelling concepts. Experience with Azure Synapse Analytics. Understanding of streaming data ingestion processes. Ability to develop/manage Apache Spark data processing applications using PySpark on Databricks. Experience with version control (e.g., Git), DevOps, and CI/CD. Experience with Python. Experience with Microsoft data platform, Microsoft Azure stack …
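Since this listing pairs streaming ingestion with PySpark on Databricks, here is a minimal Structured Streaming sketch (the schema, paths, and checkpoint location are invented; on Databricks, Auto Loader would be the usual choice, but plain file streaming keeps the sketch portable):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

# Incrementally pick up new JSON files as they land (hypothetical path).
events = spark.readStream.schema(schema).json("/mnt/landing/events/")

# Append to a Delta table, with a checkpoint so the stream can restart safely.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events/")
    .outputMode("append")
    .start("/mnt/tables/events/")
)
```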
DynamoDB, or Cassandra. Cloud Infrastructure: Architect and manage AWS backend services using EC2, ECS, S3, Lambda, RDS, and CloudFormation. Big Data Integration (Desirable): Leverage PySpark for distributed data processing and scalable ETL workflows in data engineering pipelines. Polyglot Collaboration: Integrate with backend services or data processors developed in Java …
You’ll build robust data infrastructure to enable smarter audit and risk insights. You’ll design scalable ETL/ELT pipelines in Python (with PySpark) and orchestrate them using tools like Databricks and Snowflake. You’ll work with structured and unstructured data across the firm, integrating APIs, batch loads …
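A hedged sketch of the "integrating APIs, batch loads" part of this role: pull a batch of records from a REST endpoint and land it for downstream pipelines. The endpoint, response shape, and target path are all invented:

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Pull a batch of records from a (hypothetical) REST endpoint.
resp = requests.get("https://api.example.com/v1/audits", timeout=30)
resp.raise_for_status()
records = resp.json()  # assumed: a list of flat JSON objects

# Land the batch as a DataFrame and persist it for downstream pipelines.
df = spark.createDataFrame(records)
df.write.mode("append").format("delta").save("/mnt/raw/audits/")
```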
including code quality, documentation, and security. Requirements: Strong Python programming skills: Experience writing and debugging complex Python code, including experience with libraries like Pandas, PySpark, and related data science libraries. Experience with Apache Spark and Databricks: Deep understanding of Apache Spark principles and experience with Databricks notebooks, clusters, and …
to Octopus offices across Europe and the US. Our Data Stack: SQL-based pipelines built with dbt on Databricks; analysis via Python Jupyter notebooks; PySpark in Databricks workflows for heavy lifting; Streamlit and Python for dashboarding; Airflow DAGs with Python for ETL, running on Kubernetes and Docker; Django for …
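The stack above includes Airflow DAGs written in Python for ETL. A minimal hedged sketch of such a DAG (the task bodies are placeholders and the schedule is arbitrary):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull data from a source system.
    pass

def load():
    # Placeholder: write transformed data to the warehouse.
    pass

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```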
team. Experiment in your domain to improve precision, recall, or cost savings. Requirements: Expert skills in Java or Python. Experience with Apache Spark or PySpark. Experience writing software for the cloud (AWS or GCP). Speaking and writing in English enables you to take part in day-to-day conversations …
machine learning, with a strong portfolio of relevant projects. Proficiency in Python with libraries like TensorFlow, PyTorch, or scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL …
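For the modelling half of this listing, a small self-contained scikit-learn example (the feature file and label column are invented; in practice the features might be produced by a Spark job):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical feature table with a binary "label" column.
df = pd.read_parquet("features.parquet")
X, y = df.drop(columns=["label"]), df["label"]

# Hold out 20% of the rows for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```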
London, England, United Kingdom Hybrid / WFH Options
SR2 | Socially Responsible Recruitment | Certified B Corporation™
5+ years in a Data Engineering position. Strong experience with building data pipelines in the cloud (AWS, Azure or GCP). Excellent knowledge of PySpark, Python and SQL fundamentals. Familiar with Airflow, Databricks and/or BigQuery. Ability to work on messy, complex real-world data challenges. Comfortable working …
City of London, England, United Kingdom Hybrid / WFH Options
Noir
Data Engineer - Leading Energy Company - London (Tech Stack: Data Engineer, Databricks, Python, PySpark, Power BI, AWS QuickSight, AWS, TSQL, ETL, Agile Methodologies) Company Overview: Join a dynamic team, a leading player in the energy sector, committed to innovation and sustainable solutions. Our client is seeking a talented Data Engineer …
London, England, United Kingdom Hybrid / WFH Options
Bounce Digital
and external (eBay APIs) sources. Define data quality rules, set up monitoring/logging, and support architecture decisions. What You Bring: Strong SQL & Python (PySpark); hands-on with GCP or AWS. Experience with modern ETL tools (dbt, Airflow, Fivetran). BI experience (Looker, Power BI, Metabase); Git and basic CI …
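On "define data quality rules, set up monitoring/logging": a hedged sketch of simple rule checks in plain PySpark, without committing to any particular framework (the table, columns, and 1% threshold are invented):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("staging.listings")  # hypothetical staging table

total = df.count()
null_ids = df.filter(F.col("listing_id").isNull()).count()
dupes = total - df.dropDuplicates(["listing_id"]).count()

# Simple rule set: log each check, fail the load if >1% of rows violate it.
for name, violations in [("null_ids", null_ids), ("duplicate_ids", dupes)]:
    rate = violations / max(total, 1)
    print(f"check={name} violations={violations} rate={rate:.2%}")
    assert rate <= 0.01, f"data quality check failed: {name}"
```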
Cardiff, Wales, United Kingdom Hybrid / WFH Options
Creditsafe
business-ready" data that powers our products. We work in an Agile environment using modern tools and technologies including AWS, Glue, Step Functions, Athena, PySpark, SQL, and Python. Our processes are metadata-driven to ensure scalable, consistent, and reliable data delivery. Your Role You’ll work closely with the More ❯
Redshift/BigQuery) (Required). Experience with infrastructure as code (e.g. Terraform) (Required). Proficiency in using Python both for scheduling (e.g. Airflow) and manipulating data (PySpark) (Required). Experience building deployment pipelines (e.g. Azure Pipelines) (Required). Deployment of web apps using Kubernetes (preferably ArgoCD & Helm) (Preferred). Experience working on Analytics and …
Staines-upon-Thames, England, United Kingdom Hybrid / WFH Options
Novuna
Good technical knowledge of Azure Databricks, Azure Data Factory, Terraform, and Azure DevOps. Strong grounding in both SQL and Python, with extensive experience using PySpark. Excellent communication skills and the ability to collaborate with different stakeholders and disciplines. Ability to mentor and coach other members of the team. What …
London, England, United Kingdom Hybrid / WFH Options
Funding Circle
and architecture. Maintain comprehensive technical documentation for data systems, processes, and tools. What we're looking for: Strong proficiency in SQL/Python/PySpark and/or other languages relevant to data processing. Experience with Infrastructure as Code, e.g. Terraform, for managing cloud infrastructure. Experience designing, implementing, and …