Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
CI/CD automation, rigorous code reviews, documentation as communication. Preferred Qualifications: Familiarity with data manipulation and experience with Python libraries such as Flask, FastAPI, Pandas, PySpark, and PyTorch, to name a few. Proficiency in statistics and/or machine learning libraries like NumPy, matplotlib, seaborn, scikit-learn, etc. Experience in building …
Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage …
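The batch ETL/ELT skills this listing asks for can be illustrated with a minimal sketch. This is plain Python with the standard-library `sqlite3` module standing in for a Lakehouse stack; the table schema, column names, and cleansing rules are invented for the example.

```python
import sqlite3

def run_batch_etl(rows):
    """Minimal batch ETL: extract raw rows, transform, load into SQLite.

    A toy stand-in for the pipeline work described above; the schema
    and cleansing rules are hypothetical.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")

    # Transform: drop malformed records and normalise region names.
    cleaned = [
        (r["region"].strip().upper(), float(r["amount"]))
        for r in rows
        if r.get("region") and r.get("amount") is not None
    ]

    # Load in one batch.
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    conn.commit()
    return conn

raw = [
    {"region": " north ", "amount": "10.5"},
    {"region": "south", "amount": 4},
    {"region": None, "amount": 99},  # dropped: missing region
]
conn = run_batch_etl(raw)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 14.5
```

A production version would swap the list comprehension for a PySpark or Spark SQL transformation and the SQLite load for a warehouse or Delta table write, but the extract-transform-load shape is the same.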
Bristol, South West England, United Kingdom Hybrid / WFH Options
Peaple Talent
various departments to gather requirements and ensure data solutions reflect real business needs. Key Experience Required: Deep expertise in SQL, Python, and Spark (particularly PySpark) for building and testing end-to-end pipelines that process both structured and semi-structured datasets. Experience mentoring peers and supporting team growth …
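The "structured and semi-structured" requirement above usually means flattening irregular records (JSON events, say) into uniform rows. A minimal sketch, using only the standard library; the event shape and field names are invented:

```python
import json

def flatten_events(raw_lines):
    """Flatten semi-structured JSON events into uniform (structured) rows.

    A real pipeline would do the same step in PySpark over much larger
    inputs; the fields here are hypothetical.
    """
    rows = []
    for line in raw_lines:
        event = json.loads(line)
        # Nested, optional fields are pulled up with defaults so every
        # output row has the same columns.
        rows.append({
            "user": event["user"],
            "action": event.get("action", "unknown"),
            "city": event.get("meta", {}).get("city"),
        })
    return rows

lines = [
    '{"user": "a1", "action": "click", "meta": {"city": "Leeds"}}',
    '{"user": "b2"}',
]
rows = flatten_events(lines)
print(rows)
```

The key design point is that missing or nested fields are resolved at ingestion, so downstream SQL never has to handle ragged records.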
London, South East England, United Kingdom Hybrid / WFH Options
Radley James
working in cloud-native environments (AWS preferred) Strong proficiency with Python and SQL Extensive hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation) Solid understanding …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
ADLIB Recruitment
experience as a Senior Data Engineer, with some experience mentoring others Excellent Python and SQL skills, with hands-on experience building pipelines in Spark (PySpark preferred) Experience with cloud platforms (AWS/Azure) Solid understanding of data architecture, modelling, and ETL/ELT pipelines Experience using tools like Databricks …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
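The lakehouse/Delta Lake maintenance this listing mentions centres on the merge (upsert) pattern: apply a batch of changes to a table, updating matched keys and inserting new ones. A toy version using SQLite's `ON CONFLICT` clause from the standard library; the table and values are invented for illustration:

```python
import sqlite3

# A minimal sketch of the MERGE/upsert pattern behind Delta Lake table
# maintenance, using SQLite's ON CONFLICT clause; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_product (sku TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO dim_product VALUES ('A', 1.0), ('B', 2.0)")

updates = [("B", 2.5), ("C", 3.0)]  # B changes, C is new
conn.executemany(
    """INSERT INTO dim_product (sku, price) VALUES (?, ?)
       ON CONFLICT(sku) DO UPDATE SET price = excluded.price""",
    updates,
)
result = dict(conn.execute("SELECT sku, price FROM dim_product ORDER BY sku"))
print(result)  # {'A': 1.0, 'B': 2.5, 'C': 3.0}
```

In Databricks the same logic would be a `MERGE INTO` statement over a Delta table, with the added benefit of ACID history and time travel.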
to store and process data. Document workflows, pipelines, and transformation logic for transparency. Key Skills & Experience: Strong hands-on experience in Python (Pandas, NumPy, PySpark). Experience building ETL/ELT processes. Familiarity with cloud platforms (AWS, Azure, GCP) and big data technologies (e.g., Snowflake, Databricks). Understanding of …
Salisbury, South West England, United Kingdom Hybrid / WFH Options
Ascentia Partners
encryption best practices. Nice-to-Have Skills Exposure to AWS Redshift, Glue, or Snowflake. Familiarity with BigQuery and Google Analytics APIs. Proficiency in Python, PySpark, or dbt for data transformations. Background in insurance, especially in pricing analytics or actuarial data. …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Low Carbon Contracts Company
a highly numerate subject is essential At least 2 years of Python development experience, including scientific computing and data science libraries (NumPy, pandas, SciPy, PySpark) Strong understanding of object-oriented design principles for usability and maintainability Experience with Git in a version-controlled environment Knowledge of parallel computing techniques …
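The "parallel computing techniques" this listing asks about can be sketched with the standard-library `concurrent.futures` module: independent numerical tasks fanned out across a worker pool. The workload here is an invented stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(n):
    """Stand-in for one independent numerical task (invented workload)."""
    return sum(i * i for i in range(n))

def run_parallel(sizes):
    # Fan independent tasks out across workers; map() keeps result order
    # aligned with the input. CPU-bound work would normally use a
    # ProcessPoolExecutor (or NumPy/PySpark) instead of threads, since
    # threads in CPython do not parallelise pure-Python arithmetic.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(simulate, sizes))

results = run_parallel([10, 100])
print(results)  # [285, 328350]
```

The same shape (pure function mapped over a partitioned input) is exactly what PySpark generalises to a cluster.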
machine learning, with a strong portfolio of relevant projects. Proficiency in Python with libraries like TensorFlow, PyTorch, or Scikit-learn for ML, and Pandas, PySpark, or similar for data processing. Experience designing and orchestrating data pipelines with tools like Apache Airflow, Spark, or Kafka. Strong understanding of SQL, NoSQL …
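Pipeline orchestration with a tool like Apache Airflow boils down to declaring tasks as a dependency graph and executing them in topological order. A minimal sketch with the standard-library `graphlib` module; the task names and dependencies are invented:

```python
from graphlib import TopologicalSorter

# A toy Airflow-style DAG: each key maps a task to its upstream
# dependencies. Task names are hypothetical.
tasks = {
    "extract": [],
    "transform": ["extract"],
    "train": ["transform"],
    "report": ["transform"],
}

ran = []

def run_dag(dag):
    # static_order() yields every task only after all of its
    # dependencies; a real orchestrator would execute each task here
    # (and could run independent tasks like train/report concurrently).
    for name in TopologicalSorter(dag).static_order():
        ran.append(name)
    return ran

run_dag(tasks)
print(ran)
```

Airflow adds scheduling, retries, and distributed workers on top, but the dependency-ordering core is the same.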
application development, testing, and operational stability, especially with data pipelines. Proficiency in Python and data manipulation libraries such as NumPy and pandas. Experience with PySpark, including analysis, pipeline building, tuning, and feature engineering. Knowledge of SQL and NoSQL databases, including joins, aggregations, and tuning. Experience with ETL processes and …
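The joins and aggregations this listing names can be shown end to end with the standard-library `sqlite3` module; the two-table schema and data are invented for the example:

```python
import sqlite3

# Join each order to its customer, then aggregate per customer —
# the bread-and-butter SQL pattern listed above. Schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (id INTEGER, name TEXT);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

rows = conn.execute("""
    SELECT c.name, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Ada', 2, 15.0), ('Grace', 1, 7.5)]
```

Tuning, in the sense the listing means, is mostly about indexing the join keys (here `orders.customer_id`) so the join avoids full scans.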
added flexibility for diverse migration and integration projects. Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting languages (Python, PySpark, SQL) for data extraction and transformation. Prior experience with Data warehousing and Data modelling (Star Schema or Snowflake Schema). Skilled in security frameworks …
Newcastle upon Tyne, Tyne & Wear Hybrid / WFH Options
Client Server
AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience with Azure and Databricks …
performance Utilise Azure Databricks and adhere to code-based deployment practices Essential Skills: Over 3 years of experience with Databricks (including Lakehouse, Delta Lake, PySpark, Spark SQL) Strong proficiency in SQL with 5+ years of experience Extensive experience with Azure Data Factory Proficiency in Python programming Excellent stakeholder …
Edinburgh, Central Scotland, United Kingdom Hybrid / WFH Options
Peaple Talent
Strong experience designing and delivering data solutions in Databricks Proficient with SQL and Python Experience using Big Data technologies such as Apache Spark or PySpark Great communication skills, effectively participating with Senior Stakeholders Nice to haves: Azure/AWS Data Engineering certifications Databricks certifications What's in it for …
London, South East England, United Kingdom Hybrid / WFH Options
Peaple Talent
Strong experience designing and delivering data solutions in Databricks Proficient with SQL and Python Experience using Big Data technologies such as Apache Spark or PySpark Great communication skills, effectively participating with Senior Stakeholders Nice to haves: Azure/AWS Data Engineering certifications Databricks certifications What's in it for …
Somerset, South West England, United Kingdom Hybrid / WFH Options
CA Tech Talent
initiatives Key Requirements of the Database Engineer: Proven experience with Databricks, Azure Data Lake, and Delta Live Tables Strong programming in Python and Spark (PySpark or Scala) Solid knowledge of data modelling, warehousing, and integration concepts Comfortable working in Agile teams, with CI/CD and Azure DevOps experience …
System Integration, Application Development or Data-Warehouse projects, across technologies used in the enterprise space. Software development experience using: Object-oriented languages (e.g., Python, PySpark) and frameworks. Stakeholder Management. Expertise in relational and dimensional modelling, including big data technologies. Exposure across the full SDLC process, including testing and deployment. …
our ambitious data initiatives and future projects. IT Manager - Microsoft Azure - The Skills You'll Need to Succeed: Mastery of Databricks, Python/PySpark and SQL/SparkSQL. Experience in Big Data/ETL (Spark and Databricks preferred). Expertise in Azure. Proficiency with versioning control (Git …
Bradford, Yorkshire and the Humber, United Kingdom
Peregrine
Delta Lake, Azure SQL, Purview and APIM. Proficiency in developing CI/CD data pipelines and strong programming skills in Python, SQL, Bash, and PySpark for automation. Strong aptitude for data pipeline monitoring and an understanding of data security practices such as RBAC and encryption. Implemented data and pipeline …
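The RBAC (role-based access control) practice this listing mentions reduces to a simple lookup: a user's roles map to permission sets, and access is granted when any role carries the required permission. A minimal sketch; the role names and permission strings are invented:

```python
# Hypothetical role-to-permission mapping of the kind used to gate
# data-platform operations (the names here are illustrative only).
ROLE_PERMISSIONS = {
    "data_engineer": {"pipeline:run", "pipeline:read"},
    "analyst": {"pipeline:read"},
}

def is_allowed(roles, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_allowed(["analyst"], "pipeline:run"))        # False
print(is_allowed(["data_engineer"], "pipeline:run"))  # True
```

Platforms like Azure and Unity Catalog implement the same model declaratively (role assignments on scopes), but the membership check is conceptually identical.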
London, South East England, United Kingdom Hybrid / WFH Options
83zero
similar tools is valuable (Collibra, Informatica Data Quality/MDM/Axon etc.) Data Architecture experience is a bonus Python, Scala, Databricks Spark and PySpark with Data Engineering skills Ownership and ability to drive implementation/solution design …
practices in data management and governance, guide in structuring cloud environments, and support data initiatives and future projects. Qualifications: Proficiency in Databricks, Python/PySpark, and SQL/SparkSQL. Experience with Big Data/ETL processes, preferably Spark and Databricks. Expertise in Azure cloud platform. Knowledge of version control …