years of experience in Data Engineering, with a focus on cloud platforms (AWS, Azure, GCP); You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); You have extensive experience in ETL/ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF …
Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
/CD) automation, rigorous code reviews, documentation as communication. Preferred Qualifications: Familiarity with data manipulation and experience with Python libraries such as Flask, FastAPI, Pandas, PySpark, and PyTorch. Proficiency in statistics and/or machine learning libraries such as NumPy, matplotlib, seaborn, and scikit-learn. Experience in building …
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured …
Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage …
Greater Bristol Area, United Kingdom Hybrid / WFH Options
Peaple Talent
various departments to gather requirements and ensure data solutions reflect real business needs. Key Experience Required: Deep expertise in SQL, Python, and Spark (particularly PySpark) for building and testing end-to-end pipelines that process both structured and semi-structured datasets. Experience mentoring peers and supporting team growth …
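Pipelines over "structured and semi-structured datasets" come up repeatedly in these listings. A minimal sketch of that idea in plain Python (not PySpark, and with hypothetical field names) — flattening nested JSON events into flat rows:

```python
import json

def flatten_record(raw: str) -> dict:
    """Flatten one semi-structured JSON event into a flat row.

    The field names here are made up for illustration only.
    """
    event = json.loads(raw)
    user = event.get("user", {})
    return {
        "event_id": event["id"],
        "event_type": event.get("type", "unknown"),   # default for missing fields
        "user_id": user.get("id"),
        "country": user.get("address", {}).get("country"),
    }

raw_events = [
    '{"id": 1, "type": "click", "user": {"id": 7, "address": {"country": "UK"}}}',
    '{"id": 2, "user": {"id": 9}}',
]

rows = [flatten_record(r) for r in raw_events]
```

In a Spark job the same shape of logic would typically be expressed with `from_json` and nested-column selection rather than a Python loop.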
Monitor, troubleshoot, and continuously improve data workflows in production environments. Essential Skills & Experience: Hands-on experience with Databricks and Apache Spark. Proficiency in PySpark and SQL. Strong understanding of data lakehouse concepts, data modelling, and data warehousing. Familiarity with one or more cloud platforms (Azure, AWS, or …
Delta Lake/Databricks), PL/SQL, Java/J2EE, React, CI/CD pipelines, and release management. Strong experience in Python, Scala/PySpark, Perl/scripting. Experience as a Data Engineer for Cloud Data Lake activities, especially in high-volume data processing frameworks, ETL development using distributed …
Greater Bristol Area, United Kingdom Hybrid / WFH Options
ADLIB Recruitment | B Corp™
experience as a Senior Data Engineer, with some experience mentoring others. Excellent Python and SQL skills, with hands-on experience building pipelines in Spark (PySpark preferred). Experience with cloud platforms (AWS/Azure). Solid understanding of data architecture, modelling, and ETL/ELT pipelines. Experience using tools like Databricks …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
to store and process data. Document workflows, pipelines, and transformation logic for transparency. Key Skills & Experience: Strong hands-on experience in Python (Pandas, NumPy, PySpark). Experience building ETL/ELT processes. Familiarity with cloud platforms (AWS, Azure, GCP) and big data technologies (e.g., Snowflake, Databricks). Understanding of …
The ability to problem-solve. Knowledge of AWS or equivalent cloud technologies. Knowledge of serverless technologies, frameworks, and best practices. Apache Spark (Scala or PySpark). Experience using AWS CloudFormation or Terraform for infrastructure automation. Knowledge of Scala or an OO language such as Java or C#. SQL or Python development …
to Octopus offices across Europe and the US. Our Data Stack: SQL-based pipelines built with dbt on Databricks; analysis via Python Jupyter notebooks; PySpark in Databricks workflows for heavy lifting; Streamlit and Python for dashboarding; Airflow DAGs with Python for ETL running on Kubernetes and Docker; Django for …
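The Airflow DAGs mentioned in that stack are, at their core, dependency-ordered task graphs. A tiny stand-in for that orchestration idea in plain Python (not Airflow itself; the task names and steps are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical ETL steps; in a stack like the one above these would be Airflow tasks.
def extract(state):   state["raw"] = [3, 1, 2]
def transform(state): state["clean"] = sorted(state["raw"])
def load(state):      state["loaded"] = list(state["clean"])

TASKS = {"extract": extract, "transform": transform, "load": load}
# DAG edges: each task maps to the set of tasks it depends on.
DEPENDENCIES = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}

def run_dag(tasks, deps):
    """Run every task once, each after all of its dependencies."""
    state = {}
    for name in TopologicalSorter(deps).static_order():
        tasks[name](state)
    return state

result = run_dag(TASKS, DEPENDENCIES)
```

Airflow adds scheduling, retries, and distributed execution on top of exactly this topological-ordering core.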
2+ years in a senior role. Deep expertise with AWS services (S3, Glue, Lambda, SageMaker, Redshift, etc.). Strong Python and SQL skills; experience with PySpark a bonus. Familiarity with containerization (Docker), orchestration (Airflow, Step Functions), and infrastructure as code (Terraform/CDK). Solid understanding of machine learning model lifecycle …
added flexibility for diverse migration and integration projects. Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting languages (Python, PySpark, SQL) for data extraction and transformation. Prior experience with data warehousing and data modelling (Star Schema or Snowflake Schema). Skilled in security frameworks …
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience with Azure and Databricks …
Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience using Big Data technologies such as Apache Spark or PySpark. Great communication skills, effectively engaging with senior stakeholders. Nice to haves: Azure/AWS Data Engineering certifications; Databricks certifications. What's in it for …
Engineering. Develop customer relationships and build internal partnerships with account executives and teams. Prior experience coding in a core programming language (e.g., Python, PySpark, or SQL) and willingness to learn a base level of Spark. Proficient with Big Data analytics technologies, including hands-on expertise with complex proofs …
propositions. Develop customer relationships and build internal partnerships with account executives and teams. Prior experience coding in a core programming language (e.g., Python, PySpark, or SQL) and willingness to learn a base level of Spark. Hands-on expertise with complex proofs-of-concept and public cloud platforms (AWS …
/Experience: The ideal candidate will have the following: Proven experience leading data engineering projects, especially cloud migration initiatives. Hands-on experience with Python, PySpark, SQL, and Scala. Deep knowledge of modern data architecture, data modeling, and ELT/ETL practices. Strong grasp of data security, governance, and compliance …
environment with one or more modern programming languages and database querying languages. Proficiency in coding in one or more languages such as Java, Python or PySpark. Experience in Cloud implementation with AWS Data Services: Glue ETL (or EMR), S3, Glue Catalog, Athena, Lambda, Step Functions, EventBridge, ECS, Data De…
finance or equivalent quantitative field - Experience with scripting languages (e.g., Python, Java, R) and big data technologies/languages (e.g., Spark, Hive, Hadoop, PyTorch, PySpark) to build and maintain data pipelines and ETL processes - Demonstrated proficiency in SQL, data analysis, and data visualization tools like Amazon QuickSight to drive …
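SQL proficiency alongside pipeline work is the common thread across nearly every listing above. A self-contained toy example of that combination using the standard-library `sqlite3` module (the table and column names are made up for illustration):

```python
import sqlite3

# Stage some raw rows into an in-memory database, then aggregate with SQL --
# the same extract-load-query shape these roles describe, at miniature scale.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 10.0), ("south", 5.0), ("north", 2.5)],
)

totals = dict(
    conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    ).fetchall()
)
```

On a real warehouse (Redshift, Databricks SQL, Snowflake) the query text would look much the same; only the connection and scale differ.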