Leeds, England, United Kingdom Hybrid / WFH Options
Scott Logic
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You’ve worked with technologies such as Python, Spark, SQL, PySpark, and Power BI. You’ve got a background in software engineering, including front-end technologies like JavaScript. You’re a problem-solver, pragmatically exploring options …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Scott Logic
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You’ve worked with technologies such as Python, Spark, SQL, PySpark, and Power BI. You’ve got a background in software engineering, including front-end technologies like JavaScript. You’re a problem-solver, pragmatically exploring options …
Bristol, England, United Kingdom Hybrid / WFH Options
Scott Logic
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You’ve worked with technologies such as Python, Spark, SQL, PySpark, and Power BI. You’ve got a background in software engineering, including front-end technologies like JavaScript. You’re a problem-solver, pragmatically exploring options …
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You’ve worked with technologies such as Python, Spark, SQL, PySpark, and Power BI. You’re a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured …
Advisory Board meetings. What We’re Looking for in You: Experience of working as a Data Engineer. Highly proficient in SQL, Python and Spark (PySpark) for developing and testing data engineering pipelines and products to ingest and transform structured and semi-structured data. Understanding of data modelling techniques and …
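For illustration, a minimal PySpark sketch of the ingest-and-transform work roles like this describe; the file paths and column names are hypothetical, not taken from the listing:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# Structured CSV and semi-structured JSON inputs (paths are hypothetical)
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")
events = spark.read.json("/data/raw/events.json")

# Semi-structured data gets a schema inferred on read; inspect it before use
events.printSchema()

# A simple transformation: cast the amount and aggregate per customer per day
daily_totals = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").parquet("/data/curated/daily_totals")
```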
on different GCP services (e.g. Dataflow, Cloud Functions) or Azure services. Develop and maintain scalable data pipelines using GCP or Microsoft Azure services, leveraging PySpark, Python, and Databricks. The platform development is based on Python and Terraform. Furthermore, you will work with SQL-related technologies like Google BigQuery …
and support architectural decisions as a recognised Databricks expert. Essential Skills & Experience: Demonstrable expertise with Databricks and Apache Spark in production environments. Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP). In-depth understanding of Lakehouse concepts, medallion architecture, and modern …
Mentor engineering teams and support architectural decisions as a recognised Databricks expert. Demonstrable expertise with Databricks and Apache Spark in production environments. Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP). In-depth understanding of Lakehouse concepts, medallion architecture, and modern …
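As a rough sketch of the medallion pattern these Databricks listings reference, bronze-to-silver promotion might look like the following; the table and path names are invented:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Bronze: land raw data as-is in a Delta table (source path is hypothetical)
raw = spark.read.json("/mnt/landing/customers/")
raw.write.format("delta").mode("append").saveAsTable("bronze.customers")

# Silver: cleanse and conform the bronze data before analytical use
silver = (
    spark.table("bronze.customers")
    .dropDuplicates(["customer_id"])
    .filter(F.col("customer_id").isNotNull())
    .withColumn("ingested_at", F.current_timestamp())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.customers")
```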
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
Data Engineer working in cloud environments (AWS). Strong proficiency with Python and SQL. Extensive hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation). Solid understanding …
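For context on the Glue-plus-PySpark pairing named above, a skeleton Glue job might look like this; the database, table, and bucket names are placeholders, not details from the listing:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are placeholders)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics", table_name="raw_orders"
)

# Convert to a Spark DataFrame for standard PySpark transformations
df = dyf.toDF().dropDuplicates()
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```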
experience in Data Management, Data Integration, Data Quality, Data Monitoring, and Analytics. Experience leading technologist teams and managing global stakeholders. Proficiency in Python and PySpark for data engineering. Experience building cloud-native applications on platforms such as AWS, Azure, or GCP, leveraging cloud services for data storage, processing, and analytics. …
Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL (advanced query optimization). Experience building scalable ETL pipelines and data transformations. Knowledge of data quality frameworks and monitoring. Experience with Git …
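A common concrete instance of the Delta Lake ETL skills listed here is an incremental upsert; a minimal sketch, assuming a Databricks or delta-spark environment and invented table names:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New and changed rows staged by an upstream extract (path is hypothetical)
updates = spark.read.parquet("/mnt/staging/customers/")

target = DeltaTable.forName(spark, "silver.customers")

# Upsert: update matched rows, insert new ones
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```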
Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
/CD) automation, rigorous code reviews, documentation as communication. Preferred Qualifications: Familiarity with data manipulation and experience with Python libraries like Flask, FastAPI, Pandas, PySpark, and PyTorch, to name a few. Proficiency in statistics and/or machine learning libraries like NumPy, matplotlib, seaborn, scikit-learn, etc. Experience in building …
/Databricks), PL/SQL, Java/J2EE, React, CI/CD pipelines, and release management. Strong skills and experience in Python, Scala/PySpark, PL/SQL, Perl/scripting. Skilled Data Engineer for Cloud Data Lake activities, with industry experience (preferably in Financial Services) in building enterprise …
Bristol, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
experience as a Senior Data Engineer, with some experience mentoring others. Excellent Python and SQL skills, with hands-on experience building pipelines in Spark (PySpark preferred). Experience with cloud platforms (AWS/Azure). Solid understanding of data architecture, modelling, and ETL/ELT pipelines. Experience using tools like Databricks …
of audiences. Able to provide coaching and training to less experienced members of the team. Essential skills: programming languages such as Spark, Java, Python, PySpark, Scala, etc. (minimum 2). Extensive Big Data hands-on experience (coding/configuration/automation/monitoring/security/etc.) is a …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
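"Delta Lake optimisation" in listings like this usually means file compaction and data-skipping maintenance; a minimal sketch via Spark SQL, with an invented table and clustering column:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows for faster reads
# (table and ZORDER column are illustrative, not from the listing)
spark.sql("OPTIMIZE silver.sales ZORDER BY (customer_id)")

# Remove data files no longer referenced by the table (default retention applies)
spark.sql("VACUUM silver.sales")
```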
Ibstock, England, United Kingdom Hybrid / WFH Options
Ibstock Plc
data platform. Knowledge, Skills and Experience: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
London, England, United Kingdom Hybrid / WFH Options
Noir
Data Engineer - Leading Energy Company - London (Tech Stack: Databricks, Python, PySpark, Power BI, AWS QuickSight, AWS, T-SQL, ETL, Agile Methodologies) Company Overview: Join a dynamic team, a leading player in the energy sector, committed to …
SnowPro certification (Core/Advanced). Experience with GCP (Dataflow and BigQuery). Experience with Azure services (Synapse, Data Factory, Logic Apps). Familiarity with PySpark for distributed data processing. Experience creating CI/CD pipelines using tools such as GitHub Actions. Knowledge of Terraform for Infrastructure as Code. Experience …
modelling concepts. Experience with Azure Synapse Analytics. Understanding of streaming data ingestion processes. Ability to develop/manage Apache Spark data processing applications using PySpark on Databricks. Experience with version control (e.g., Git), DevOps, and CI/CD. Experience with Python. Experience with Microsoft data platform, Microsoft Azure stack …
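A minimal Structured Streaming sketch of the streaming-ingestion-plus-PySpark-on-Databricks combination this listing names; the schema, paths, and table name are assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.getOrCreate()

# Streaming file sources need an explicit schema (fields are invented)
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

# Continuously pick up new JSON files from a landing path into a Delta table
query = (
    spark.readStream.schema(schema).json("/mnt/landing/events/")
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events/")
    .toTable("bronze.events")
)
```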
DynamoDB, or Cassandra. Cloud Infrastructure: Architect and manage AWS backend services using EC2, ECS, S3, Lambda, RDS, and CloudFormation. Big Data Integration (Desirable): Leverage PySpark for distributed data processing and scalable ETL workflows in data engineering pipelines. Polyglot Collaboration: Integrate with backend services or data processors developed in Java …
You’ll build robust data infrastructure to enable smarter audit and risk insights. You’ll design scalable ETL/ELT pipelines in Python (with PySpark) and orchestrate them using tools like Databricks and Snowflake. You’ll work with structured and unstructured data across the firm, integrating APIs, batch loads …
including code quality, documentation, and security. Requirements: Strong Python programming skills: Experience writing and debugging complex Python code, including libraries like Pandas, PySpark, and related data science tooling. Experience with Apache Spark and Databricks: Deep understanding of Apache Spark principles and experience with Databricks notebooks, clusters, and …
to Octopus offices across Europe and the US. Our Data Stack: SQL-based pipelines built with dbt on Databricks; analysis via Python Jupyter notebooks; PySpark in Databricks workflows for heavy lifting; Streamlit and Python for dashboarding; Airflow DAGs with Python for ETL, running on Kubernetes and Docker; Django for …
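To make the Airflow piece of that stack concrete, a minimal DAG sketch; the DAG id, tasks, and schedule are invented, and the schedule argument assumes Airflow 2.4+:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data")  # placeholder for a real extract step


def load():
    print("load into the warehouse")  # placeholder for a real load step


with DAG(
    dag_id="example_etl",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # 'schedule' requires Airflow 2.4+
    catchup=False,
):
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```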