US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, dbt, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures, such as data mesh, lakehouse, data vault, and data warehouse. Our data …
… technical subjects. You have experience with cloud providers: proficiency in AWS, Google Cloud Platform, or Azure. What Would Make You Stand Out: experience with PySpark and structured streaming; experience orchestrating complex workflows using tools such as Airflow, Dagster, or Prefect; familiarity with infrastructure as code and with CI …
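For context on the structured-streaming requirement above, here is a minimal PySpark sketch, assuming a landing folder of JSON events; the path, schema, and checkpoint location are hypothetical placeholders, not from any listing:

```python
# Minimal PySpark structured-streaming sketch; paths and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read newline-delimited JSON files as they arrive in a landing folder.
events = spark.readStream.schema(schema).json("/data/landing/events/")  # hypothetical path

# Windowed aggregate with a watermark to bound state held for late data.
hourly = (
    events.withWatermark("event_time", "15 minutes")
    .groupBy(F.window("event_time", "1 hour"))
    .agg(F.sum("amount").alias("total_amount"))
)

# The checkpoint directory makes the query restartable after failure.
query = (
    hourly.writeStream.outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/hourly")  # hypothetical
    .start()
)
query.awaitTermination()
```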
… ownership for designing and building innovative data solutions. Work with a mix of cloud services (largely AWS and Snowflake), from a core of Python, PySpark, and SQL, to bring together best-in-class technologies to meet our clients’ needs. Shape the development and rollout of cutting-edge analytics programmes …
London, England, United Kingdom Hybrid / WFH Options
DATAPAO
What does it take to fit the bill? Technical Expertise: 5+ years in Data Engineering, focusing on cloud platforms (AWS, Azure, GCP); proven experience with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); extensive ETL/ELT and data pipeline orchestration experience (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, Step Functions); proficiency …
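The Databricks/Delta Lake pairing recurs across these listings; as a hedged illustration, a minimal upsert sketch, assuming a Databricks runtime (or delta-spark installed locally) and a Delta table already existing at a hypothetical path:

```python
# Hedged Delta Lake upsert (MERGE) sketch; table path and columns are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.createDataFrame(
    [(1, "alice", "2024-01-02"), (2, "bob", "2024-01-02")],
    ["customer_id", "name", "updated_at"],
)

# Assumes a Delta table already exists at this (hypothetical) location.
target = DeltaTable.forPath(spark, "/mnt/silver/customers")

# Standard MERGE pattern: update matching rows, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```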
London, England, United Kingdom Hybrid / WFH Options
Datapao
… years of experience in Data Engineering, with a focus on cloud platforms (AWS, Azure, GCP); you have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); you have extensive experience in ETL/ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF …
Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage …
… as Flask, Django, or FastAPI. Proficiency in Python 3.x and libraries like Pandas, NumPy, and Dask. Experience with data manipulation and processing frameworks (e.g., PySpark, Apache Beam). Strong knowledge of databases, including SQL and NoSQL (e.g., PostgreSQL, MongoDB). Familiarity with ETL processes and tools such as Airflow …
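To ground the FastAPI-plus-Pandas pairing this listing asks for, a minimal sketch of an endpoint serving a Pandas aggregate; the CSV path and column names are hypothetical:

```python
# Tiny FastAPI endpoint over a Pandas DataFrame; input file is hypothetical.
import pandas as pd
from fastapi import FastAPI

app = FastAPI()
df = pd.read_csv("data/sample.csv")  # hypothetical input, loaded once at startup

@app.get("/stats/{column}")
def column_stats(column: str) -> dict:
    # Guard against unknown columns before aggregating.
    if column not in df.columns:
        return {"error": f"unknown column: {column}"}
    series = df[column]
    return {"mean": float(series.mean()), "max": float(series.max())}
```

Run with any ASGI server (e.g., `uvicorn module_name:app`, module name hypothetical).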
as Flask, Django, or FastAPI. Proficiency in Python 3.x and libraries like Pandas, NumPy, and Dask. Experience with data manipulation and processing frameworks (e.g., PySpark, Apache Beam). Strong knowledge of databases, including SQL and NoSQL (e.g., PostgreSQL, MongoDB). Familiarity with ETL processes and tools such as Airflow More ❯
… reaching this vision, instilling a culture of quality, reliability, and innovation in the team. Develop and deploy automated ETL/ELT pipelines using Python, PySpark, and SQL, to bring together best-in-class technologies to meet our clients’ needs. Design data and solution architectures, helping to assure they’re …
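A minimal batch ETL sketch mixing PySpark with SQL, in the spirit of the pipeline work described above; the source and destination paths are hypothetical:

```python
# Batch ETL sketch: extract CSV, transform via Spark SQL, load as Parquet.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-demo").getOrCreate()

# Extract: raw CSV from a hypothetical landing path.
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")
raw.createOrReplaceTempView("orders")

# Transform: expressed in SQL, as the listing describes.
daily = spark.sql("""
    SELECT order_date,
           COUNT(*) AS order_count,
           SUM(CAST(amount AS DOUBLE)) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Load: partitioned Parquet output for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_orders")
```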
… surfacing issues. Qualifications: We seek experienced Data Engineers passionate about data, eager to implement best practices in a dynamic environment. Proficiency in Spark/PySpark, Azure data technologies, Python or Scala, SQL. Experience with testing frameworks like pytest or ScalaTest. Knowledge of open table formats such as Delta, Iceberg …
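Since pytest comes up alongside Spark above, here is a hedged sketch of unit-testing a small PySpark transform with pytest; the transform and fixture data are illustrative, not from the role:

```python
# Unit-testing a PySpark transform with pytest; transform and data are illustrative.
import pytest
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

def add_total(df: DataFrame) -> DataFrame:
    """Toy transform under test: total = price * quantity."""
    return df.withColumn("total", F.col("price") * F.col("quantity"))

@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps the test suite fast and hermetic.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_total(spark):
    df = spark.createDataFrame([(2.0, 3), (1.5, 2)], ["price", "quantity"])
    totals = {row["total"] for row in add_total(df).collect()}
    assert totals == {6.0, 3.0}
```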
… storage, data pipelines to ingest and transform data, and querying and reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured …
London, England, United Kingdom Hybrid / WFH Options
Scott Logic Ltd
… storage, data pipelines to ingest and transform data, and querying and reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You’ve got a background in software engineering. You’re a problem-solver, pragmatically exploring options and finding effective solutions. An understanding …
Mentor engineering teams and support architectural decisions as a recognised Databricks expert. Demonstrable expertise with Databricks and Apache Spark in production environments. Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP). In-depth understanding of Lakehouse concepts, medallion architecture, and modern …
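The medallion architecture mentioned above typically promotes raw "bronze" records into a cleaned "silver" layer; a minimal sketch, assuming Delta tables at hypothetical mount paths:

```python
# Bronze-to-silver promotion sketch (medallion pattern); paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/bronze/transactions")  # hypothetical

# Silver layer: deduplicate, enforce types, and drop obviously bad rows.
silver = (
    bronze.dropDuplicates(["transaction_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull() & (F.col("amount") >= 0))
)

silver.write.format("delta").mode("overwrite").save("/mnt/silver/transactions")
```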
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
… Data Engineer working in cloud environments (AWS). Strong proficiency with Python and SQL. Extensive hands-on experience with AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation). Solid understanding …
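As one concrete instance of the AWS data tooling listed above, a hedged boto3 sketch that submits an Athena query; the region, database name, query, and S3 output location are all hypothetical:

```python
# Submitting an Athena query with boto3; all identifiers are hypothetical.
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

response = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical bucket
)

# Athena runs asynchronously; poll get_query_execution with this id for status.
print("Query execution id:", response["QueryExecutionId"])
```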
Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL (advanced query optimization). Experience building scalable ETL pipelines and data transformations. Knowledge of data quality frameworks and monitoring. Experience with Git …
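The "data quality frameworks and monitoring" line above often starts, in practice, with simple assertion-style gates; a minimal PySpark sketch, with a hypothetical table path and rules:

```python
# Lightweight data-quality gate in PySpark; table path and rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("delta").load("/mnt/silver/customers")  # hypothetical

total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()

# Fail the run loudly rather than writing bad data downstream.
assert null_ids == 0, f"{null_ids} rows with null customer_id"
assert dupes == 0, f"{dupes} duplicate customer_id values"
print(f"quality checks passed on {total} rows")
```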
London, England, United Kingdom Hybrid / WFH Options
Noir
Data Engineer - Leading Energy Company - London (Tech Stack: Data Engineer, Databricks, Python, PySpark, Power BI, AWS QuickSight, AWS, T-SQL, ETL, Agile Methodologies) Company Overview: Join a dynamic team, a leading player in the energy sector, committed to …
… modelling concepts. Experience with Azure Synapse Analytics. Understanding of streaming data ingestion processes. Ability to develop/manage Apache Spark data processing applications using PySpark on Databricks. Experience with version control (e.g., Git), DevOps, and CI/CD. Experience with Python. Experience with the Microsoft data platform, Microsoft Azure stack …
… DynamoDB, or Cassandra. Cloud Infrastructure: Architect and manage AWS backend services using EC2, ECS, S3, Lambda, RDS, and CloudFormation. Big Data Integration (Desirable): Leverage PySpark for distributed data processing and scalable ETL workflows in data engineering pipelines. Polyglot Collaboration: Integrate with backend services or data processors developed in Java …
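To illustrate the S3/Lambda slice of the stack above, a minimal Python Lambda handler sketch; it assumes an S3 put-event trigger, and the processing itself is illustrative:

```python
# Minimal AWS Lambda handler for an S3 put-event trigger; processing is illustrative.
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # S3 event notifications carry the bucket and object key in this shape.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Fetch the new object and report its size (stand-in for real processing).
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return {"statusCode": 200, "body": json.dumps({"key": key, "bytes": len(body)})}
```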
… including code quality, documentation, and security. Requirements: Strong Python programming skills: experience writing and debugging complex Python code, including libraries such as Pandas and PySpark and related data science tooling. Experience with Apache Spark and Databricks: deep understanding of Apache Spark principles and experience with Databricks notebooks, clusters, and …
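The Pandas-plus-PySpark pairing above often meets in pandas UDFs, which run vectorised Pandas code on Spark workers; a minimal sketch (requires pyarrow), with illustrative column names:

```python
# Combining Pandas and PySpark via a pandas UDF; requires pyarrow installed.
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf

spark = SparkSession.builder.getOrCreate()

@pandas_udf("double")
def zscore(v: pd.Series) -> pd.Series:
    # Vectorised normalisation, executed batch-wise on Spark workers.
    return (v - v.mean()) / v.std()

df = spark.createDataFrame([(1.0,), (2.0,), (3.0,)], ["value"])
df.withColumn("value_z", zscore("value")).show()
```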
… to Octopus offices across Europe and the US. Our Data Stack: SQL-based pipelines built with dbt on Databricks; analysis via Python Jupyter notebooks; PySpark in Databricks workflows for heavy lifting; Streamlit and Python for dashboarding; Airflow DAGs with Python for ETL, running on Kubernetes and Docker; Django for …
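For the Airflow piece of this stack, a minimal DAG sketch in Python; the task callables and schedule are hypothetical, and the syntax assumes Airflow 2.x:

```python
# Minimal Airflow 2.x DAG sketch; tasks and schedule are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")  # placeholder extract step

def transform():
    print("run transformations, e.g. trigger a dbt or Spark job")  # placeholder

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract runs before transform
```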