Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, dbt, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures, such as data mesh, lakehouse, data vault, and data warehouse. Our data engineers create pipelines that support …
London (City of London), South East England, United Kingdom
HCLTech
data retention and archival strategies in cloud environments. Strong understanding and practical implementation of Medallion Architecture (Bronze, Silver, Gold layers) for structured data processing. Advanced programming skills in Python, PySpark, and SQL, with the ability to build modular, efficient, and scalable data pipelines. Deep expertise in data modeling for both relational databases and data warehouses, including Star and Snowflake …
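Since several of these roles call out the Medallion Architecture, a minimal PySpark sketch of promoting data through Bronze, Silver, and Gold layers may be useful context. All paths, table names, and columns below are invented for illustration, and writing Delta format assumes the delta-spark package is configured; this is a sketch of the pattern, not any employer's actual pipeline.

# Illustrative only: paths and column names are assumptions, not from any listing above.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source data as-is, preserving every record.
bronze = spark.read.json("/lake/bronze/orders/")

# Silver: cleanse and conform: deduplicate, enforce types, drop invalid rows.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/orders/")

# Gold: aggregate into an analytics-ready, business-level table.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/customer_value/")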
related field (or equivalent experience) 3-5 years of experience in data engineering (healthcare/medical devices preferred but not required) Strong Python programming and data engineering skills (Pandas, PySpark, Dask) Proficiency with databases (SQL/NoSQL), ETL processes, and modern data frameworks (Apache Spark, Airflow, Kafka) Solid experience with cloud platforms (AWS, GCP, or Azure) and CI/…
Databricks, or equivalent) Proficiency in ELT/ETL development using tools such as Data Factory, Dataflow Gen2, Databricks Workflows, or similar orchestration frameworks Experience with Python and/or PySpark for data transformation, automation, or pipeline development Familiarity with cloud services and deployment automation (e.g., Azure, AWS, Terraform, CI/CD, Git) Ability to deliver clear, insightful, and performant …
client value and broaden relationships at senior levels with current and prospective clients. Our Tech Stack Cloud: Azure, sometimes GCP & AWS Data Platform: Databricks, Snowflake, BigQuery Data Engineering tools: PySpark, Polars, DuckDB, Malloy, SQL Infrastructure-as-code: Terraform, Pulumi Data Management and Orchestration: Airflow, dbt Databases and Data Warehouses: SQL Server, PostgreSQL, MongoDB, Qdrant, Pinecone GenAI: OpenAI APIs, HuggingFace …
delivering enterprise-grade data platforms on GCP, AWS, or Azure Deep expertise in data modeling, data warehousing, distributed systems, and modern data lake architectures Advanced proficiency in Python (including PySpark) and SQL, with experience building scalable data pipelines and analytics workflows Strong background in cloud-native data infrastructure (e.g., BigQuery, Redshift, Snowflake, Databricks) Demonstrated ability to lead teams, set …
West London, London, United Kingdom Hybrid / WFH Options
Young's Employment Services Ltd
on knowledge of tools such as Apache Spark, Kafka, Databricks, dbt or similar Experience building, defining, and owning data models, data lakes, and data warehouses Programming proficiency in Python, PySpark, Scala, or Java. Experience operating in a cloud-native environment (e.g. Fabric, AWS, GCP, or Azure). Excellent stakeholder management and communication skills. A strategic mindset, with a practical …
record in data integration, ETL processes, and optimising large-scale data systems Expertise in cloud-based data platforms (AWS, Azure, Google Cloud) and distributed storage solutions Proficiency in Python, PySpark, SQL, NoSQL, and data processing frameworks (Spark, Databricks) Expertise in ETL/ELT design and orchestration in Azure, as well as pipeline performance tuning & optimisation Competent in integrating relational …
an impact, get in touch ASAP as interviews are already taking place. Don't miss out! Key Skills: AWS, Data, Architecture, Data Engineering, Data Warehousing, Data Lakes, Databricks, Glue, PySpark, Athena, Python, SQL, Machine Learning, London …
enablement, and stakeholder engagement Ideal Skillset Strong SQL and data modelling (dimensional, 3NF) Experience with modern data platforms: Fabric, Databricks, Synapse, or similar Proficiency in Python and/or PySpark for transformation and orchestration Familiarity with orchestration tools like Data Factory, Dataflow Gen2, or Databricks Workflows Expertise in dashboard performance tuning, DirectQuery, and incremental refresh Cloud-native mindset: Azure …
Wiltshire, England, United Kingdom Hybrid / WFH Options
Data Science Talent
and DevOps and collaborate with the Software Delivery Manager and data engineering leadership. What you'll need Hands-on Databricks experience Strong Azure Cloud knowledge Proficient in SQL, Python, PySpark ETL & pipeline design (Matillion preferred, alternatives acceptable) Practical data modelling & pipeline architecture Terraform or Bicep for IaC About the company The company is one of the longest-established financial …
Newbury, Berkshire, England, United Kingdom Hybrid / WFH Options
Intuita
including Azure DevOps or GitHub Considerable experience designing and building operationally efficient pipelines, utilising core cloud components such as Azure Data Factory, BigQuery, Airflow, Google Cloud Composer, and PySpark Proven experience in modelling data through a medallion-based architecture, with curated dimensional models in the gold layer built for analytical use Strong understanding and/or use of …
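On the "curated dimensional models in the gold layer" point above, the usual pattern is to split conformed silver data into fact and dimension tables in a star schema. A minimal, hedged sketch follows; the silver table, columns, and surrogate-key approach are all assumptions made for the example.

# Hypothetical names throughout; one dimension plus one fact table in a star schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()
silver_orders = spark.read.format("delta").load("/lake/silver/orders/")

# Dimension: one row per customer, with a surrogate key for fact joins.
dim_customer = (
    silver_orders.select("customer_id", "customer_name", "region")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: one row per order, referencing the dimension by surrogate key.
fact_orders = (
    silver_orders
    .join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
    .select("order_id", "customer_sk", "order_ts", "amount")
)

dim_customer.write.format("delta").mode("overwrite").save("/lake/gold/dim_customer/")
fact_orders.write.format("delta").mode("overwrite").save("/lake/gold/fact_orders/")

A Snowflake schema differs only in normalising the dimensions further, e.g. splitting region out of dim_customer into its own table.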
Required Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
engagement. * Drive innovation through advanced analytics and research-based problem solving. To be successful you should have: 10 years of hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Previous experience in implementing best practices for data engineering, including data governance, data quality, and data security. Proficiency …
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
regular company events What you'll need: Solid experience in data engineering, management and analysis Strong experience with Azure Data Warehouse solutions and AWS Databricks platforms Exceptional Python/PySpark skills, plus additional languages for data processing Strong SQL with experience across both relational databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra) Hands-on knowledge of AWS S3 and …
Drive automation and CI/CD practices across the data platform Explore new technologies to improve data ingestion and self-service Essential Skills Azure Databricks: Expert in Spark (SQL, PySpark), Databricks Workflows Data Pipeline Design: Proven experience in scalable ETL/ELT development Azure Services: Data Lake, Blob Storage, Synapse Data Governance: Unity Catalog, access control, metadata management Performance …
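For the Unity Catalog and performance items above, the moving parts are a three-level namespace (catalog.schema.table), SQL-based grants, and Delta table maintenance. A hedged sketch, assuming a Unity Catalog-enabled Databricks workspace and invented catalog, schema, table, and group names:

# All names are hypothetical; intended to run in a Unity Catalog-enabled Databricks workspace.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Governance: create the namespace and grant read access to an analyst group.
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")
spark.sql("GRANT SELECT ON SCHEMA analytics.sales TO `data_analysts`")

# Performance: compact small files and co-locate rows on a common filter column.
spark.sql("OPTIMIZE analytics.sales.orders ZORDER BY (order_date)")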
on experience with the Azure Data Stack, critically ADF and Synapse (experience with Microsoft Fabric is a plus) Highly developed Python and data pipeline development knowledge, must include substantial PySpark experience Demonstrable DevOps and DataOps experience with an understanding of best practices for engineering, test and ongoing service delivery An understanding of Infrastructure as Code concepts (demonstrable Terraform experience …
For further details or to enquire about other roles, please contact Nick Mandella at Harnham. KEYWORDS Python, SQL, AWS, GCP, Azure, Cloud, Databricks, Docker, Kubernetes, CI/CD, Terraform, PySpark, Spark, Kafka, machine learning, statistics, Data Science, Data Scientist, Big Data, Artificial Intelligence, private equity, finance.