US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, dbt, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures, such as data mesh, lakehouse, data vault, and data warehouse. Our data …
technical subjects. You have experience with Cloud Providers: Proficiency in AWS, Google Cloud Platform, or Azure. What Would Make You Stand Out Experience with PySpark and structured streaming. Experience with orchestrating complex workflows using tools such as Airflow, Dagster or Prefect. Familiarity with infrastructure as code and with CI …
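For readers unfamiliar with the structured streaming mentioned above: it is Spark's API for incremental computation over unbounded data. A minimal PySpark sketch, using the built-in rate source as a stand-in for a real Kafka or file stream (all names here are illustrative, not taken from any listing):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Local session for illustration; on Databricks a `spark` session is provided.
spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# The built-in rate source emits (timestamp, value) rows and stands in
# here for a real streaming source.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Incrementally count events per one-minute event-time window.
counts = events.groupBy(F.window("timestamp", "1 minute")).count()

# Print each updated result to the console; a real job would write to a sink table.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination(30)  # run the demo for up to 30 seconds
query.stop()
```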
ownership for designing and building innovative data solutions. Work with a mix of cloud services (largely AWS and Snowflake), from a core of Python, PySpark and SQL, to bring together best-in-class technologies to meet our clients’ needs. Shape the development and rollout of cutting-edge analytics programmes …
Wilmslow, England, United Kingdom Hybrid / WFH Options
Mirai Talent
managing data infrastructure, and supporting data products across various cloud environments, primarily Azure. Key Responsibilities: Develop end-to-end data pipelines using Python, Databricks, PySpark, and SQL. Integrate data from various sources including APIs, Excel, CSV, JSON, and databases. Manage data lakes, warehouses, and lakehouses within Azure cloud environments. …
London, England, United Kingdom Hybrid / WFH Options
DATAPAO
it take to fit the bill? Technical Expertise 5+ years in Data Engineering , focusing on cloud platforms (AWS, Azure, GCP); Proven experience with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); Extensive ETL/ELT and data pipeline orchestration experience (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, Step Functions); Proficiency …
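Delta Lake, which several of these listings name alongside Databricks, is a transactional table format layered over Parquet. A hedged sketch of the basic write/read cycle, assuming a Delta-enabled Spark session such as Databricks provides (the `demo.cities` table name is hypothetical):

```python
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()  # preconfigured on Databricks

# Delta Lake adds ACID transactions and versioning on top of Parquet files.
df = spark.createDataFrame([Row(id=1, city="London"), Row(id=2, city="Leeds")])
df.write.format("delta").mode("overwrite").saveAsTable("demo.cities")  # hypothetical table

# Query with plain SQL; `VERSION AS OF 0` would time-travel to the first commit.
spark.sql("SELECT id, city FROM demo.cities ORDER BY id").show()
```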
London, England, United Kingdom Hybrid / WFH Options
Datapao
years of experience in Data Engineering , with a focus on cloud platforms (AWS, Azure, GCP); You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); You have extensive experience in ETL/ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF …
Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage …
discovery or translational research. Hands-on work with molecular structure data, computed properties, simulation outputs, or imaging datasets. Proficiency in Python (including Pandas or PySpark) and SQL, with exposure to ETL/orchestration tools such as Airflow or dbt. Strong knowledge of cloud-native services on AWS (e.g., S3 …
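On the Airflow exposure asked for above, a minimal, hypothetical DAG that schedules a small Pandas transform daily (the paths, column names, and `transform_molecules` task are invented for illustration):

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def transform_molecules():
    # Hypothetical input/output paths and columns, purely for illustration.
    df = pd.read_csv("/data/raw/molecules.csv")
    df["mw_rounded"] = df["molecular_weight"].round(1)
    df.to_parquet("/data/curated/molecules.parquet", index=False)

with DAG(
    dag_id="molecule_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # `schedule=` in newer Airflow releases
    catchup=False,
) as dag:
    PythonOperator(task_id="transform", python_callable=transform_molecules)
```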
and retrieval. Expertise in working on agile projects with automated testing/DevOps environments. Knowledge of big data technologies such as Apache Spark or PySpark. Hands-on experience with containerization technologies like Docker and Kubernetes (EKS). Ability to guide and coach teams on the approach to achieve goals aligned …
as Flask, Django, or FastAPI. Proficiency in Python 3.x and libraries like Pandas, NumPy, and Dask. Experience with data manipulation and processing frameworks (e.g., PySpark, Apache Beam). Strong knowledge of databases, including SQL and NoSQL (e.g., PostgreSQL, MongoDB). Familiarity with ETL processes and tools such as Airflow …
reaching this vision, instilling a culture of quality, reliability and innovation in the team. Develop and deploy automated ETL/ELT pipelines using Python, PySpark and SQL, to bring together best-in-class technologies to meet our clients’ needs. Design Data and Solution Architectures, supporting in assuring they’re …
surfacing issues. Qualifications We seek experienced Data Engineers passionate about data, eager to implement best practices in a dynamic environment. Proficiency in Spark/PySpark, Azure data technologies, Python or Scala, SQL. Experience with testing frameworks like pytest or ScalaTest. Knowledge of open table formats such as Delta, Iceberg …
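Regarding the pytest experience this listing asks for, a small sketch of how a PySpark transformation might be unit-tested locally (the `add_total` function and its columns are our own example, not from the listing):

```python
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_total(df):
    # Transformation under test: derive an order total from price and quantity.
    return df.withColumn("total", F.col("price") * F.col("qty"))

@pytest.fixture(scope="session")
def spark():
    # Small local session so the suite runs without a cluster.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_total_multiplies_price_by_qty(spark):
    df = spark.createDataFrame([(2.0, 3)], ["price", "qty"])
    row = add_total(df).collect()[0]
    assert row["total"] == pytest.approx(6.0)
```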
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured …
Leeds, England, United Kingdom Hybrid / WFH Options
Scott Logic
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including front-end technologies like JavaScript. You're a problem-solver, pragmatically exploring options …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Scott Logic
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including front-end technologies like JavaScript. You're a problem-solver, pragmatically exploring options …
Bristol, England, United Kingdom Hybrid / WFH Options
Scott Logic
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering, including front-end technologies like JavaScript. You're a problem-solver, pragmatically exploring options …
London, England, United Kingdom Hybrid / WFH Options
Scott Logic Ltd
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You've got a background in software engineering. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding …
Advisory Board meetings. What We’re Looking for in You Experience of working as a Data Engineer. Highly proficient in SQL, Python and Spark (PySpark) for developing and testing data engineering pipelines and products to ingest and transform structured and semi-structured data. Understanding of data modelling techniques and …
and support architectural decisions as a recognised Databricks expert. Essential Skills & Experience: Demonstrable expertise with Databricks and Apache Spark in production environments. Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP). In-depth understanding of Lakehouse concepts, medallion architecture, and modern …
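On the medallion architecture named in this final listing: data typically lands raw in a bronze layer, is cleaned and validated into silver, and is aggregated into gold. A hedged PySpark sketch of the bronze-to-silver step, with hypothetical paths and columns:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw files landed as-is, kept for replayability (hypothetical path).
bronze = spark.read.json("/lake/bronze/orders/")

# Silver: deduplicated, validated, typed records ready for modelling.
silver = (
    bronze
    .dropDuplicates(["order_id"])                     # hypothetical key column
    .filter(F.col("amount") > 0)                      # drop invalid amounts
    .withColumn("order_date", F.to_date("order_ts"))  # cast to a proper date
)
silver.write.mode("overwrite").parquet("/lake/silver/orders/")

# A gold layer of business-level aggregates would then be built from silver.
```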