ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, dbt, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures, such as data mesh, lakehouse, data vault, and data warehouses. Our data …
Strong skills in data modeling, database design, and query optimization for relational and non-relational databases; hands-on experience with Apache Airflow, Databricks, and PySpark for orchestration and scalable data processing; familiarity with CI/CD pipelines and automation tools; proven ability to lead complex technical projects and make …
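As an illustration of the Airflow-plus-Databricks orchestration pattern this listing asks for, here is a minimal sketch, assuming an Airflow 2.4+ deployment; the DAG id, schedule, and callable are hypothetical, and a production pipeline would typically submit the PySpark job through a Databricks or Spark operator rather than a plain PythonOperator.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_spark_transform():
    # Placeholder: in a real deployment this would submit a PySpark or
    # Databricks job (e.g. via a Databricks Jobs operator).
    print("transform submitted")


with DAG(
    dag_id="daily_sales_transform",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="run_transform", python_callable=run_spark_transform)
```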
Hands-on experience as a Data Engineer working with Databricks and Apache Spark; strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL); experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables; solid understanding of data warehousing principles …
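For readers unfamiliar with Delta Live Tables, one of the Databricks components named above, a minimal sketch follows. It only runs inside a Databricks DLT pipeline, where spark is provided by the runtime; the source path and table names are hypothetical.

```python
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw orders ingested from cloud storage")
def orders_raw():
    # spark is injected by the Databricks runtime; the landing path is hypothetical.
    return spark.read.format("json").load("/mnt/landing/orders/")


@dlt.table(comment="Orders with basic cleansing applied")
def orders_clean():
    return (
        dlt.read("orders_raw")
        .where(F.col("order_id").isNotNull())
        .withColumn("ingested_at", F.current_timestamp())
    )
```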
Bachelor's degree in Software Engineering, Computer Science, Business Analytics, or a related field. 3+ years of hands-on experience in Data Engineering using Python (PySpark preferred), Scala, or Java, along with strong T-SQL proficiency. Demonstrable experience with Azure Databricks, Apache Airflow, Delta Lake, Terraform, and Azure DevOps pipelines. …
Proven experience of ETL/ELT, including lakehouse, pipeline design, and batch/stream processing; strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, and Spark SQL; good working knowledge of data warehouse and data mart architectures; good experience in data governance, including Unity Catalog, metadata management, and data lineage …
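To make the batch/stream distinction in this listing concrete, here is a minimal PySpark sketch of a batch read next to a Structured Streaming read; it assumes a running SparkSession, and all paths are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-vs-stream").getOrCreate()

# Batch: a one-off read of historical files.
batch_df = spark.read.parquet("/data/events/history/")  # hypothetical path

# Stream: continuously pick up new JSON files landing in a folder.
stream_df = (
    spark.readStream.schema(batch_df.schema)
    .json("/data/events/incoming/")                     # hypothetical path
    .withColumn("processed_at", F.current_timestamp())
)

query = (
    stream_df.writeStream.format("parquet")
    .option("checkpointLocation", "/chk/events/")       # hypothetical path
    .option("path", "/data/events/bronze/")
    .start()
)
```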
Wilmslow, England, United Kingdom Hybrid / WFH Options
Mirai Talent
… managing data infrastructure, and supporting data products across various cloud environments, primarily Azure. Key Responsibilities: Develop end-to-end data pipelines using Python, Databricks, PySpark, and SQL. Integrate data from various sources, including APIs, Excel, CSV, JSON, and databases. Manage data lakes, warehouses, and lakehouses within Azure cloud environments. …
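As a sketch of the multi-source ingestion described here, the following PySpark snippet reads CSV and JSON files and lands them as Delta tables; it assumes a Spark environment with Delta Lake available (e.g. Databricks), and all paths and table names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest").getOrCreate()

# Two of the source types named above; paths are hypothetical.
csv_df = spark.read.option("header", True).csv("/landing/customers.csv")
json_df = spark.read.json("/landing/orders.json")

# Land both as Delta tables for downstream lakehouse processing
# (assumes the bronze schema/database already exists).
csv_df.write.format("delta").mode("overwrite").saveAsTable("bronze.customers")
json_df.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")
```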
… storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured …
… on different GCP services (e.g. Dataflow, Cloud Functions) or Azure services. Develop and maintain scalable data pipelines using GCP or Microsoft Azure services, leveraging PySpark, Python, and Databricks. The platform development is based on Python and Terraform. Furthermore, you will work with SQL-related technologies like Google BigQuery …
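For the BigQuery side of this stack, a minimal sketch with the google-cloud-bigquery client library might look like the following; the project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project id

query = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM `my-analytics-project.sales.transactions`
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
"""

# Run the query and iterate over the result rows.
for row in client.query(query).result():
    print(row.customer_id, row.total_spend)
```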
… and support architectural decisions as a recognised Databricks expert. Essential Skills & Experience: Demonstrable expertise with Databricks and Apache Spark in production environments. Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP). In-depth understanding of lakehouse concepts, medallion architecture, and modern …
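To illustrate the medallion pattern this listing references, here is a minimal bronze-to-silver sketch; it assumes a Databricks notebook (where spark is predefined) with Delta tables, and the table and column names are hypothetical.

```python
from pyspark.sql import functions as F

# Read the raw (bronze) layer; table name is hypothetical.
bronze = spark.read.table("bronze.orders")

# Silver layer: deduplicated, validated, and typed.
silver = (
    bronze.dropDuplicates(["order_id"])
    .where(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```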
Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
… (CI/CD) automation, rigorous code reviews, documentation as communication. Preferred Qualifications: Familiarity with data manipulation and experience with Python libraries like Flask, FastAPI, Pandas, PySpark, and PyTorch, to name a few. Proficiency in statistics and/or machine learning libraries like NumPy, matplotlib, seaborn, scikit-learn, etc. Experience in building …
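As a sketch of the statistics/ML tooling named in this listing, here is a small, self-contained scikit-learn example on synthetic data; every name in it is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))             # 500 samples, 4 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression().fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```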
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
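The "Delta Lake optimisation" this listing mentions usually means routine table maintenance. A minimal sketch, assuming a Databricks environment where spark is predefined, with hypothetical table and column names:

```python
# Compact small files and co-locate rows on a common filter column,
# then clear files older than the 7-day (168-hour) retention window.
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")
spark.sql("VACUUM silver.orders RETAIN 168 HOURS")
```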
… to Octopus offices across Europe and the US. Our Data Stack: SQL-based pipelines built with dbt on Databricks; analysis via Python Jupyter notebooks; PySpark in Databricks workflows for heavy lifting; Streamlit and Python for dashboarding; Airflow DAGs with Python for ETL, running on Kubernetes and Docker; Django for …
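For the dashboarding layer of this stack, a minimal Streamlit sketch follows; the data is fabricated so the example is self-contained, where a real app would query the dbt/Databricks models.

```python
import pandas as pd
import streamlit as st

st.title("Daily orders")

# Fabricated frame; a real dashboard would read from the warehouse.
df = pd.DataFrame(
    {"day": pd.date_range("2024-01-01", periods=7),
     "orders": [120, 135, 128, 150, 142, 160, 155]}
)

st.line_chart(df.set_index("day"))
st.metric("Total orders", int(df["orders"].sum()))
```

Launched with `streamlit run app.py`, this renders a line chart and a headline metric in the browser.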
London (City of London), South East England, United Kingdom
Zodiac Maritime
Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL (advanced query optimization). Experience building scalable ETL pipelines and data transformations. Knowledge of data quality frameworks and monitoring. Experience with Git …
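As a sketch of the "data quality frameworks and monitoring" requirement, here are lightweight PySpark assertions of the kind a pipeline might run; it assumes a Databricks notebook where spark is predefined, and the table name and thresholds are hypothetical.

```python
from pyspark.sql import functions as F

df = spark.read.table("silver.orders")   # hypothetical table

total = df.count()
null_ids = df.where(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail fast if basic contracts are violated.
assert null_ids == 0, f"{null_ids} rows missing order_id"
assert dupes / max(total, 1) < 0.01, f"duplicate rate too high: {dupes}/{total}"
```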
Data Engineer - Leading Energy Company - London (Tech Stack: Databricks, Python, PySpark, Power BI, AWS QuickSight, AWS, T-SQL, ETL, Agile Methodologies) Company Overview: Join a dynamic team at a leading player in the energy sector, committed to innovation and sustainable solutions. Our client is seeking a talented Data Engineer …