Europe, the UK and the US. ABOUT THE ROLE Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, dbt, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures, such as data mesh, lakehouse, data vault, and data warehouses. Our data engineers create pipelines that support …
Good experience with ETL: SSIS, SSRS, T-SQL (on-prem/cloud). Strong proficiency in SQL and Python for handling complex data problems. Hands-on experience with Apache Spark (PySpark or Spark SQL). Experience with the Azure data stack. Knowledge of workflow orchestration tools like Apache Airflow. Experience with containerisation technologies like Docker. Proficiency in dimensional modelling techniques. Experience …
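As an editorial illustration of the PySpark and dimensional-modelling skills this listing names, here is a minimal sketch of a star-schema-style transform. All paths, table names, and columns are hypothetical placeholders, not taken from any listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Hypothetical landing-zone inputs.
orders = spark.read.parquet("/landing/orders")
customers = spark.read.parquet("/landing/customers")

# Conform a simple fact table against a customer dimension (star-schema style):
# the fact row keeps only keys and measures; descriptive attributes live in dims.
fact_orders = (
    orders
    .join(customers.select("customer_id", "customer_key"), on="customer_id")
    .withColumn("order_date_key", F.date_format("order_ts", "yyyyMMdd").cast("int"))
    .select("order_id", "customer_key", "order_date_key", "amount")
)

fact_orders.write.mode("overwrite").parquet("/warehouse/fact_orders")
```

A real pipeline of this kind would typically add surrogate-key generation and slowly changing dimension handling; the integer date key is a common dimensional-modelling convention.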
data engineering and reporting, including storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured, maintainable systems. Strong communication skills …
Champion clean code, data lifecycle optimisation, and software engineering best practices. What We're Looking For: Proven hands-on experience with the Databricks platform and orchestration. Strong skills in Python, PySpark, and SQL, with knowledge of distributed data systems. Expertise in developing full lifecycle data pipelines across ingestion, transformation, and serving layers. Experience with data lakehouse architecture, schema design, and …
across the team. Skills & Experience: Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse. Strong understanding of Lakehouse architecture and medallion design patterns. Proficient in Python, PySpark, and SQL, with advanced query optimisation skills. Proven experience building scalable ETL pipelines and managing data transformations. Familiarity with data quality frameworks and monitoring tools. Experience working with Git …
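For readers unfamiliar with the medallion pattern this listing mentions, a minimal bronze-to-silver hop on Databricks/Delta Lake might look like the sketch below. Table names and the dedupe key are hypothetical, and `spark` is assumed to be the session Databricks provides in a notebook or job.

```python
from pyspark.sql import functions as F

bronze = spark.read.table("bronze.raw_events")  # raw, append-only landing table

silver = (
    bronze
    .dropDuplicates(["event_id"])                        # one row per event
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # cast string to timestamp
    .filter(F.col("event_id").isNotNull())               # basic quality gate
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")
```

The idea of the pattern is that bronze stays raw and replayable, while silver applies typing, deduplication, and quality rules before gold-layer aggregates are built.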
exposure to Natural Gas and Power markets, balancing mechanisms, and regulatory frameworks (e.g., REMIT, EMIR). Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as …
Databricks, Azure Data Lake Storage, Delta Lake, Azure SQL, Purview and APIM. Proficiency in developing CI/CD data pipelines and strong programming skills in Python, SQL, Bash, and PySpark for automation. Strong aptitude for data pipeline monitoring and an understanding of data security practices such as RBAC and encryption. Implemented data and pipeline observability dashboards, ensuring high data …
both greenfield initiatives and enhancing high-traffic financial applications. Key Skills & Experience: Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and Unity Catalog. Advanced Python/PySpark and big data pipeline development. Familiar with event streaming tools (Kafka, Azure Event Hubs). Solid understanding of SQL, data modelling, and lakehouse architecture. Experience deploying via CI/CD …
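To illustrate the Spark Structured Streaming skills this listing asks for, here is a minimal Kafka-to-Delta sketch. The broker address, topic, and paths are hypothetical placeholders, and `spark` is assumed to be an existing session.

```python
from pyspark.sql import functions as F

# Read a Kafka topic as an unbounded stream; broker and topic are placeholders.
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "trades")
    .load()
)

# Kafka delivers key/value as binary, so cast before any downstream parsing.
decoded = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

# Append to a Delta table; the checkpoint enables recovery after failure.
query = (
    decoded.writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/trades")
    .outputMode("append")
    .start("/delta/trades")
)
```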
Required Essential Skills & Experience: 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks. Strong proficiency in Python and Spark (PySpark) or Scala. Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns. Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and …
or senior technical role. Proven experience in energy trading environments, particularly Natural Gas and Power markets. Expert in Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask). Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster). Proficient in cloud platforms (preferably Azure) and services such as …
business requirements into data solutions. Monitor and improve pipeline performance and reliability. Maintain documentation of systems, workflows, and configurations. Tech environment: Python, SQL and PL/SQL (MS SQL + Oracle), PySpark; Apache Airflow (MWAA), AWS Glue, Athena; AWS services (CDK, S3, data lake architectures); Git, JIRA. You should apply if you have: Strong Python and SQL skills. Proven experience designing …
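As an illustration of the Airflow (MWAA) plus Glue stack this listing describes, a minimal DAG that triggers a Glue job might look like the sketch below. The DAG id, schedule, and Glue job name are hypothetical, and the `schedule` argument assumes Airflow 2.4+ as shipped on recent MWAA versions.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_ingest",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    # Triggers a Glue job that is assumed to already exist in the account.
    run_glue = GlueJobOperator(
        task_id="transform_orders",
        job_name="orders-transform",
    )
```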
Collaborate with cross-functional teams to translate business needs into technical solutions. Core Skills: Cloud & Platforms: Azure, AWS, SAP. Data Engineering: ELT, Data Modeling, Integration, Processing. Tech Stack: Databricks (PySpark, Unity Catalog, DLT, Streaming), ADF, SQL, Python, Qlik. DevOps: GitHub Actions, Azure DevOps, CI/CD pipelines. …
related field with over 15 years of experience. Strong background in System Integration, Application Development, or Data-Warehouse projects across enterprise technologies. Experience with object-oriented languages (e.g., Python, PySpark) and frameworks. Expertise in relational and dimensional modeling, including big data technologies. Proficiency in Microsoft Azure components like Azure Data Factory, Data Lake, SQL, Databricks, HDInsight, and ML Service. …
joining data from various sources. About the role: The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and Dataproc, with strong coding skills in Python, PySpark, and SQL. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address …
experience with Azure services such as Data Factory, Databricks, Synapse (DWH), Azure Functions, and other data analytics tools, including streaming. Experience with Airflow and Kubernetes. Programming skills in Python (PySpark) and scripting languages like Bash. Knowledge of Git, CI/CD operations, and Docker. Basic Power BI knowledge is a plus. Experience deploying cloud infrastructure is desirable. Understanding of Infrastructure …
like Retrieval-Augmented Generation (RAG) and natural language analytics. What we are looking for in our candidate Essential Proficiency in Python and SQL, with experience in frameworks like Pandas, PySpark, and NumPy for large-scale data processing. Expertise in debugging and optimising distributed systems with a focus on scalability and reliability. Proven ability to design and implement scalable, fault …
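For the Pandas-style large-scale processing this listing mentions, one common technique is chunked reading so that a file larger than memory can still be aggregated. A minimal sketch, assuming a hypothetical `events.csv` with a `user_id` column:

```python
import pandas as pd

totals: dict[str, int] = {}

# Stream the file in 100k-row chunks so memory stays bounded.
for chunk in pd.read_csv("events.csv", chunksize=100_000):
    for user, count in chunk.groupby("user_id").size().items():
        totals[user] = totals.get(user, 0) + count

result = pd.Series(totals, name="event_count").sort_values(ascending=False)
print(result.head())
```

At genuinely large scale the same aggregation would usually move to PySpark, but the incremental pattern is the same.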
The ideal Data Products Engineer will have: A few years of professional software engineering experience, ideally in B2B or data product environments. Deep experience with Python, including libraries like Polars, PySpark, and frameworks such as FastAPI or Fastify, and up-to-date with modern Python best practices, including tools such as Ruff, uv, and pyenv. Experience working on data-heavy …
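To illustrate the Polars-plus-FastAPI combination this listing names, a data-product endpoint might look like the minimal sketch below. The dataset, schema, and route are hypothetical placeholders; a real service would load from object storage or a warehouse rather than an inline frame.

```python
import polars as pl
from fastapi import FastAPI

app = FastAPI()

# Hypothetical in-memory dataset standing in for a real storage-backed table.
df = pl.DataFrame({"product_id": [1, 2], "name": ["alpha", "beta"]})

@app.get("/products/{product_id}")
def get_product(product_id: int) -> dict:
    row = df.filter(pl.col("product_id") == product_id)
    if row.is_empty():
        return {"error": "not found"}
    return row.to_dicts()[0]
```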
Bedford, Bedfordshire, England, United Kingdom (Hybrid/WFH options)
Reed Talent Solutions
data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and Azure Data Factory. Understanding of data extraction from vendor REST APIs. Spark/PySpark or Python skills are a bonus, as is a willingness to develop these skills. Experience with monitoring and failure recovery in data pipelines. Excellent problem-solving skills and attention to detail. …
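As an illustration of the vendor REST API extraction this listing mentions, a minimal ingestion sketch is shown below. The URL, auth header, and paging scheme are hypothetical; real vendor APIs vary, so treat this as a shape rather than a recipe.

```python
import json
import requests

def fetch_pages(base_url: str, token: str):
    """Yield one page of records at a time from a paginated endpoint."""
    page = 1
    while True:
        resp = requests.get(
            base_url,
            params={"page": page},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        if not payload.get("items"):
            break
        yield payload["items"]
        page += 1

for i, items in enumerate(fetch_pages("https://api.vendor.example/v1/records", "TOKEN")):
    # Land each page as-is; transformation happens later in the pipeline.
    with open(f"landing/records_page_{i}.json", "w") as f:
        json.dump(items, f)
```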
AWS Data Engineer, London, UK (Permanent). Strong experience in Python, PySpark, AWS S3, AWS Glue, Databricks, Amazon Redshift, DynamoDB, CI/CD, and Terraform. A total of 7+ years of experience in data engineering is required. Design, develop, and optimize ETL pipelines using AWS Glue, Amazon EMR, and Kinesis for real-time and batch data processing. Implement data transformation, streaming …
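For the Glue-based ETL this listing describes, a minimal PySpark job script might look like the sketch below. Bucket names and columns are hypothetical placeholders; the Glue boilerplate (job init and commit) follows the standard awsglue pattern.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve arguments and initialise the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical S3 locations: read raw JSON, filter, stamp, write Parquet.
df = spark.read.json("s3://raw-bucket/orders/")
clean = df.filter(F.col("amount") > 0).withColumn("ingested_at", F.current_timestamp())
clean.write.mode("append").parquet("s3://curated-bucket/orders/")

job.commit()
```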
processes and provide training on data tools and workflows. Skills and experience • Experience in building ELT/ETL pipelines and managing data workflows. • Proficiency in programming languages such as PySpark, Python, SQL, or Scala. • Solid understanding of data modelling and relational database concepts. • Knowledge of GDPR and UK data protection regulations. Preferred Skills: • Experience with Power BI for data …
across varied solutions. - Extensive experience of using the Databricks platform for developing and deploying data solutions/data products (including ingestion, transformation and modelling) with high proficiency in Python, PySpark and SQL. - Leadership experience in other facets necessary for solution development such as testing, the wider scope of quality assurance, CI/CD etc. - Experience in related areas of …