- …of data engineering experience, with 2+ years in a senior role
- Deep expertise with AWS services (S3, Glue, Lambda, SageMaker, Redshift, etc.)
- Strong Python and SQL skills; experience with PySpark a bonus (a short PySpark sketch follows this list)
- Familiarity with containerization (Docker), orchestration (Airflow, Step Functions), and infrastructure as code (Terraform/CDK)
- Solid understanding of the machine learning model lifecycle and best practices for deployment
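As a hedged illustration of the Python/PySpark work this listing asks for, the sketch below reads raw CSV events from S3, deduplicates them, and writes date-partitioned Parquet. The bucket names and columns are invented for the example, and reading s3:// paths assumes the hadoop-aws connector is available on the classpath.

```python
# Minimal PySpark job sketch: CSV in from S3, partitioned Parquet out.
# Bucket names, paths, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Read raw CSV events from a landing bucket (schema inferred for brevity).
raw = spark.read.option("header", True).csv("s3://example-landing/events/")

# Light cleanup: drop exact duplicates, derive a date column for partitioning.
clean = (
    raw.dropDuplicates()
       .withColumn("event_date", F.to_date("event_timestamp"))
)

# Write analytics-ready Parquet, partitioned by date.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated/events/"
)
```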
- …Azure Databricks, handling ingestion from various data sources, performing complex transformations, and publishing data to Azure Data Lake or other storage services.
- Write efficient and standardized Spark SQL and PySpark code for data transformations, ensuring data integrity and accuracy across the pipeline (a sketch follows this listing).
- Automate pipeline orchestration using Databricks Workflows or integration with external tools (e.g., Apache Airflow, Azure Data Factory…)
- …in designing and implementing scalable ETL/ELT data pipelines in Azure Databricks, transforming raw data into usable datasets for analysis.
- Azure Databricks Proficiency: Strong knowledge of Spark (SQL, PySpark) for data transformation and processing within Databricks, along with experience building workflows and automation using Databricks Workflows.
- Azure Data Services: Hands-on experience with Azure services like Azure Data…
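To make the ingest-transform-publish flow concrete, here is a minimal sketch of a Databricks-style PySpark step: read raw JSON, enforce types and a basic integrity filter, and publish a Delta table to Azure Data Lake. All storage paths and column names are assumptions, and the Delta format is taken as available (it is built in on Databricks).

```python
# Sketch of a Databricks pipeline step: raw JSON in, Delta table out.
# Storage paths and columns are hypothetical; on Databricks the `spark`
# session is provided, so the builder line is only needed locally.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("databricks-etl-sketch").getOrCreate()

raw = spark.read.json("abfss://raw@exampleaccount.dfs.core.windows.net/orders/")

orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())   # basic integrity check
)

# Publish as a Delta table for downstream consumers.
(orders.write.format("delta")
       .mode("overwrite")
       .save("abfss://curated@exampleaccount.dfs.core.windows.net/orders/"))
```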
Bracknell, Berkshire, South East, United Kingdom Hybrid / WFH Options
Halian Technology Limited
- …business intelligence, reporting, and regulatory needs
- Lead the integration and optimisation of large-scale data platforms using Azure Synapse and Databricks
- Build and maintain robust data pipelines using Python (PySpark) and SQL
- Collaborate with data engineers, analysts, and stakeholders to ensure data quality, governance, and security (a data-quality sketch follows this listing)
- Ensure all solutions adhere to financial regulations and internal compliance standards

Key Skills & Experience:
- Proven experience as a Data Architect within the financial services sector
- Hands-on expertise with Azure Synapse Analytics and Databricks
- Strong programming and data engineering skills in Python (PySpark) and SQL
- Solid understanding of financial data and regulatory compliance requirements
- Excellent stakeholder communication and documentation skills
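To ground the data-quality responsibility, below is a small sketch of the kind of validation gate a PySpark pipeline might run before publishing a dataset for reporting. The table path, key columns, and the 1% null threshold are all assumptions, not a prescribed standard.

```python
# Sketch of a simple data-quality gate in PySpark. The dataset path,
# key columns, and the 1% null threshold are hypothetical choices.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate-sketch").getOrCreate()

df = spark.read.parquet("/mnt/curated/transactions/")

total = df.count()
null_keys = df.filter(F.col("account_id").isNull()).count()
dupes = total - df.dropDuplicates(["transaction_id"]).count()

# Fail fast so bad data never reaches reporting consumers.
if total == 0 or null_keys / total > 0.01 or dupes > 0:
    raise ValueError(
        f"DQ gate failed: rows={total}, null_keys={null_keys}, dupes={dupes}"
    )
```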
East London, London, United Kingdom Hybrid / WFH Options
McGregor Boyall Associates Limited
- Master's/PhD in Computer Science, Data Science, Mathematics, or a related field.
- 5+ years of experience in ML modeling, ranking, or recommendation systems.
- Proficiency in Python, SQL, Spark, PySpark, TensorFlow.
- Strong knowledge of LLM algorithms and training techniques.
- Experience deploying models in production environments.

Nice to Have:
- Experience in GenAI/LLMs.
- Familiarity with distributed computing…
- …Work with the team to support ETL processes

What you'll need to succeed
- Seasoned knowledge of the Azure Databricks platform and associated functionalities
- Strong Python programming knowledge, ideally PySpark
- A logical and analytical approach to problem-solving
- Awareness of the modern data stack and associated methodologies

What you'll get in return
A rewarding contract providing exposure to…
Employment Type: Contract
Rate: £500 to £650 per day
- …of ETL Processing/Data Warehousing testing, including Databricks and Data Factory.
- Hands-on experience with SQL or Azure SQL.
- Experience using automated testing on Python frameworks (Pytest/PySpark); a test sketch follows this list.
- Experience with Specflow and other frameworks.

If you are interested in this Data Tester role, please apply with your most recent CV; alternatively, reach out to me, Jordan …
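As an illustration of the Pytest/PySpark testing the listing mentions, the sketch below spins up a local SparkSession fixture, runs a transformation, and asserts on the result. The add_vat function is a hypothetical stand-in for real pipeline code.

```python
# Sketch of a PySpark unit test with pytest. The add_vat transformation
# is a hypothetical function standing in for real pipeline code.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_vat(df, rate=0.20):
    """Hypothetical transformation under test: add a VAT-inclusive price."""
    return df.withColumn("gross", F.round(F.col("net") * (1 + rate), 2))


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps tests fast and self-contained.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_vat_applies_standard_rate(spark):
    df = spark.createDataFrame([(100.0,), (9.99,)], ["net"])
    result = {r["net"]: r["gross"] for r in add_vat(df).collect()}
    assert result[100.0] == 120.00
    assert result[9.99] == 11.99
```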
…Fabric - UK role (WFH) - 6 months initial contract - Top rates - Outside IR35

A major consultancy urgently requires a Data Engineer with experience of MS Fabric (tech stack: Microsoft Fabric, PySpark/SparkR, and GitHub) for an initial six-month contract (WFH and Outside IR35) who is passionate about building new capabilities from the ground up and wants to help…
…and end users to define, test, and deliver technical and functional requirements.
- You will need experience implementing Master Data Management programmes; this is mandatory for the role.
- SQL and PySpark knowledge (a profiling and cleansing sketch follows this list).
- Azure Databricks experience.
- Experience with data querying and data profiling.
- Experience working with large data sets, especially analysing and cleansing said data.
- Strong communication skills.
- Experience working within…
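To ground the profiling and cleansing requirement, here is a minimal PySpark sketch: profile per-column null counts on a customer dataset, then standardise the fields an MDM matching step would typically key on. The dataset path and columns are assumptions.

```python
# Sketch of PySpark profiling and cleansing ahead of an MDM match step.
# The customer dataset, its path, and its columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("mdm-profiling-sketch").getOrCreate()

customers = spark.read.parquet("/mnt/raw/customers/")

# Profile: null count per column, computed in a single pass.
null_profile = customers.select([
    F.sum(F.col(c).isNull().cast("int")).alias(c) for c in customers.columns
])
null_profile.show()

# Cleanse: trim and normalise the fields used for matching,
# then collapse exact duplicates on the cleansed keys.
cleansed = (
    customers.withColumn("email", F.lower(F.trim("email")))
             .withColumn("postcode", F.upper(F.regexp_replace("postcode", r"\s+", "")))
             .dropDuplicates(["email", "postcode"])
)
```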