Collaborate with cross-functional teams to translate business needs into technical solutions.
Core Skills
Cloud & Platforms: Azure, AWS, SAP
Data Engineering: ELT, Data Modeling, Integration, Processing
Tech Stack: Databricks (PySpark, Unity Catalog, DLT, Streaming), ADF, SQL, Python, Qlik
DevOps: GitHub Actions, Azure DevOps, CI/CD pipelines
Reading, Berkshire, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
tools to build robust data pipelines and applications that process complex datasets from multiple operational systems.
Key Responsibilities:
Build and maintain AWS-based ETL/ELT pipelines using S3, Glue (PySpark/Python), Lambda, Athena, Redshift, and Step Functions
Develop backend applications to automate and support compliance reporting
Process and validate complex data formats including nested JSON, XML, and CSV
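The last responsibility — processing and validating nested JSON — can be sketched in plain Python with the standard library. The field names and required-field schema below are hypothetical, for illustration only; in the pipeline described above this logic would typically live inside a Glue PySpark job rather than standalone code:

```python
import json

# Hypothetical compliance schema: fields every record must carry.
REQUIRED_FIELDS = {"id", "timestamp"}

def flatten(record, prefix=""):
    """Flatten a nested JSON object into dot-separated column names,
    the usual first step before loading into a tabular store."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

def validate(record):
    """Check that all required top-level fields are present."""
    return REQUIRED_FIELDS <= record.keys()

raw = '{"id": 1, "timestamp": "2024-01-01T00:00:00Z", "meta": {"source": "erp"}}'
record = json.loads(raw)
if validate(record):
    row = flatten(record)  # {"id": 1, "timestamp": ..., "meta.source": "erp"}
```

In a Glue job the same flatten-and-validate step is usually expressed with DataFrame operations over a whole S3 partition rather than record by record.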
London, South East, England, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
IR35 Start Date: ASAP
Key Skills Required
Azure Data Factory
Azure Functions
SQL
Python
Desirables:
Experience with Copilot or Copilot Studio
Experience designing, developing, and deploying AI solutions
Familiarity with PySpark, PyTorch, or other ML frameworks
Exposure to M365, D365, and low-code/no-code Azure AI tools
If interested, please send a copy of your most recent CV
will be on enabling analytics and machine learning at scale, using best-in-class tools across the Azure stack.
Your responsibilities will include:
Designing and deploying ETL pipelines using PySpark and Delta Lake on Databricks.
Supporting the deployment and operationalisation of ML models with MLflow and Databricks Workflows.
Building out reusable data products and feature stores for data science … and cloud-native tools.
Collaborating with data scientists, analysts, and product teams to improve data usability and performance.
KEY SKILLS AND REQUIREMENTS
Advanced experience with Databricks, Delta Lake, and PySpark.
Strong background in data engineering and distributed processing.
Hands-on knowledge of Azure Data Lake, Data Factory, or similar orchestration tools.
Experience with ML model deployment, preferably …