DevOps best practices. Collaborate with BAs on source-to-target mapping and build new data model components. Participate in Agile ceremonies (stand-ups, backlog refinement, etc.).

Essential Skills:
- PySpark and SparkSQL
- Strong knowledge of relational database modelling
- Experience designing and implementing in Databricks (DBX notebooks, Delta Lake)
- Azure platform experience
- ADF or Synapse pipelines for orchestration
- Python
technical stakeholders. Business acumen with a focus on delivering data-driven value. Strong understanding of risk, controls, and compliance in data management.

Technical Skills:
- Hands-on experience with Python, PySpark, and SQL
- Experience with AWS (preferred)
- Knowledge of data warehousing (DW) concepts and ETL processes
- Familiarity with DevOps principles and secure coding practices

Experience: Proven track record
setting up CI/CD pipelines (Azure DevOps, GitHub Actions, or GitLab CI targeting Azure)

Good understanding of data in software applications, including experience with:
- Data libraries (Pandas, NumPy, PySpark)
- Data-driven solutions with MongoDB
- Building features that rely on performance monitoring, analytics, or metrics

Start Date: ASAP
Length: Initial 6 months
Location: UK Remote
IR35 Status: Outside