position for a data engineering (data solutions) team in a large and diverse organisation, with involvement in the full development lifecycle across varied solutions.
- Extensive experience using the Databricks platform to develop and deploy data solutions/data products (including ingestion, transformation and modelling), with high proficiency in Python, PySpark and SQL. A small illustrative sketch of this kind of work follows below.
- Leadership experience in other facets necessary for …
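As a hedged illustration of the ingestion, transformation and modelling work this role describes, the minimal PySpark sketch below shows the shape of a Databricks-style pipeline; the landing path, column names and target table are hypothetical assumptions, not taken from the ad.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("ingest-transform-model").getOrCreate()

    # Ingest: read raw files from a (hypothetical) landing zone
    raw = spark.read.option("header", True).csv("/mnt/landing/orders/")

    # Transform: basic de-duplication, typing and cleansing
    orders = (
        raw.dropDuplicates(["order_id"])
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull())
    )

    # Model: persist as a governed table for downstream SQL consumers
    orders.write.mode("overwrite").saveAsTable("analytics.orders")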
deploy software to production and monitor releases using appropriate tools. Strong proficiency in designing applications and integrating services such as Azure Function Apps, Azure Service Bus, Azure Logic Apps, Databricks, Cosmos DB, and SQL Server relational databases. Hands-on experience with containerisation and orchestration using Docker and Azure Kubernetes Service (AKS), ensuring efficient and scalable application deployment.
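As one small, hedged sketch of the service integration this ad mentions, the snippet below shows an Azure Function consuming Azure Service Bus messages via the Python v2 programming model; the queue name and the "ServiceBusConnection" app setting are placeholder assumptions.

    import logging
    import azure.functions as func

    app = func.FunctionApp()

    # Triggered once per message on a (hypothetical) "orders" queue;
    # "ServiceBusConnection" names an app setting holding the connection string.
    @app.service_bus_queue_trigger(arg_name="msg",
                                   queue_name="orders",
                                   connection="ServiceBusConnection")
    def process_order(msg: func.ServiceBusMessage):
        body = msg.get_body().decode("utf-8")
        logging.info("Processing order message: %s", body)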
to a wide variety of audiences. For candidates applying for the Senior Consultant role, we additionally require:
- Working experience with at least one cloud platform (AWS, Azure, GCP, Snowflake, Databricks, etc.) and exposure to cloud architecture principles.
- Demonstrated experience in people management, product ownership or workstream management.
- Experience supporting and participating in the commercial cycle, including defining project scope and …
banking client in various data transformation activities.
Key Responsibilities:
- Collaborating with cross-functional teams to understand data requirements and design efficient, scalable and reliable ETL processes using Python and Databricks.
- Developing and deploying ETL jobs that extract data from various sources, transforming it to meet business needs (a sketch of the extraction and cleansing step follows after this list).
- Taking ownership of the end-to-end engineering lifecycle, including data extraction, cleansing …
… to meet customer needs and use cases, spanning from streaming to data lakes, analytics and beyond within a dynamically evolving technical stack.
- Collaborating seamlessly across diverse technical stacks, including Databricks, Snowflake, etc.
- Developing various components in Python as part of a unified data pipeline framework.
- Contributing towards the establishment of best practices for the optimal and efficient usage of data …
… years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc.
- 3+ years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.
- 3+ years of proficiency working with Snowflake or similar cloud-based data warehousing solutions.
- 3+ years of experience in data development and …
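As a hedged sketch of the extraction and cleansing step referenced in the responsibilities above, using the Pandas and NumPy libraries the ad names; the file names and column names are hypothetical assumptions.

    import pandas as pd
    import numpy as np

    # Extract: read a (hypothetical) CSV export from a source system
    df = pd.read_csv("customers_export.csv")

    # Cleanse: normalise text fields and flag rows with unusable amounts
    df["email"] = df["email"].str.strip().str.lower()
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df["valid"] = np.where(df["amount"].notna() & (df["amount"] >= 0), True, False)

    # Load: hand only the cleansed rows to the next pipeline stage
    # (to_parquet requires pyarrow or fastparquet to be installed)
    df[df["valid"]].drop(columns="valid").to_parquet("customers_clean.parquet", index=False)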