deploy databases and data stores to support organizational requirements.
Skills/Qualifications:
- 4+ years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark
- 3+ years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines
- 3+ years of proficiency working with Snowflake or similar
experience, with the ability to write ad-hoc and complex queries to perform data analysis.
- Experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark
- Ability to develop solutions in a hybrid data environment (on-prem and cloud)
- Hands-on experience developing data pipelines for structured and semi-structured
approaches to delivery. You must be able to articulate ideas effectively and strive to constantly improve deliverables.
- Demonstrable depth of knowledge working with Python
- Experience working with data, e.g. Pandas, SQL
- A proactive approach to continuous learning and improvement, including the use of AI tools to support development tasks
Mentoring from experienced colleagues and access to training courses will help
and distribution of electrical energy) to secondary education minimum.
- The ability to code in Python, working with large datasets, as a minimum. Familiarity with standard Python packages (NumPy/Pandas/scikit-learn, etc.) and some knowledge of VBA and SQL would also be preferable.
- Competent user of specific Azure data-related resources (or similar), including Databricks and Storage Account