Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
Data Engineer (Python Spark SQL) *Newcastle Onsite* to £70k. Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing and an established Hedge Fund client as a partner … by minimum AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … earn a competitive salary (to £70k) plus a significant bonus and benefits package. Apply now to find out more about this Data Engineer (Python Spark SQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
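The Spark SQL queries this listing asks for are largely ANSI SQL. A minimal sketch of the kind of aggregation involved, using Python's built-in sqlite3 in place of a Spark session (the table and column names are invented for illustration):

```python
import sqlite3

# In-memory database standing in for a Spark SQL context (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (desk TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?)",
    [("rates", 100.0), ("rates", 50.0), ("fx", 75.0)],
)

# Aggregate notional per desk -- the same statement would run largely
# unchanged against a Spark SQL table via spark.sql(...).
rows = conn.execute(
    "SELECT desk, SUM(notional) AS total FROM trades GROUP BY desk ORDER BY desk"
).fetchall()
print(rows)  # [('fx', 75.0), ('rates', 150.0)]
```

The same GROUP BY would typically be expressed in PySpark as a DataFrame `groupBy(...).sum(...)` chain; which form to use is mostly a readability choice.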
The role: This role sits within the Group Enterprise Systems (GES) Technology team. The right candidate would be an experienced Microsoft data warehouse developer (SQL Server, SSIS, SSAS) who can work both independently and as a member of a team to deliver enterprise-class data warehouse solutions and analytics … data models. Build and maintain automated pipelines to support data solutions across BI and analytics use cases. Work with enterprise-grade technology, primarily SQL Server 2019 and potentially Azure technologies. Build patterns, common ways of working, and standardised data pipelines for DLG to ensure consistency across the organisation. … Essential core technical experience: 5 to 10+ years' extensive experience in SQL Server data warehouse or data provisioning architectures. Advanced SQL query writing and SQL procedure experience. Experience developing ETL solutions in SQL Server, including SSIS and T-SQL. Experience in Microsoft BI technologies.
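The staging-to-warehouse ETL work described above follows a common pattern: land raw extracts in a staging table, then cast, de-duplicate, and reject bad rows on the way into the warehouse table. A hedged sketch of that pattern, with sqlite3 standing in for SQL Server/T-SQL and invented table names:

```python
import sqlite3

# Staging-to-warehouse ETL step, sketched with sqlite3 standing in for
# SQL Server / T-SQL (table and column names are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_policy (policy_id TEXT, premium TEXT);   -- raw extract
    CREATE TABLE dim_policy (policy_id TEXT PRIMARY KEY, premium REAL);
    INSERT INTO stg_policy VALUES ('P1', '120.50'), ('P2', 'n/a'), ('P1', '120.50');
""")

# Transform + load: cast, de-duplicate, and drop unparseable rows --
# the kind of logic an SSIS data flow or T-SQL procedure would hold.
conn.execute("""
    INSERT INTO dim_policy
    SELECT DISTINCT policy_id, CAST(premium AS REAL)
    FROM stg_policy
    WHERE premium GLOB '[0-9]*'
""")
loaded = conn.execute("SELECT * FROM dim_policy").fetchall()
print(loaded)  # [('P1', 120.5)]
```

In production the rejected rows ('P2' here) would usually be routed to an error table for reconciliation rather than silently dropped.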
Central London, London, United Kingdom Hybrid / WFH Options
167 Solutions Ltd
engineers to develop scalable solutions that enhance data accessibility and efficiency across the organisation. Key responsibilities: Design, build, and maintain data pipelines using SQL, Python, and Spark. Develop and manage data warehouse and lakehouse solutions for analytics, reporting, and machine learning. Implement ETL/ELT … 6+ years of experience in data engineering within large-scale digital environments. Strong programming skills in Python, SQL, and Spark (Spark SQL). Expertise in Snowflake and modern data architectures. Experience designing and managing data pipelines, ETL, and ELT workflows. Knowledge of AWS services such as …
Birmingham, England, United Kingdom Hybrid / WFH Options
Nine Twenty Recruitment
stakeholders to understand and translate complex data needs. Developing and optimising data pipelines with Azure Data Factory and Azure Synapse Analytics. Working with Spark notebooks in Microsoft Fabric, using PySpark, Spark SQL, and potentially some Scala. Creating effective data models, reports, and dashboards in … understanding of BI platform modernisation). Solid grasp of data warehousing and ETL/ELT principles. Strong communication and stakeholder engagement skills. Experience with SQL is essential, as the current structure is 90% SQL-based. Basic familiarity with Python (we're all at beginner level, but it …
London, South East England, United Kingdom Hybrid / WFH Options
DATAHEAD
availability and accessibility. Experience & skills: Strong experience in data engineering. At least some commercial hands-on experience with Azure data services (e.g., Apache Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as PySpark … Python (with Pandas if no PySpark), T-SQL, and Spark SQL. Strong understanding of data modelling, ETL processes, and data warehousing concepts. Knowledge of CI/CD pipelines and version control (e.g., Git). Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to manage multiple …
Your qualifications and experience: You are a pro at using SQL for data manipulation (at least one of PostgreSQL, MSSQL, Google BigQuery, Spark SQL). Modelling & statistical analysis experience, ideally customer related. Coding skills in at least one of Python, R, Scala, C, Java or JS. Track record of using … in one or more programming languages. Keen interest in some of the following areas: Big Data Analytics (e.g. Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing …
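"SQL for data manipulation" at this level usually means more than SELECT/WHERE; window functions are a typical benchmark. A small sketch of latest-event-per-customer (a common customer-modelling query), using sqlite3 in place of PostgreSQL/BigQuery/Spark SQL, with invented table and column names:

```python
import sqlite3

# Latest order per customer via ROW_NUMBER() -- sqlite3 stands in for
# PostgreSQL / BigQuery / Spark SQL (names are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, placed_on TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("alice", "2024-01-03", 20.0),
     ("alice", "2024-02-10", 35.0),
     ("bob",   "2024-01-15", 12.5)],
)

# Rank each customer's orders newest-first; rank 1 is the latest order.
latest = conn.execute("""
    SELECT customer, placed_on, amount FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY customer ORDER BY placed_on DESC
        ) AS rn
        FROM orders
    )
    WHERE rn = 1
    ORDER BY customer
""").fetchall()
print(latest)  # [('alice', '2024-02-10', 35.0), ('bob', '2024-01-15', 12.5)]
```

The PARTITION BY/ROW_NUMBER idiom ports unchanged across all four engines the listing names (SQLite supports it from version 3.25).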
Ruddington, Nottinghamshire, United Kingdom Hybrid / WFH Options
Experian Group
with AWS and knowledge of IaC tooling (e.g., Terraform, CDK). Proficiency in SQL for big data (Presto/HiveQL/BigQuery/Spark SQL). Familiarity with data transformation (batch & streaming) using DBT. Knowledge of shell scripting. Additional information: You will get: Personal development - a career pathway for professional growth …