Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
Data Engineer (Python SparkSQL) *Newcastle Onsite* to £70k Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing and an established Hedge Fund client as a partner … by minimum AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience … earn a competitive salary (to £70k) plus a significant bonus and benefits package. Apply now to find out more about this Data Engineer (Python SparkSQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn.
scalable data pipelines and infrastructure using AWS (Glue, Athena, Redshift, Kinesis, Step Functions, Lake Formation). Utilise PySpark for distributed data processing, ETL, SQL querying, and real-time data streaming. Architect and implement robust data solutions for analytics, reporting, machine learning, and data science initiatives. Establish and enforce … including Glue, Athena, Redshift, Kinesis, Step Functions, and Lake Formation. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark SQL, MySQL, Presto SQL) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in Infrastructure …
Birmingham, England, United Kingdom Hybrid / WFH Options
Nine Twenty Recruitment
stakeholders to understand and translate complex data needs. Developing and optimising data pipelines with Azure Data Factory and Azure Synapse Analytics. Working with Spark notebooks in Microsoft Fabric, using PySpark, SparkSQL, and potentially some Scala. Creating effective data models, reports, and dashboards in … understanding of BI platform modernisation). Solid grasp of data warehousing and ETL/ELT principles. Strong communication and stakeholder engagement skills. Experience with SQL is essential, as the current structure is 90% SQL-based. Basic familiarity with Python (we're all at beginner level, but it …
responsible for automating ETL workflows that feed into machine learning features, enhancing the efficiency and accuracy of ML models. Responsibilities: Develop and maintain SQL models and macros using DBT. Collaborate with stakeholders to gather and refine modelling requirements. Debug and enhance existing data models. Design and implement ETL … Azure Databricks and adhere to code-based deployment practices. Essential Skills: Over 3 years of experience with Databricks (including Lakehouse, Delta Lake, PySpark, Spark SQL). Strong proficiency in SQL with 5+ years of experience. Extensive experience with Azure Data Factory. Proficiency in Python programming …
tools to manage the platform, ensuring resilience and optimal performance are maintained. Data Integration and Transformation: Integrate and transform data from multiple organisational SQL databases and SaaS applications using end-to-end dependency-based data pipelines, to establish an enterprise source of truth. Create ETL and ELT processes … using Azure Databricks, ensuring audit-ready financial data pipelines and secure data exchange with Databricks Delta Sharing and SQL Warehouse endpoints. Governance and Compliance: Ensure compliance with information security standards in our highly regulated financial landscape by implementing Databricks Unity Catalog for governance, data quality monitoring, and ADLS … architecture. Proven experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog …
London, South East England, United Kingdom Hybrid / WFH Options
DATAHEAD
availability and accessibility. Experience & Skills: Strong experience in data engineering. At least some commercial hands-on experience with Azure data services (e.g., Apache Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such as PySpark … Python (with Pandas if no PySpark), T-SQL, and SparkSQL. Strong understanding of data modeling, ETL processes, and data warehousing concepts. Knowledge of CI/CD pipelines and version control (e.g., Git). Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to manage multiple …