Good experience with ETL - SSIS, SSRS, T-SQL (on-prem/cloud). Strong proficiency in SQL and Python for handling complex data problems. Hands-on experience with Apache Spark (PySpark or Spark SQL). Experience with the Azure data stack. Knowledge of workflow orchestration tools like Apache Airflow. Experience with containerisation technologies like Docker. Proficiency in dimensional modelling techniques. Experience …
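Listings like this one ask for Spark via either PySpark or Spark SQL; the two are interchangeable for most transformations. A minimal sketch showing the same aggregation both ways (the `sales` data and column names are invented for the example):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-vs-sparksql").getOrCreate()

# Tiny illustrative dataset; real pipelines would read from storage.
sales = spark.createDataFrame(
    [("north", 120.0), ("south", 75.5), ("north", 42.0)],
    ["region", "amount"],
)

# DataFrame (PySpark) API
by_region_df = sales.groupBy("region").agg(F.sum("amount").alias("total"))

# Equivalent Spark SQL over a temp view
sales.createOrReplaceTempView("sales")
by_region_sql = spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")

by_region_df.show()
by_region_sql.show()
```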
… and data stores to support organizational requirements. Skills/Qualifications: 4+ years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. 3+ years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines. 3+ years of proficiency in working with Snowflake or similar cloud …
… Databricks, Azure Data Lake Storage, Delta Lake, Azure SQL, Purview and APIM. Proficiency in developing CI/CD data pipelines and strong programming skills in Python, SQL, Bash, and PySpark for automation. Strong aptitude for data pipeline monitoring and an understanding of data security practices such as RBAC and encryption. Implemented data and pipeline observability dashboards, ensuring high data …
… both greenfield initiatives and enhancing high-traffic financial applications. Key Skills & Experience: Strong hands-on experience with Databricks, Delta Lake, Spark Structured Streaming, and Unity Catalog. Advanced Python/PySpark and big data pipeline development. Familiarity with event streaming tools (Kafka, Azure Event Hubs). Solid understanding of SQL, data modelling, and lakehouse architecture. Experience deploying via CI/CD …
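For roles combining Spark Structured Streaming, Kafka and Delta Lake, the core pattern is a streaming read from a topic written incrementally to a Delta table. A hedged sketch; the broker, topic and paths are placeholders, and the Kafka and delta-spark packages are assumed to be on the cluster:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-to-delta").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "trades")                     # placeholder topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/trades")  # placeholder path
    .outputMode("append")
    .start("/tmp/delta/trades")                               # placeholder path
)
```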
… business requirements into data solutions. Monitor and improve pipeline performance and reliability. Maintain documentation of systems, workflows, and configs. Tech environment: Python, SQL/PLSQL (MS SQL + Oracle), PySpark; Apache Airflow (MWAA), AWS Glue, Athena; AWS services (CDK, S3, data lake architectures); Git, JIRA. You should apply if you have: Strong Python and SQL skills. Proven experience designing …
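Given the Airflow (MWAA) plus AWS Glue environment this listing describes, a minimal sketch of a DAG that triggers a Glue job could look like the following; the DAG id, job name and IAM role are invented, and the apache-airflow-providers-amazon package (and Airflow 2.4+ for the `schedule` argument) is assumed:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="nightly_ingest",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_glue = GlueJobOperator(
        task_id="run_glue_etl",
        job_name="my-glue-etl-job",    # placeholder Glue job name
        iam_role_name="my-glue-role",  # placeholder IAM role
    )
```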
Collaborate with cross-functional teams to translate business needs into technical solutions. Core Skills: Cloud & Platforms: Azure, AWS, SAP. Data Engineering: ELT, Data Modelling, Integration, Processing. Tech Stack: Databricks (PySpark, Unity Catalog, DLT, Streaming), ADF, SQL, Python, Qlik. DevOps: GitHub Actions, Azure DevOps, CI/CD pipelines.
… joining data from various sources. About the role: The ideal candidate should be adept in ETL tools such as Informatica, Glue, Databricks and DataProc, with strong coding skills in Python, PySpark and SQL. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse and Delta Lake is essential, including the ability to calculate processing costs and address …
Bedford, Bedfordshire, England, United Kingdom Hybrid / WFH Options
Reed Talent Solutions
… data tooling such as Synapse Analytics, Microsoft Fabric, Azure Data Lake Storage/OneLake, and Azure Data Factory. Understanding of data extraction from vendor REST APIs. Spark/PySpark or Python skills a bonus, or a willingness to develop these skills. Experience with monitoring and failure recovery in data pipelines. Excellent problem-solving skills and attention to detail.
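Where a listing mentions extracting data from vendor REST APIs with Python, the usual shape is a paginated pull landed as raw files for downstream Spark or Data Factory processing. An illustrative sketch only; the endpoint, auth scheme and paging convention are all invented:

```python
import json
import requests

BASE_URL = "https://api.example-vendor.com/v1/orders"  # hypothetical endpoint

def fetch_all(token: str) -> list[dict]:
    """Pull every page from the (assumed page-numbered) API."""
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            params={"page": page},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # empty page signals the end (an assumption)
            break
        records.extend(batch)
        page += 1
    return records

# Land the raw JSON for downstream processing
with open("orders_raw.json", "w") as fh:
    json.dump(fetch_all("TOKEN"), fh)
```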
… across varied solutions. - Extensive experience of using the Databricks platform for developing and deploying data solutions/data products (including ingestion, transformation and modelling) with high proficiency in Python, PySpark and SQL. - Leadership experience in other facets necessary for solution development, such as testing, the wider scope of quality assurance, CI/CD etc. - Experience in related areas of …
… of) Azure Databricks, Data Factory, Storage, Key Vault. Experience with source control systems, such as Git. dbt (Data Build Tool) for transforming and modelling data. SQL (Spark SQL) & Python (PySpark). Certifications (ideal): SAFe POPM or Scrum PSPO; Microsoft Certified: Azure Fundamentals (AZ-900); Microsoft Certified: Azure Data Fundamentals (DP-900). What's in it for you: Skipton values work …
… the ability to write ad-hoc and complex queries to perform data analysis. Experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. You will be able to develop solutions in a hybrid data environment (on-prem and cloud). Hands-on experience with developing data pipelines for structured, semi-structured …
Luton, Bedfordshire, South East, United Kingdom Hybrid / WFH Options
Anson Mccade
… practice. Essential Experience: Proven expertise in building data warehouses and ensuring data quality on GCP. Strong hands-on experience with BigQuery, Dataproc, Dataform, Composer, Pub/Sub. Skilled in PySpark, Python and SQL. Solid understanding of ETL/ELT processes. Clear communication skills and ability to document processes effectively. Desirable Skills: GCP Professional Data Engineer certification. Exposure to Agentic …
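For GCP roles pairing BigQuery with PySpark (e.g. on Dataproc), reading a BigQuery table into Spark typically goes through the spark-bigquery connector. A minimal sketch, assuming the connector is on the classpath; the project, dataset, table and column names are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-read").getOrCreate()

events = (
    spark.read.format("bigquery")
    .option("table", "my-project.analytics.events")  # placeholder table
    .load()
)

# Simple aggregate over an assumed event_date column
daily = events.groupBy("event_date").count()
daily.show()
```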
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
MYO Talent
… Engineer/Data Engineering role; large and complex datasets; Azure, Azure Databricks; Microsoft SQL Server; Lakehouse, Delta Lake; data warehousing; ETL; CDC; stream processing; database design; ML; Python/PySpark; Azure Blob Storage; Parquet; Azure Data Factory. Desirable: any exposure working in a software house, consultancy, retail or retail automotive sector would be beneficial but not essential.
Bristol, Somerset, United Kingdom Hybrid / WFH Options
Adecco
… approaches. Experience with data ingestion and ETL pipelines. Curious, adaptable, and a natural problem solver. Bonus points for: experience in financial services, insurance, or reinsurance; familiarity with Databricks, Git, PySpark or SQL; exposure to cyber risk or large-scale modelling environments. Ready to apply for this exciting Data Scientist role? Send your CV to (see below) - I'd love …
… experience blending data engineering and data science approaches. Curious, adaptable, and a natural problem solver. Bonus points for: experience in financial services, insurance, or reinsurance; familiarity with Databricks, Git, PySpark or SQL; exposure to cyber risk or large-scale modelling environments. Ready to apply for this exciting Data Scientist role? Send your CV to - I'd love to hear …
… on assessing and delivering robust data solutions and managing changes that impact diverse stakeholder groups in response to regulatory rulemaking, supervisory requirements, and discretionary transformation programs. Key Responsibilities: Develop PySpark and SQL queries to analyze, reconcile, and interrogate data. Provide actionable recommendations to improve reporting processes, e.g. enhancing data quality, streamlining workflows, and optimizing query performance. Contribute to architecture …
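The "analyze, reconcile, and interrogate data" duties described here often reduce to set comparisons between a source extract and a reporting copy. A minimal PySpark sketch using a left anti join; the datasets and the key column `trade_id` are invented for the example:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("recon").getOrCreate()

# Illustrative source extract vs reporting copy
source = spark.createDataFrame([(1, 100.0), (2, 250.0), (3, 75.0)], ["trade_id", "notional"])
target = spark.createDataFrame([(1, 100.0), (3, 75.0)], ["trade_id", "notional"])

# Rows present in the source but missing from the reporting copy
missing_in_target = source.join(target, on="trade_id", how="left_anti")
missing_in_target.show()  # trade_id 2 never reached the reporting copy
```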
Belfast, County Antrim, Northern Ireland, United Kingdom
Hays
… strategic change projects. You'll work across multiple workstreams, delivering high-impact data solutions that drive efficiency and compliance for Markets and its clients. Key Responsibilities: Build and optimize PySpark and SQL queries to analyze, reconcile, and interrogate large datasets. Recommend improvements to reporting processes, data quality, and query performance. Contribute to the architecture and design of Hadoop environments.
… strategic leader with deep experience and a hands-on approach. You bring: a track record of scaling and leading data engineering initiatives; excellent coding skills (e.g. Python, Java, Spark, PySpark, Scala); strong AWS expertise and cloud-based data processing; advanced SQL/database skills; delivery management and mentoring abilities. Highly Desirable: familiarity with tools like AWS Glue, Azure Data …
… extensive data development experience in a commercial or Agile environment. To be successful in this role it's essential that you have experience of SQL, Python, AWS, Git and PySpark. Desirable experience: SSIS or SAS experience; quality assurance and test automation experience; experience of database technologies; experience in a financial services organisation. About us: we're one of …
Atherstone, Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Aldi Stores
… end-to-end ownership of demand delivery. Provide technical guidance for team members. Provide 2nd or 3rd level technical support. About You: Experience using SQL, SQL Server DB, Python & PySpark. Experience using Azure Data Factory. Experience using Databricks and Cloudsmith. Data warehousing experience. Project management experience. The ability to interact with the operational business and other departments, translating …
Leeds, West Yorkshire, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
… architecture. Ensuring best practices in data governance, security, and performance tuning. Requirements: Proven experience with Azure Data Services (ADF, Synapse, Data Lake). Strong hands-on experience with Databricks (including PySpark or SQL). Solid SQL skills and understanding of data modelling and ETL/ELT processes. Familiarity with Delta Lake and lakehouse architecture. A proactive, collaborative approach to problem-solving …
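For lakehouse roles like this, incremental loads into Delta Lake commonly use a MERGE (upsert). A hedged sketch; the delta-spark package, an existing Delta table at the placeholder path, and the `customer_id`/`tier` columns are all assumptions:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge").getOrCreate()

# New/changed rows to fold into the target table
updates = spark.createDataFrame([(1, "gold"), (4, "silver")], ["customer_id", "tier"])

target = DeltaTable.forPath(spark, "/tmp/delta/customers")  # placeholder path
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()    # update existing customers
    .whenNotMatchedInsertAll() # insert new ones
    .execute()
)
```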
Experience: Strong SQL skills and experience with SSIS, SSRS, SSAS. Data warehousing, ETL processes, best-practice data management. Azure cloud technologies (Synapse, Databricks, Data Factory, Power BI). Python/PySpark. Proven ability to work in hybrid data environments. Ability to manage and lead onshore and offshore teams. Exceptional stakeholder management and communication skills, with the ability to talk to …
… change, including optimisation of production processes. Great knowledge of Python, and in particular the classic Python data science stack (NumPy, pandas, PyTorch, scikit-learn, etc.), is required; familiarity with PySpark is also desirable. Cloud platform experience (e.g. Azure, AWS, GCP); we're using Azure in the team. Good SQL understanding in practice. Capacity and enthusiasm for coaching and mentoring …