AWS Data Engineer
London, UK
Permanent
Strong experience in Python, PySpark, AWS S3, AWS Glue, Databricks, Amazon Redshift, DynamoDB, CI/CD and Terraform. A total of 7+ years of experience in data engineering is required. Design, develop, and optimize ETL pipelines using AWS Glue, Amazon EMR and Kinesis for real-time and batch data processing. Implement data transformation, streaming …
… development)
• Strong experience with CI/CD tools and pipelines for data science
• Solid understanding of AWS services (e.g. EC2, S3, Lambda, Glue) and CDK
• Proficient in Python and PySpark; SQL fluency
• Experience with MLflow or other model lifecycle tools
• Effective communicator and trainer - able to help others upskill
• Comfortable building internal tools and documentation
Nice to Have:
• Experience …
… processes and provide training on data tools and workflows.
Skills and experience:
• Experience in building ELT/ETL pipelines and managing data workflows.
• Proficiency in programming languages such as PySpark, Python, SQL, or Scala.
• Solid understanding of data modelling and relational database concepts.
• Knowledge of GDPR and UK data protection regulations.
Preferred Skills:
• Experience with Power BI for data …
… of) Azure Databricks, Data Factory, Storage, Key Vault
• Experience with source control systems, such as Git
• dbt (Data Build Tool) for transforming and modelling data
• SQL (Spark SQL) & Python (PySpark)
Certifications (Ideal):
• SAFe POPM or Scrum PSPO
• Microsoft Certified: Azure Fundamentals (AZ-900)
• Microsoft Certified: Azure Data Fundamentals (DP-900)
What's in it for you: Skipton values work …
… the ability to write ad-hoc and complex queries to perform data analysis. Experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, PySpark, etc. You will be able to develop solutions in a hybrid data environment (on-prem and cloud). Hands-on experience with developing data pipelines for structured, semi-structured …
Luton, Bedfordshire, South East, United Kingdom Hybrid / WFH Options
Anson Mccade
… practice
Essential Experience:
• Proven expertise in building data warehouses and ensuring data quality on GCP
• Strong hands-on experience with BigQuery, Dataproc, Dataform, Composer, Pub/Sub
• Skilled in PySpark, Python and SQL
• Solid understanding of ETL/ELT processes
• Clear communication skills and ability to document processes effectively
Desirable Skills:
• GCP Professional Data Engineer certification
• Exposure to Agentic …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Internetwork Expert
… warehousing
• ETL/ELT pipelines
• Change Data Capture (CDC) and change tracking
• Stream processing
• Database design
• Machine Learning and AI integration
Hands-on experience with:
• Azure Databricks
• Python/PySpark
• Microsoft SQL Server
• Azure Blob Storage
• Parquet file formats
• Azure Data Factory
Proven experience building secure, scalable, and high-performing data pipelines. Ability to solve complex technical problems and …
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
MYO Talent
… Engineer/Data Engineering role
• Large and complex datasets
• Azure, Azure Databricks
• Microsoft SQL Server
• Lakehouse, Delta Lake
• Data Warehousing
• ETL
• CDC
• Stream Processing
• Database Design
• ML
• Python/PySpark
• Azure Blob Storage
• Parquet
• Azure Data Factory
Desirable: Any exposure working in a software house, consultancy, retail or retail automotive sector would be beneficial but not essential. …
Bristol, Somerset, United Kingdom Hybrid / WFH Options
Adecco
… approaches
• Experience with data ingestion and ETL pipelines
• Curious, adaptable, and a natural problem solver
Bonus points for:
• Experience in financial services, insurance, or reinsurance
• Familiarity with Databricks, Git, PySpark or SQL
• Exposure to cyber risk or large-scale modelling environments
Ready to apply for this exciting Data Scientist role? Send your CV to (see below) - I'd love …
… experience blending data engineering and data science approaches
• Curious, adaptable, and a natural problem solver
Bonus points for:
• Experience in financial services, insurance, or reinsurance
• Familiarity with Databricks, Git, PySpark or SQL
• Exposure to cyber risk or large-scale modelling environments
Ready to apply for this exciting Data Scientist role? Send your CV to - I'd love to hear …
… you.
Key Responsibilities:
- Design and build high-scale systems and services to support data infrastructure and production systems.
- Develop and maintain data processing pipelines using technologies such as Airflow, PySpark and Databricks.
- Implement dockerized high-performance microservices and manage their deployment.
- Monitor and debug backend systems and data pipelines to identify and resolve bottlenecks and failures.
- Work collaboratively with …
… on assessing and delivering robust data solutions and managing changes that impact diverse stakeholder groups in response to regulatory rulemaking, supervisory requirements, and discretionary transformation programs.
Key Responsibilities:
• Develop PySpark and SQL queries to analyze, reconcile, and interrogate data.
• Provide actionable recommendations to improve reporting processes (e.g., enhancing data quality, streamlining workflows, and optimizing query performance).
• Contribute to architecture …
Belfast, County Antrim, Northern Ireland, United Kingdom
Hays
… strategic change projects. You'll work across multiple workstreams, delivering high-impact data solutions that drive efficiency and compliance for Markets and its clients.
Key Responsibilities:
• Build and optimize PySpark and SQL queries to analyze, reconcile, and interrogate large datasets.
• Recommend improvements to reporting processes, data quality, and query performance.
• Contribute to the architecture and design of Hadoop environments. …
… strategic leader with deep experience and a hands-on approach. You bring:
• A track record of scaling and leading data engineering initiatives
• Excellent coding skills (e.g. Python, Java, Spark, PySpark, Scala)
• Strong AWS expertise and cloud-based data processing
• Advanced SQL/database skills
• Delivery management and mentoring abilities
Highly Desirable:
• Familiarity with tools like AWS Glue, Azure Data …
… as Python (preferred) and C++
• Experience working with structured and unstructured data (e.g., text, PDFs, images, call recordings, video)
• Proficiency in database and big data technologies including SQL, NoSQL, PySpark, Hive, etc.
Cloud & AI Ecosystems:
• Experience working with cloud platforms such as AWS, GCP, or Azure
• Understanding of API integration and deploying solutions in cloud environments
• Familiarity or hands …
… Data Factory or equivalent cloud ETL tools, with experience building scalable, maintainable pipelines, is essential. Extensive experience as a senior data or integrations engineer. Hands-on experience with Python, PySpark or Spark in an IDE; Databricks highly preferred. Proven track record in complex data engineering environments, including data integration and orchestration. Experience integrating external systems via REST APIs …
… business analytics
• Practical experience in coding languages, e.g. Python, R, Scala (Python preferred)
• Proficiency in database technologies, e.g. SQL, ETL, NoSQL, DW, and big data technologies, e.g. PySpark, Hive
• Experience working with structured and unstructured data, e.g. text, PDFs, JPGs, call recordings, video
• Knowledge of machine learning modelling techniques and how to fine-tune …
… extensive data development experience in a commercial or Agile environment. To be successful in this role, it's essential that you:
• Have experience of SQL, Python, AWS, Git and PySpark
Desirable experience:
• SSIS or SAS experience
• Quality Assurance and Test Automation experience
• Experience of database technologies
• Experience in a Financial Services organisation
About us: We're one of …
Atherstone, Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Aldi Stores
… end-to-end ownership of demand delivery
• Provide technical guidance for team members
• Provide 2nd or 3rd level technical support
About You:
• Experience using SQL, SQL Server DB, Python & PySpark
• Experience using Azure Data Factory
• Experience using Databricks and Cloudsmith
• Data Warehousing experience
• Project Management experience
• The ability to interact with the operational business and other departments, translating …
… with cross-functional teams, including technical and non-technical stakeholders
• Passion for learning new skills and staying up-to-date with ML algorithms
Bonus points:
• Experience with Databricks and PySpark
• Experience with deep learning & large language models
• Experience with traditional, semantic, and hybrid search frameworks (e.g. Elasticsearch)
• Experience working with AWS or another cloud platform (GCP/Azure)
Additional …
… classifiers, deep learning, or large language models
• Experience with experiment design and conducting A/B tests
• Experience building shared or platform-style ML systems
• Experience with Databricks and PySpark
• Experience working with AWS or another cloud platform (GCP/Azure)
Additional Information:
Health + Mental Wellbeing
• PMI and cash plan healthcare access with Bupa
• Subsidised counselling and coaching …
Leeds, West Yorkshire, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
… architecture
• Ensuring best practices in data governance, security, and performance tuning
Requirements:
• Proven experience with Azure Data Services (ADF, Synapse, Data Lake)
• Strong hands-on experience with Databricks (including PySpark or SQL)
• Solid SQL skills and understanding of data modelling and ETL/ELT processes
• Familiarity with Delta Lake and lakehouse architecture
• A proactive, collaborative approach to problem-solving …