of) Azure Databricks, Data Factory, Storage, Key Vault • Experience with source control systems, such as Git • dbt (Data Build Tool) for transforming and modelling data • SQL (Spark SQL) & Python (PySpark) • Certifications (Ideal): SAFe POPM or Scrum PSPO, Microsoft Certified: Azure Fundamentals (AZ-900), Microsoft Certified: Azure Data Fundamentals (DP-900). What's in it for you: Skipton values work …
for backend data engineering tasks. • Strong knowledge of relational databases and experience working with flat files. • Familiarity with SQL-like query languages for data manipulation and analysis. • Experience with PySpark or Java Spark (preferred) for distributed data processing. • Strong problem-solving skills and the ability to optimize processes for efficiency and scalability. • Excellent communication skills and the ability to collaborate …
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
MYO Talent
Engineer/Data Engineering role • Large and complex datasets • Azure, Azure Databricks • Microsoft SQL Server • Lakehouse, Delta Lake • Data Warehousing • ETL • CDC • Stream Processing • Database Design • ML • Python/PySpark • Azure Blob Storage • Parquet • Azure Data Factory. Desirable: Any exposure working in a software house, consultancy, retail or retail automotive sector would be beneficial but not essential.
Bristol, Somerset, United Kingdom Hybrid / WFH Options
Adecco
approaches • Experience with data ingestion and ETL pipelines • Curious, adaptable, and a natural problem solver. Bonus points for: • Experience in financial services, insurance, or reinsurance • Familiarity with Databricks, Git, PySpark or SQL • Exposure to cyber risk or large-scale modelling environments. Ready to Apply for this exciting Data Scientist role? Send your CV to (see below) - I'd love …
experience blending data engineering and data science approaches • Curious, adaptable, and a natural problem solver. Bonus points for: • Experience in financial services, insurance, or reinsurance • Familiarity with Databricks, Git, PySpark or SQL • Exposure to cyber risk or large-scale modelling environments. Ready to Apply for this exciting Data Scientist role? Send your CV to - I'd love to hear …
you. Key Responsibilities: - Design and build high-scale systems and services to support data infrastructure and production systems. - Develop and maintain data processing pipelines using technologies such as Airflow, PySpark and Databricks. - Implement dockerized high-performance microservices and manage their deployment. - Monitor and debug backend systems and data pipelines to identify and resolve bottlenecks and failures. - Work collaboratively with …
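For illustration, here is a minimal sketch of the kind of pipeline this listing describes: an Airflow DAG (Airflow 2.4+ assumed) that runs a daily PySpark aggregation. All names (DAG id, storage paths, columns) are hypothetical placeholders, not details from the advert.

```python
# Illustrative only: a daily Airflow DAG that runs a PySpark rollup job.
# DAG id, paths, and column names are hypothetical; assumes Airflow 2.4+
# and PySpark available on the worker.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_rollup(ds: str, **_) -> None:
    """Aggregate one day of raw events into a per-user summary."""
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()
    events = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical path
    (
        events.filter(F.col("event_date") == ds)
        .groupBy("user_id")
        .agg(F.count("*").alias("event_count"))
        .write.mode("overwrite")
        .parquet(f"s3://example-bucket/curated/daily_rollup/{ds}/")
    )
    spark.stop()


with DAG(
    dag_id="daily_event_rollup",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="rollup", python_callable=run_rollup)
```

In a Databricks-backed setup the PythonOperator would typically be swapped for a Databricks job-trigger operator from the Databricks provider; the shape of the DAG stays the same.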
on assessing and delivering robust data solutions and managing changes that impact diverse stakeholder groups in response to regulatory rulemaking, supervisory requirements, and discretionary transformation programs. Key Responsibilities: • Develop PySpark and SQL queries to analyze, reconcile, and interrogate data. • Provide actionable recommendations to improve reporting processes, e.g., enhancing data quality, streamlining workflows, and optimizing query performance. • Contribute to architecture …
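As a purely illustrative sketch of the reconciliation work this listing describes, the following PySpark snippet compares control totals between a source table and a reporting table. Every table and column name is a hypothetical placeholder.

```python
# Illustrative only: reconciling a reporting table against its source
# with PySpark. Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("reporting_reconciliation").getOrCreate()


def totals(df):
    """Reduce a trade-level table to per-day, per-desk control totals."""
    return df.groupBy("trade_date", "desk").agg(
        F.count("*").alias("row_count"),
        F.sum("notional").alias("total_notional"),
    )


source = totals(spark.table("raw.trades")).alias("s")          # system of record
reported = totals(spark.table("reporting.trades")).alias("r")  # downstream report

# A full outer join keeps keys that exist on only one side; eqNullSafe
# treats a missing side as a mismatch instead of a null comparison.
breaks = source.join(reported, ["trade_date", "desk"], "full_outer").where(
    ~F.col("s.row_count").eqNullSafe(F.col("r.row_count"))
    | ~F.col("s.total_notional").eqNullSafe(F.col("r.total_notional"))
)
breaks.show(truncate=False)
```

In practice the notional comparison would usually allow a small tolerance rather than exact equality on floating-point sums.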
strategic leader with deep experience and a hands-on approach. You bring: • A track record of scaling and leading data engineering initiatives • Excellent coding skills (e.g. Python, Java, Spark, PySpark, Scala) • Strong AWS expertise and cloud-based data processing • Advanced SQL/database skills • Delivery management and mentoring abilities. Highly Desirable: Familiarity with tools like AWS Glue, Azure Data …
as Python (preferred) and C++ • Experience working with structured and unstructured data (e.g., text, PDFs, images, call recordings, video) • Proficiency in database and big data technologies including SQL, NoSQL, PySpark, Hive, etc. Cloud & AI Ecosystems: • Experience working with cloud platforms such as AWS, GCP, or Azure • Understanding of API integration and deploying solutions in cloud environments • Familiarity or hands …
Data Factory or equivalent cloud ETL tools, with experience building scalable, maintainable pipelines is essential. Extensive experience as a senior data or integrations engineer. Hands-on experience with Python, PySpark or Spark in an IDE. Databricks highly preferred. Proven track record in complex Data Engineering environments, including data integration and orchestration. Experience integrating external systems via REST APIs …
engineering and Azure cloud data technologies. You must be confident working across: • Azure Data Services, including Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Microsoft Fabric (desirable) • Python and PySpark for data engineering, transformation, and automation • ETL/ELT pipelines across diverse structured and unstructured data sources • Data lakehouse and data warehouse architecture design • Power BI for enterprise-grade …
business analytics • Practical experience in coding languages, e.g. Python, R, Scala (Python preferred) • Proficiency in database technologies (e.g. SQL, ETL, NoSQL, DW) and big data technologies (e.g. PySpark, Hive) • Experience working with structured and unstructured data, e.g. text, PDFs, jpgs, call recordings, video • Knowledge of machine learning modelling techniques and how to fine-tune …
Atherstone, Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Aldi Stores
end-to-end ownership of demand delivery • Provide technical guidance for team members • Provide 2nd or 3rd level technical support. About You: • Experience using SQL, SQL Server DB, Python & PySpark • Experience using Azure Data Factory • Experience using Databricks and Cloudsmith • Data Warehousing experience • Project Management experience • The ability to interact with the operational business and other departments, translating …
with cross-functional teams, including technical and non-technical stakeholders • Passion for learning new skills and staying up-to-date with ML algorithms. Bonus points: • Experience with Databricks and PySpark • Experience with deep learning & large language models • Experience with traditional, semantic, and hybrid search frameworks (e.g. Elasticsearch) • Experience working with AWS or another cloud platform (GCP/Azure). Additional …
work with multi-functional teams, including technical and non-technical stakeholders • Passion for learning new skills and staying up-to-date with ML algorithms. Bonus points: • Experience with Databricks and PySpark • Experience with deep learning & large language models • Experience with traditional, semantic, and hybrid search frameworks (e.g. Elasticsearch) • Experience working with AWS or another cloud platform (GCP/Azure). Additional …
classifiers, deep learning, or large language models • Experience with experiment design and conducting A/B tests • Experience building shared or platform-style ML systems • Experience with Databricks and PySpark • Experience working with AWS or another cloud platform (GCP/Azure). Additional Information: Health + Mental Wellbeing • PMI and cash plan healthcare access with Bupa • Subsidised counselling and coaching …
Leeds, West Yorkshire, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
architecture • Ensuring best practices in data governance, security, and performance tuning. Requirements: • Proven experience with Azure Data Services (ADF, Synapse, Data Lake) • Strong hands-on experience with Databricks (including PySpark or SQL) • Solid SQL skills and understanding of data modelling and ETL/ELT processes • Familiarity with Delta Lake and lakehouse architecture • A proactive, collaborative approach to problem-solving …
experience with Databricks (Delta Lake, Unity Catalog, Lakehouse architecture). Strong knowledge of Azure services (e.g. Data Lake, Data Factory, Synapse). Solid hands-on skills in Spark, Python, PySpark, and SQL. Understanding of data modelling, governance, and BI integration. Familiarity with CI/CD, Git, and Infrastructure as Code (e.g. Terraform). Excellent communication and mentoring skills. Desirable …
Secret clearance is required for this position. Qualifications: To be considered for this role, you must have: Active Secret clearance or higher (required). Strong proficiency in Python, SQL, PySpark, and Java. At least 5 years of hands-on experience in pipeline engineering or data engineering roles. Demonstrated ability to perform root cause analysis and resolve production issues. Proven …
instructions. Location is Herndon or Chantilly (both locations are in close proximity to exits of Rt 28). Required Skills: • Extract, Transform and Load (ETL) tools and processes • Python, PySpark, PyTorch • AWS • SQL • APIs • Linux • Geospatial tools/data. Desired Skills: • Agile experience delivering on agile teams (participates in scrum and PI Planning) • Docker, Jenkins, Hadoop/Spark, Kibana …
Strong analytical and troubleshooting skills. Desirable Skills: Familiarity with state management libraries (MobX, Redux). Exposure to financial data or market analytics projects. Experience with data engineering tools (DuckDB, PySpark, etc.). Knowledge of automated testing frameworks (Playwright, Cypress). Experience of WebAssembly. Python programming experience for data manipulation or API development. Use of AI for creating visualisations. Soft …
technologies, particularly Azure, this role represents a great next step in your data engineering career. The successful candidate will possess the following essential skills: • Strong proficiency in Python or PySpark • Data Engineering experience, ideally with an Azure background • Significant experience with SQL (preferably SQL Server) • Excellent communication skills capable of interacting with stakeholders of varying seniority. It would be …