…ETL/ELT pipelines. Strong proficiency in SQL and Python, with experience using modern data transformation frameworks such as ADF. Experience working with structured and semi-structured data (e.g. JSON, Parquet, CSV, API sources). Proven ability to design and deliver reporting solutions using Power BI or equivalent tools, with a sound understanding of semantic modelling and KPI frameworks. Familiarity …
Azure Data Services, Databricks, Power BI, SQL DW, Snowflake, BigQuery, and Advanced Analytics. Proven ability to understand low-level data engineering solutions and languages (Spark, MPP, Python, Delta, Parquet). Experience with Azure DevOps and CI/CD processes, and with the software development lifecycle including infrastructure as code (Terraform). Understanding of data warehousing concepts, including dimensional modelling, star schema, data aggregation, and best …
…services, especially Glue, Athena, Lambda, and S3. Proficient in Python (ideally PySpark) and modular SQL for transformations and orchestration. Solid grasp of data modeling (partitioning, file formats like Parquet, etc.). Comfort with CI/CD, version control, and infrastructure-as-code tools. If this sounds like you, then send your CV …
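Several of these listings pair PySpark with modular SQL to land partitioned Parquet. As a minimal sketch of that pattern - the bucket paths, view name, and columns are invented for illustration:

```python
# Hedged sketch: read semi-structured JSON, transform via modular SQL,
# write partitioned Parquet. All names below are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-transform").getOrCreate()

# Raw JSON events from S3; schema inference kept for brevity.
raw = spark.read.json("s3://example-raw-bucket/orders/")
raw.createOrReplaceTempView("orders_raw")

# Keeping transformations as SQL views makes each step testable on its own.
cleaned = spark.sql("""
    SELECT order_id,
           CAST(amount AS DECIMAL(10,2)) AS amount,
           to_date(created_at)           AS order_date
    FROM orders_raw
    WHERE order_id IS NOT NULL
""")

# Partitioning by date means downstream engines (Athena, Glue) scan less data.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```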
Hands-on experience with AWS data services - particularly Glue, Athena, S3, Lambda, and Step Functions. Solid understanding of data modelling principles for analytics - including partitioning, denormalisation, and file formats (e.g., Parquet, ORC). Experience building and maintaining production-grade ETL pipelines with an emphasis on performance, quality, and maintainability. AWS certification desirable - Data Engineer or similar.
…data models. Expertise in implementing Data Lake/Big Data projects on cloud (MS Azure) and/or on-premise platforms. Experience designing and building lakehouse architectures in Parquet/Delta with Synapse Serverless or Databricks SQL. Working experience with DevOps frameworks and a strong understanding of the software development lifecycle. Experience performing root cause analysis on data …
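For the lakehouse piece, the recurring building block is an incremental upsert into a Delta table. A rough sketch as it might run on Databricks - the paths and merge key are assumptions:

```python
# Hedged sketch: incremental load into a Delta table via MERGE.
# Paths and the customer_id key are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/landing/customers/")  # today's batch

if DeltaTable.isDeltaTable(spark, "/mnt/lakehouse/customers"):
    target = DeltaTable.forPath(spark, "/mnt/lakehouse/customers")
    # Upsert: update rows that match on the key, insert the rest.
    (target.alias("t")
           .merge(updates.alias("s"), "t.customer_id = s.customer_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())
else:
    # First run: materialise the batch as the initial Delta table.
    updates.write.format("delta").save("/mnt/lakehouse/customers")
```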
Liverpool, Lancashire, United Kingdom Hybrid / WFH Options
Intuita - Vacancies
All our office locations considered: Newbury & Liverpool (UK); Šibenik, Croatia. We're on the hunt for builders. No, we've not ventured into construction in our quest to conquer the world; rather, a designer and builder of systems …
TJX Companies
At TJX Companies, every day brings new opportunities for growth, exploration, and achievement. You'll be part of our vibrant team that embraces diversity, fosters collaboration, and prioritizes your development. Whether you're working in our four global …
Reading, England, United Kingdom Hybrid / WFH Options
Areti Group | B Corp™
• Expert knowledge of the Microsoft Fabric Analytics Platform (Azure SQL, Synapse, Power BI).
• Proficient in Python for data engineering tasks, including data ingestion from APIs, creation and management of Parquet files, and execution of ML models.
• Strong SQL skills, enabling support for Data Analysts with efficient and performant queries.
• Skilled in optimizing data ingestion and query performance for MSSQL …
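The "ingestion from APIs, creation and management of Parquet files" requirement above comes down to a small, repeatable pattern. A hedged sketch - the endpoint, payload shape, and output path are assumptions:

```python
# Hedged sketch: pull JSON from a REST API and land it as Parquet.
# Endpoint, field names, and output path are illustrative assumptions.
import pandas as pd
import requests

resp = requests.get("https://api.example.com/v1/readings", timeout=30)
resp.raise_for_status()

# Flatten the nested JSON payload into a tabular frame.
df = pd.json_normalize(resp.json()["results"])

# Parquet preserves column types and compresses well, which keeps
# downstream SQL (and Power BI refreshes) fast. Requires pyarrow.
df.to_parquet("readings.parquet", index=False)
```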
South East London, London, United Kingdom Hybrid / WFH Options
Datatech Analytics
…processing and automation. Solid understanding of ETL/ELT workflows, data modelling, and structuring datasets for analytics. Experience working with large, complex datasets and APIs across formats (CSV, JSON, Parquet, etc.). Familiarity with workflow automation tools (e.g., Power Automate) and/or Power Apps is desirable. Excellent interpersonal and communication skills, with the ability to work cross-functionally and …
Cloud implementation experience with AWS, including: AWS Data Services: Glue ETL (or EMR), S3, Glue Catalog, Athena, Lambda + Step Functions + EventBridge, ECS. Data De/Serialization: Parquet and JSON formats. AWS Data Security: good understanding of security concepts such as IAM, service roles, encryption, KMS, Secrets Manager. Practical exposure to Infrastructure as Code (IaC) solutions such …
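In the Glue/Athena/Lambda stack these AWS listings describe, a common glue point is a Lambda handler that kicks off an Athena query over Parquet in S3. A minimal sketch - the database, table, and bucket names are assumptions:

```python
# Hedged sketch: Lambda handler starting an Athena query over Parquet data
# catalogued in Glue. Database, table, and bucket names are assumptions.
import boto3

athena = boto3.client("athena")

def handler(event, context):
    # Athena is asynchronous: real code would poll get_query_execution
    # for completion, or let a Step Functions state machine do the waiting.
    response = athena.start_query_execution(
        QueryString="SELECT order_date, SUM(amount) FROM orders GROUP BY order_date",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return response["QueryExecutionId"]
```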
…management systems. Analyse and cleanse data using a range of tools and techniques. Manage and process structured and semi-structured data formats such as JSON, XML, CSV, and Parquet. Operate effectively in Linux and cloud-based environments. Support CI/CD processes and adopt infrastructure-as-code principles. Contribute to a collaborative, knowledge-sharing team culture. …
…that collect, process, and store large volumes of structured and unstructured data. Data Lake Optimisation and Querying: Develop and optimise external tables within Azure Synapse’s SQL pool, leveraging Parquet-based data storage in the data lake for efficient querying and seamless data access. Data Quality and Governance: Implement data quality checks and data management procedures while ensuring that …
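Querying lake-resident Parquet through Synapse serverless, as described above, usually goes through OPENROWSET, with an external table defined over the same path for reuse. A hedged sketch driven from Python - the workspace, storage account, and paths are invented:

```python
# Hedged sketch: ad hoc query against Parquet in the lake via a Synapse
# serverless SQL pool. Server, database, and storage paths are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-ws-ondemand.sql.azuresynapse.net;"
    "DATABASE=lakehouse;"
    "Authentication=ActiveDirectoryInteractive;"
)

# OPENROWSET reads the Parquet files in place; a CREATE EXTERNAL TABLE
# over the same path makes the result reusable from BI tools.
sql = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://examplelake.dfs.core.windows.net/curated/orders/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
"""
for row in conn.cursor().execute(sql):
    print(row)
```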
London, South East, England, United Kingdom Hybrid / WFH Options
E.ON
…Next loves: Languages: Python, SQL & Spark (PySpark, SparkSQL). Tools: Git (GitHub/GitLab), Databricks, AWS services (inc. Athena/Glue, S3, SageMaker, Lambda, Transcribe, EC2) & Tableau. Datastores: Postgres, Databricks, Parquet/Delta. Here's what else you need to know: the role may close early due to a high volume of applications. Competitive salary. Location - London - E.ON Next, 47-53 Charterhouse Street, Farringdon …
Basingstoke, England, United Kingdom Hybrid / WFH Options
Castle Trust Group
…equivalent). Python for scripting, data manipulation or automation tasks. Experience with data comparison and synchronisation tools, e.g. Redgate. Structured and semi-structured data formats, including JSON, XML, Hive tables, and Parquet files. Cloud platforms (e.g., Azure, AWS, GCP), including deployment, configuration, and integration of database services. Data warehousing, including dimensional modelling and ETL processes. Machine learning pipeline awareness. Experience with …
Cardiff, Wales, United Kingdom Hybrid / WFH Options
Identify Solutions
Cloud and big data technologies (e.g. Spark/Databricks/Delta Lake/BigQuery). Familiarity with eventing technologies (e.g. Event Hubs/Kafka) and file formats such as Parquet/Delta/Iceberg. Want to learn more? Get in touch for an informal chat.
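Where listings like this pair eventing (Kafka/Event Hubs) with Parquet/Delta/Iceberg, the simplest bridge is a micro-batch drain from topic to columnar file. A rough sketch, assuming a local broker and an invented topic (a production pipeline would more likely use Spark Structured Streaming):

```python
# Hedged sketch: drain a Kafka topic into a Parquet file in one micro-batch.
# Topic, broker, and event schema are illustrative assumptions.
import json
import pandas as pd
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=10_000,  # stop iterating once the topic goes quiet
)

# Collect a batch of events, then write one columnar file per batch.
events = [msg.value for msg in consumer]
if events:
    pd.DataFrame(events).to_parquet("clickstream_batch.parquet", index=False)
```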
Bristol, England, United Kingdom Hybrid / WFH Options
MBDA Missile Systems
…various exchange and processing techniques (ETL, ESB, API). Lead the way in delivering Agile methodologies for successful and timely project delivery. Leverage strong database skills (SQL, NoSQL, and Parquet) for efficient data storage and management. What we're looking for from you: Proficiency in Data Science techniques, including statistical models and ML algorithms. Expertise in NLP, with a keen understanding of LLM and RAG technologies. Strong development capabilities, particularly in Python. Experience with data exchange, processing, and storage frameworks (ETL, ESB, API, SQL, NoSQL, and Parquet). Comfort with Agile development methodologies. Excellent teamwork and communication skills, with a talent for translating technical concepts into actionable insights for non-specialists. Ability to influence company decision-makers and …
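For the RAG side of that last listing, the core retrieval step is embedding plus nearest-neighbour search. A minimal sketch using sentence-transformers - the model choice and toy corpus are assumptions, and a real system would add a vector store and an LLM call on top:

```python
# Hedged sketch: the retrieval step of RAG - embed a corpus, embed a query,
# return the closest passage. Model and corpus are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Parquet is a columnar storage format for analytics.",
    "An ESB routes and transforms messages between enterprise systems.",
    "Agile delivery favours short, iterative cycles.",
]
doc_vecs = model.encode(corpus, normalize_embeddings=True)

query_vecs = model.encode(["How is columnar data stored?"], normalize_embeddings=True)

# With normalised vectors, cosine similarity is just a dot product.
# The top-scoring passages would be passed to an LLM as grounding context.
scores = doc_vecs @ query_vecs.T
print(corpus[int(np.argmax(scores.ravel()))])
```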