ETL processes. Proficiency in Python. Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of data modelling, warehousing, and optimisation. Familiarity with big data frameworks (e.g. Apache Spark, Hadoop). Understanding of data governance, security, and compliance best practices. Strong problem-solving skills and experience working in agile environments. Desirable: Experience with Docker/Kubernetes, streaming data (Kafka, …)
data modelling, warehousing, and performance optimisation. Proven experience with cloud platforms (AWS, Azure, or GCP) and their data services. Hands-on experience with big data frameworks (e.g. Apache Spark, Hadoop). Strong knowledge of data governance, security, and compliance. Ability to lead technical projects and mentor junior engineers. Excellent problem-solving skills and experience in agile environments. Desirable: Experience …
Luton, England, United Kingdom Hybrid / WFH Options
easyJet
Understanding of ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or any other distributed data programming framework (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. Experience with data quality and/or data lineage frameworks like Great Expectations, dbt …
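By way of illustration — not part of the listing — here is a minimal sketch of the kind of check that frameworks like Great Expectations or dbt tests declare, written as plain PySpark assertions; the table path and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical bookings dataset; any Spark-readable source would do.
df = spark.read.parquet("/data/bookings")

total = df.count()
null_ids = df.filter(F.col("booking_id").isNull()).count()
dupes = total - df.dropDuplicates(["booking_id"]).count()

# Fail fast on violated expectations, mirroring what a data quality
# framework would report declaratively.
assert null_ids == 0, f"{null_ids} rows have a null booking_id"
assert dupes == 0, f"{dupes} duplicate booking_id values found"
```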
communication skills. We're excited if you have 7+ years of experience delivering multi-tier, highly scalable, distributed web applications. Experience working with distributed computing frameworks: Hive/Hadoop, Apache Spark, Kafka, Airflow. Working with programming languages Python, Java, SQL. Working on building ETL (Extraction, Transformation and Loading) solutions using PySpark. Experience in SQL/NoSQL database design …
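For context, a minimal PySpark ETL sketch of the extract-transform-load shape this listing refers to; the paths and columns are invented for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landed by an upstream feed (illustrative path).
raw = spark.read.csv("/landing/orders.csv", header=True, inferSchema=True)

# Transform: drop incomplete rows and derive a net amount.
clean = (
    raw.dropna(subset=["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("tax"))
)

# Load: partitioned Parquet for downstream Hive/Spark consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/warehouse/orders")
```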
in data engineering, with a strong emphasis on data design and architecture. Proven proficiency in SQL and experience with relational databases. Practical experience with big data technologies such as Hadoop or Spark. In-depth understanding of data warehousing concepts and ETL frameworks. Familiarity with cloud platforms including AWS, Azure, or GCP. Strong analytical and problem-solving skills, with the …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
problem-solving skills, and the ability to think critically and analytically. Extensive experience with documentation and data dictionaries. Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark. Excellent communication skills to effectively collaborate with cross-functional teams and present insights to business stakeholders. Please can you send me a copy of your CV if …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
CHEP UK Ltd
such as Python, R, and SQL for data analysis and model development. Experience working with cloud computing platforms including AWS and Azure, and familiarity with distributed computing frameworks like Hadoop and Spark. Deep understanding of supply chain operations and the ability to apply data science methods to solve real-world business problems effectively. Strong foundational knowledge in mathematics and …
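As a toy illustration of the Python-based model development this listing describes (entirely synthetic data; the features are hypothetical supply-chain signals):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic weekly demand driven by unit price and a promotion flag.
X = np.column_stack([rng.uniform(5, 20, 200), rng.integers(0, 2, 200)])
y = 500 - 12 * X[:, 0] + 80 * X[:, 1] + rng.normal(0, 10, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```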
Flows, Conduct>It, Express>It, Metadata Hub, and PDL. Hands-on experience with SQL, Unix/Linux shell scripting, and data warehouse concepts. Familiarity with big data ecosystems (Hadoop, Hive, Spark) and cloud platforms (AWS, Azure, GCP) is a plus. Proven ability to troubleshoot complex ETL jobs and resolve performance issues. Experience working with large-scale datasets and …
Glasgow, Scotland, United Kingdom Hybrid / WFH Options
NLB Services
tools and libraries (e.g., Power BI) · Background in database administration or performance tuning · Familiarity with data orchestration tools, such as Apache Airflow · Previous exposure to big data technologies (e.g., Hadoop, Spark) for large data processing · Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions. · Strong communication …
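For reference, a minimal Apache Airflow DAG sketch of the orchestration pattern these listings mention (Airflow 2.x; the DAG and task names are hypothetical):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    print("pull data from source")  # placeholder for real extract logic


def load() -> None:
    print("write data to warehouse")  # placeholder for real load logic


with DAG(
    dag_id="nightly_reporting",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow < 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```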
Birmingham, West Midlands, England, United Kingdom
TXP
for data engineering. A detail-oriented mindset and strong problem-solving skills. Degree in Computer Science, Engineering, or a related field. Bonus Skills: Experience with big data tools (e.g., Hadoop, Spark). Exposure to machine learning workflows. Understanding of prompt engineering concepts. Benefits: 25 days annual leave (plus bank holidays). An additional day of paid leave for …
diverse data sources into unified platforms. Monitor, troubleshoot, and enhance data performance and infrastructure. Key Skills & Experience: Strong experience with SQL/NoSQL databases, data warehousing, and big data (Hadoop, Spark). Proficient in Python, Java, or Scala with solid OOP and design pattern understanding. Expertise in ETL tools and orchestration frameworks (Airflow, Apache NiFi). Hands-on experience …
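To make the OOP/design-pattern point concrete, a small strategy-pattern sketch for pluggable load targets; every class and name here is invented for illustration.

```python
from abc import ABC, abstractmethod


class Loader(ABC):
    """Strategy interface: each load target supplies its own load()."""

    @abstractmethod
    def load(self, rows: list[dict]) -> None: ...


class ConsoleLoader(Loader):
    """Trivial target that prints rows; a real one might write to a
    warehouse table or an object store instead."""

    def load(self, rows: list[dict]) -> None:
        for row in rows:
            print(row)


class Pipeline:
    """Swapping targets means passing a different Loader,
    not editing the pipeline itself."""

    def __init__(self, loader: Loader) -> None:
        self.loader = loader

    def run(self, rows: list[dict]) -> None:
        cleaned = [r for r in rows if r.get("id") is not None]
        self.loader.load(cleaned)


Pipeline(ConsoleLoader()).run([{"id": 1, "value": 10}, {"id": None}])
```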
data visualization tools (e.g., Power BI). Experience in database administration or performance tuning. Knowledge of data orchestration tools like Apache Airflow. Exposure to big data technologies such as Hadoop or Spark. Why Join Synechron? Be part of a dynamic, innovative team driving digital transformation in the financial sector. We offer competitive compensation, opportunities for professional growth, and a …
AWS data platforms and their respective data services. Solid understanding of data governance principles, including data quality, metadata management, and access control. Familiarity with big data technologies (e.g., Spark, Hadoop) and distributed computing. Proficiency in SQL and at least one programming language (e.g., Python, Java). 6-month contract, inside IR35. Immediately available. London, up to 2 times a month …
e.g. MS SQL, Oracle) NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J) Data exchange and processing skills (e.g. ETL, ESB, API) Development (e.g. Python) skills Big data technologies knowledge (e.g. Hadoop stack) Knowledge in NLP (Natural Language Processing) Knowledge in OCR (Optical Character Recognition) Knowledge in Generative AI (Artificial Intelligence) would be advantageous Experience in containerisation technologies (e.g. Docker) would …
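As a small NLP illustration of the kind this listing names — named-entity extraction with spaCy — assuming the `en_core_web_sm` model has been downloaded:

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apache Spark was created at UC Berkeley in California.")

# Print each detected entity with its label, e.g. ORG or GPE.
for ent in doc.ents:
    print(ent.text, ent.label_)
```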
a data science team, mentoring junior colleagues and driving technical direction. Experience working with Agile methodologies in a collaborative team setting. Extensive experience with big data tools, such as Hadoop and Spark, for managing and processing large-scale datasets. Extensive experience with cloud platforms, particularly Microsoft Azure, for building and deploying data science solutions. Why Join? You'll be …
ETL/ELT processes. Proficiency in AWS data platforms and services. Solid understanding of data governance principles (data quality, metadata, access control). Familiarity with big data technologies (Spark, Hadoop) and distributed computing. Advanced SQL skills and proficiency in at least one programming language (Python, Java). Additional Requirements: Immediate availability for an October start. Must be UK-based …
bring: Significant experience in data engineering, including leading or mentoring technical teams. Deep understanding of cloud environments such as Azure, AWS, or Google Cloud Platform, and tools like Synapse, Hadoop, or Snowflake. Hands-on experience with programming languages such as Python, Java, or Scala. Strong knowledge of data architecture, modelling, and governance. A track record of delivering complex data …
London, South East, England, United Kingdom Hybrid / WFH Options
Lorien
Provide Environment Management representation in daily scrums, working groups, and ad-hoc meetings. Required Skillsets: Strong skills and experience with data technologies such as IBM DB2, Oracle, MongoDB, Hive, Hadoop, SQL, Informatica, and similar tech stacks. Attention to detail and strong ability to work independently and navigate complex target end-state architecture (Tessa). Strong knowledge and experience with …
Stevenage, Hertfordshire, England, United Kingdom Hybrid / WFH Options
MBDA
e.g. MS SQL, Oracle...) NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J...) Data exchange and processing skills (e.g. ETL, ESB, API...) Development (e.g. Python) skills Big data technologies knowledge (e.g. Hadoop stack) Knowledge in NLP (Natural Language Processing) Knowledge in OCR (Optical Character Recognition) Knowledge in Generative AI (Artificial Intelligence) would be advantageous Experience in containerisation technologies (e.g. Docker) would …
technologies, e.g. AWS, Azure. Good knowledge of Linux, its development environments and tools. Experience in object-oriented methodologies and design patterns. Understanding of Big Data technologies such as Hadoop, Spark. Understanding of security implications and secure coding. Proven grasp of software development lifecycle best practices, agile methods, and conventions, including Source Code Management and Continuous Integration. Practical experience with …
MongoDB, InfluxDB, Neo4J). Familiarity with data exchange and processing methods (e.g. ETL, ESB, API). Proficiency in development languages such as Python. Knowledge of big data technologies (e.g. Hadoop stack). Understanding of NLP (Natural Language Processing) and OCR (Optical Character Recognition). Knowledge of Generative AI would be advantageous. Experience in containerisation technologies (e.g. Docker) would be …
Stevenage, England, United Kingdom Hybrid / WFH Options
Anson McCade
with NoSQL databases (e.g. MongoDB, InfluxDB, Neo4J). • Strong data exchange and processing expertise (ETL, ESB, API). • Programming experience, ideally in Python. • Understanding of big data technologies (e.g. Hadoop stack). • Knowledge of NLP and OCR methodologies. • Familiarity with Generative AI concepts and tools (advantageous). • Experience with containerisation (e.g. Docker) is desirable. • Background in industrial and/…