data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively …
Snowflake), writing complex SQL queries. · Building ETL/ELT/data pipelines. · Kubernetes and Linux containers (e.g., Docker). · Related/complementary open-source software platforms and languages (e.g., Scala, Python, Java, Linux). · Experience with both relational (RDBMS) and non-relational databases. · Analytical and problem-solving skills applied to big data datasets. · Experience working on projects with agile/…
are looking for data engineers who have a variety of different skills which include some of the below. Strong proficiency in at least one programming language (Python, Java, or Scala) Extensive experience with cloud platforms (AWS, GCP, or Azure) Experience with: Data warehousing and lake architectures SQL and NoSQL databases Distributed computing frameworks (Spark, Kinesis etc) Software development best practices …
platforms supporting both batch and real-time processing architectures. Deep understanding of data warehousing, ETL/ELT pipelines, and analytics engineering principles. Proficient in programming languages such as Python, Scala, or Java, and experienced with cloud platforms (AWS, GCP, or Azure). Experience working with privacy-sensitive data and implementing comprehensive observability and governance solutions. Strong technical foundation with a …
with Microsoft Fabric – including Lakehouse architecture, OneLake storage design, semantic modeling, and Power BI integration best practices Hands-on experience with Microsoft Fabric and OneLake Experience with Python or Scala for data processing is preferred Exposure to CI/CD workflows and DevOps practices in a data engineering context Comfortable working in agile, collaborative environments with distributed teams Exceptional interpersonal …
Slough, South East England, United Kingdom Hybrid / WFH Options
Osmii
Data Factory, Azure Data Lake Storage, Azure Synapse) and architecting cloud-native data platforms. Programming Proficiency: Expert-level skills in Python (PySpark) and SQL for data engineering and transformation. Scala is a strong plus. Data Modelling: Strong understanding and practical experience with data warehousing, data lake, and dimensional modelling concepts. ETL/ELT & Data Pipelines: Proven track record of designing …
up to date with new technology developments to identify cutting-edge innovations for potential future projects. Requirements Expertise in development languages including but not limited to: Java/J2EE, Scala/Python, XML, JSON, SQL, Spring/Spring Boot. Expertise with RESTful web services, Spark, Kafka etc. Experience with relational SQL, NoSQL databases and cloud technologies such as AWS/…
Our client, a London based Technology and Data Engineering leader, has an opportunity in a high-growth AI Lab for an 'AI Engineering Researcher'. A UK based 'Enterprise' Artificial Intelligence organisation …
like Redshift, Athena, EMR, and QuickSight. Understanding of data modeling, ETL, and data warehousing concepts. Skills in statistical analysis, data mining, and machine learning. Proficiency in Python, R, or Scala for data analysis. Experience with SQL, NoSQL, and data visualization tools. Strong analytical and problem-solving skills. Experience with social media analytics and user behavior analysis. Knowledge of big data …
Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
Azure, Snowflake and Terraform is advantageous Experience of knowledge of containers such as Docker and Kubernetes is advantageous Familiarity with at least one programming language (e.g. Python, Java, or Scala) Proven experience of data warehousing concepts and ETL processes Strong analytical skills and attention to detail Excellent verbal and written communication skills in English is essential Airline experience is advantageous …
Lead Data Scientist (Equity Only) - 1%, Oxford district. Location: Oxford district, United Kingdom. Job Category: Other. EU work permit required: Yes. Job Views: 4 …
Lead Data Scientist (Equity Only) - 1%, High Wycombe. Location: High Wycombe, United Kingdom. Job Category: Other. EU work permit required: Yes. Job Views: 4. Posted: 16.06.2025. Expiry Date: 31.07.2025. …
GCP data processing services: Dataflow (Apache Beam), Dataproc (Apache Spark/Hadoop), or Composer (Apache Airflow). Proficiency in at least one scripting/programming language (e.g., Python, Java, Scala) for data manipulation and pipeline development. Scala is mandated in some cases. Deep understanding of data lakehouse design, event-driven architecture, and hybrid cloud data strategies. Strong proficiency in SQL …
What You Bring: 2+ years in data engineering or related roles Bachelor's in CS, Engineering, Mathematics, Finance, etc. Proficiency in Python, SQL , and one or more: R, Java, Scala Experience with relational/NoSQL databases (e.g., PostgreSQL, MongoDB) Familiarity with big data tools (Hadoop, Spark, Kafka), cloud platforms (Azure, AWS, GCP), and workflow tools (Airflow, Luigi) Bonus: experience with …
and deliver high-quality data solutions. Automation: Implement automation processes and best practices to streamline data workflows and reduce manual interventions. Must have: AWS, ETL, EMR, Glue, Spark/Scala, Java, Python. Good to have: Cloudera – Spark, Hive, Impala, HDFS, Informatica PowerCenter, Informatica DQ/DG, Snowflake, Erwin. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering …
in a leadership or managerial role. Strong background in financial services, particularly in market risk, counterparty credit risk, or risk analytics Proficiency in modern programming languages (e.g., Java, Python, Scala) and frameworks. Experience with cloud platforms (AWS, Azure, or GCP). Deep understanding of software development lifecycle (SDLC), agile methodologies, and DevOps practices. Preferred Skills: Strong communication and stakeholder management …
Slough, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
control, task tracking). Demonstrable experience writing ETL scripts and code to make sure the ETL processes perform optimally. Experience in other programming languages for data manipulation (e.g., Python, Scala). Extensive experience of data engineering and the development of data ingest and transformation routines and services using modern, cloud-based approaches and technologies. Understanding of the principles of data …
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
concepts to a range of audiences. Able to provide coaching and training to less experienced members of the team. Essential Skills: Programming Languages such as Spark, Java, Python, PySpark, Scala or similar (minimum of 2). Extensive Big Data hands-on experience across coding/configuration/automation/monitoring/security is necessary. Significant AWS or Azure hands-on … to Work in the UK long-term as our client is NOT offering sponsorship for this role. KEYWORDS Lead Data Engineer, Senior Lead Data Engineer, Spark, Java, Python, PySpark, Scala, Big Data, AWS, Azure, On-Prem, Cloud, ETL, Azure Data Fabric, ADF, Databricks, Azure Data, Delta Lake, Data Lake. Please note that due to a high level of applications, we …
Milton Keynes, England, United Kingdom Hybrid / WFH Options
Santander
and end users, conveying technical concepts in a comprehensible manner Skills across the following data competencies: SQL (AWS Athena/Hive/Snowflake); Hadoop/EMR/Spark/Scala; Data structures (tables, views, stored procedures); Data Modelling – star/snowflake schemas, efficient storage, normalisation; Data Transformation; DevOps – data pipelines; Controls – selection and build; Reference and metadata management What else …
Lead Data Scientist (Equity Only) - 1% Job Description: As a Lead Data Scientist at Luupli, you will leverage AWS analytics services to analyze data and provide insights. Collaborate with cross-functional teams to develop data-driven solutions that support business …