frameworks and practices. Understanding of machine learning workflows and how to support them with robust data pipelines. Ability to write clean, scalable, robust code using Python or similar programming languages; a background in software engineering is a plus. DESIRABLE LANGUAGES/TOOLS: Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting. Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling. Experience in big data technologies and frameworks such as Databricks …
Azure Data Engineer at IBM
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Net Talent
Azure). Deep understanding of data modeling, distributed systems, streaming architectures, and ETL/ELT pipelines. Proficiency in SQL and at least one programming language such as Python, Scala, or Java. Demonstrated experience owning and delivering complex systems from architecture through implementation. Excellent communication skills with the ability to explain technical concepts to both technical and non-technical …
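Since ETL/ELT pipeline work in SQL and Python recurs throughout these listings, a minimal batch ETL sketch follows. It is illustrative only: the file name, table schema, and cleaning rule are hypothetical, and the standard-library csv/sqlite3 modules stand in for a real warehouse.

```python
import csv
import sqlite3

# Hypothetical input: a small orders extract (column names are illustrative).
with open("orders.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["order_id", "amount", "currency"])
    writer.writerows([["1", "19.99", "GBP"], ["2", "", "GBP"], ["3", "5.00", "usd"]])

def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    for row in rows:
        if not row["amount"]:  # drop incomplete records (a data-quality rule)
            continue
        yield (int(row["order_id"]), float(row["amount"]), row["currency"].upper())

def load(records, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract("orders.csv")), conn)
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone())  # -> (2,)
```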
City of Westminster, England, United Kingdom Hybrid / WFH Options
nudge Global Ltd
focus on data mining and big data processing. Strong knowledge of open banking frameworks, protocols, and APIs (e.g. Open Banking UK, PSD2). Proficiency in programming languages such as Python, Scala, or Java. Experience with cloud data platforms such as GCP (BigQuery, Dataflow) or Azure (Data Factory, Synapse). Expert in SQL, MongoDB, and distributed data systems such as Spark, Databricks, or …
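For the cloud-warehouse side (BigQuery is named above), here is a sketch of running an aggregate query from Python with the google-cloud-bigquery client. The project, dataset, and column names are hypothetical, and the snippet assumes default credentials are already configured.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Assumes Application Default Credentials are set up for the project.
client = bigquery.Client()

# Hypothetical table; aggregates payment volume per day.
sql = """
    SELECT DATE(created_at) AS day, SUM(amount) AS volume
    FROM `my_project.payments.transactions`
    GROUP BY day
    ORDER BY day
"""

for row in client.query(sql).result():
    print(row.day, row.volume)
```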
warehousing, big data platforms (e.g., Hadoop, Spark), data streaming (e.g., Kafka), and cloud services (e.g., AWS, GCP, Azure). Ideally some programming skills in languages like Python, Java, or Scala, with experience in automation and scripting. Experience with containerization and orchestration tools like Docker and Kubernetes is a plus. Experience with data governance, data security, and compliance best practices. Understanding …
familiarity with Jenkins or similar tools. Deep understanding of DevOps principles, data warehousing, data modeling, and data integration patterns. Proficient in one or more programming languages such as Python, Scala, or Java, alongside SQL and NoSQL database experience. Knowledge of data quality assurance techniques and governance frameworks. Excellent analytical, problem-solving, and communication skills. Desirable: Experience with containerization technologies (Docker …
data pipelines for performance, efficiency, and cost-effectiveness. Implement data quality checks and validation rules within data pipelines. Data Transformation & Processing: Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies. Develop and maintain data processing logic for cleaning, enriching, and aggregating data. Ensure data consistency and accuracy throughout the data lifecycle. Azure Databricks Implementation: Work extensively …
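To make the Spark transformation and data-quality duties above concrete, here is a small PySpark sketch of the pattern described: filter rows against validation rules, then enrich and aggregate. The column names and rules are invented for illustration; on Databricks the DataFrame would typically come from a lake path rather than createDataFrame.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

# Hypothetical raw input standing in for a mounted lake path.
raw = spark.createDataFrame(
    [("a1", "2024-01-03", 120.0), ("a2", None, 75.5), ("a1", "2024-01-04", -10.0)],
    ["account_id", "event_date", "amount"],
)

# Data-quality rules: require a date and a non-negative amount.
clean = raw.filter(F.col("event_date").isNotNull() & (F.col("amount") >= 0))

# Enrich and aggregate: daily totals per account.
daily = (
    clean.withColumn("event_date", F.to_date("event_date"))
         .groupBy("account_id", "event_date")
         .agg(F.sum("amount").alias("daily_amount"))
)
daily.show()
```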
and Java. Expertise in building modern data pipelines and ETL (extract, transform, load) processes using tools such as Apache Kafka and Apache NiFi. Proficient in programming languages like Java, Scala, or Python. Experience or expertise using, managing, and/or testing API Gateway tools and REST APIs. Experience in traditional database and data warehouse products such as Oracle, MySQL, etc. …
in programming languages such as Python, R, and Java. Experience in building modern data pipelines and ETL processes with tools like Apache Kafka and Apache NiFi. Proficiency in Java, Scala, or Python programming. Experience managing or testing API Gateway tools and REST APIs. Knowledge of traditional databases like Oracle, MySQL, etc., and modern data management technologies such as Data Lake …
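Both of the preceding listings name Apache Kafka for pipeline ingestion. As a rough sketch of the producing side, the snippet below publishes a JSON event with the kafka-python client; the broker address and topic name are assumptions, and a broker must be running for it to execute.

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Assumes a broker at localhost:9092; the topic name is hypothetical.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish a small event; a real pipeline would stream these from a source system.
producer.send("orders.events", {"order_id": 42, "status": "created"})
producer.flush()  # block until the broker acknowledges
```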
Snowflake), writing complex SQL queries. · Building ETL/ELT/data pipelines. · Kubernetes and Linux containers (e.g., Docker). · Related/complementary open-source software platforms and languages (e.g., Scala, Python, Java, Linux). · Experience with both relational (RDBMS) and non-relational databases. · Analytical and problem-solving skills applied to big data datasets. · Experience working on projects with agile/…
to problem-solve. Knowledge of AWS or equivalent cloud technologies. Knowledge of serverless technologies, frameworks and best practices. Experience using AWS CloudFormation or Terraform for infrastructure automation. Knowledge of Scala or an OO language such as Java or C#. SQL or Python development experience. High-quality coding and testing practices. Willingness to learn new technologies and methodologies. Knowledge of agile software development …
Strong understanding of DevOps methodologies and principles. Solid understanding of data warehousing, data modeling, and data integration principles. Proficiency in at least one scripting/programming language (e.g. Python, Scala, Java). Experience with SQL and NoSQL databases. Familiarity with data quality and data governance best practices. Strong analytical and problem-solving skills. Excellent communication, interpersonal, and presentation skills. Desired …
degree or higher in an applicable field such as Computer Science, Statistics, Maths or a similar Science or Engineering discipline. Strong Python and other programming skills (Java and/or Scala desirable). Strong SQL background. Some exposure to big data technologies (Hadoop, Spark, Presto, etc.). NICE TO HAVES OR EXCITED TO LEARN: Some experience designing, building and maintaining SQL databases (and …
/schemas and developing ETL/ELT scripts in large organisations or data platforms. Strong experience leading teams and managing individuals. Strong Python and other programming skills (Spark/Scala desirable). Experience both using and building APIs. Strong SQL background. Exposure to big data technologies (Spark, Hadoop, Presto, etc.). Works well collaboratively, and independently, with a proven ability …
Leeds, England, United Kingdom Hybrid / WFH Options
KPMG UK
to having resided in the UK for at least the past 5 years and being a UK national or dual UK national. Experience in prominent languages such as Python, Scala, Spark, SQL. Experience working with any database technologies from an application programming perspective: Oracle, MySQL, MongoDB, etc. Experience with the design, build and maintenance of data pipelines and infrastructure …
London, England, United Kingdom Hybrid / WFH Options
Lloyds Banking Group
on our journey and you will too... WHAT WE NEED YOU TO HAVE EXPERIENCE IN: Coding: coding/scripting experience developed in a commercial/industry setting (Python, Java, Scala or Go, and SQL). Databases & frameworks: strong experience working with Kafka technologies; working experience with operational data stores, data warehouses, big data technologies, and data lakes. Experience working …
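On the consuming side of a Kafka-based pipeline like the one this role describes, a minimal kafka-python consumer sketch follows; the broker, topic, and group id mirror the hypothetical producer shown earlier and are assumptions.

```python
from kafka import KafkaConsumer  # pip install kafka-python

# Assumes the same local broker and topic as above; the group id is illustrative.
consumer = KafkaConsumer(
    "orders.events",
    bootstrap_servers="localhost:9092",
    group_id="orders-loader",
    auto_offset_reset="earliest",
)

for message in consumer:
    # A real consumer would upsert into an operational data store here.
    print(message.offset, message.value)
```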
London, England, United Kingdom Hybrid / WFH Options
Nadara
Spark, Delta Lake, etc). Familiarity with both relational (e.g., SQL Server, PostgreSQL, MySQL, Oracle) and NoSQL databases (e.g., MongoDB, Cassandra). Strong SQL skills; experience in Python or Scala for data engineering tasks; familiarity with version control systems. Understanding of enterprise data architecture patterns, including data lake, data warehouse, lakehouse, and cloud-native designs. Experience with Inmon, Data Vault …
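As a concrete taste of the lakehouse pattern referenced here, the sketch below writes a Delta table from PySpark. It assumes the delta-spark package is installed and the Delta extensions are configured; the path is a stand-in for a real lake location.

```python
from pyspark.sql import SparkSession

# Assumes delta-spark is installed (pip install delta-spark) and the Delta
# SQL extensions are enabled on the session.
spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Write a Delta table to a hypothetical lake path; Delta adds ACID
# guarantees on top of plain Parquet files.
df.write.format("delta").mode("overwrite").save("/tmp/lake/users")
```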
services (AWS, GCP, or Azure). Understanding of data modeling, distributed systems, ETL/ELT pipelines, and streaming architectures. Proficiency in SQL and at least one programming language (e.g., Python, Scala, or Java). Demonstrated experience owning complex technical systems end-to-end, from design through production. Excellent communication skills with the ability to explain technical concepts to both technical and non-technical …
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Somerset Bridge Group
Synapse Analytics. Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks. Strong Python (PySpark) skills for big data processing and automation. Experience with Scala (optional but preferred for advanced Spark applications). Experience working with Databricks Workflows & Jobs for data orchestration. Strong knowledge of feature engineering and feature stores, particularly in Databricks Feature Store …
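The Delta Lake capabilities this listing calls out (schema evolution, ACID appends, time travel) map onto a few DataFrame options. Here is a short sketch, again assuming a Delta-enabled Spark session and reusing the hypothetical table path from the earlier example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes a Delta-enabled session

path = "/tmp/lake/users"  # hypothetical table path from the sketch above

# Schema evolution: mergeSchema lets this append add a new `country` column.
new_rows = spark.createDataFrame([(3, "carol", "UK")], ["id", "name", "country"])
(new_rows.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .save(path))

# Time travel: read the table as it was at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```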