data modeling, distributed systems, streaming architectures, and ETL/ELT pipelines. Proficiency in SQL and at least one programming language such as Python, Scala, or Java. Demonstrated experience owning and delivering complex systems from architecture through implementation. Excellent communication skills with the ability to explain technical concepts to …
processes. Strong SQL skills, with the ability to write optimized and scalable queries, or proficiency in at least one programming language (Python, Java, Scala, or .NET). CI/CD: experience using CI/CD pipelines for development and deployment of data pipelines. Proficiency in Git-based workflows and …
advantageous. Knowledge of containers such as Docker and Kubernetes is advantageous. Familiarity with at least one programming language (e.g. Python, Java, or Scala). Proven experience of data warehousing concepts and ETL processes. Strong analytical skills and attention to detail. Excellent verbal and written communication skills in English …
looking for? Experience in the design and deployment of production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, Scala, Spark, and SQL. Experience performing tasks such as writing scripts, extracting data using APIs, and writing SQL queries. Ability to work closely with other engineering teams to …
you will: Design and deploy production data pipelines from ingestion to consumption within a big data architecture. Work with technologies such as Python, Java, Scala, Spark, and SQL to extract, clean, transform, and integrate data. Build scalable solutions using AWS services like EMR, Glue, Redshift, Kinesis, Lambda, and DynamoDB. Process …
help with their data presentation requirements and help deliver data for visualisation solutions, such as Power BI. Key skills required: expertise in Python, Java, Scala, or other programming languages used to build data pipelines, implement data transformations, and automate data workflows. Strong SQL experience. Strong knowledge of data visualisation tools …
TDD. You have a thorough understanding of Computer Science fundamentals such as OOP, design patterns, data structures, and algorithms. Other tech in the stack includes Scala, React, Spring, Oracle, Redis, Kubernetes, Docker, and Linux, so previous exposure to any of these would be beneficial. You're collaborative with good communication skills …
Requirements of the Database Engineer: Proven experience with Databricks, Azure Data Lake, and Delta Live Tables. Strong programming in Python and Spark (PySpark or Scala). Solid knowledge of data modelling, warehousing, and integration concepts. Comfortable working in Agile teams, with CI/CD and Azure DevOps experience. Package: salary depending …
to encourage their development and strengthen the team's overall skillset. Skills profile: Strong command of JVM-based languages, with openness to Kotlin, Java, Scala, or Groovy. Experience delivering scalable, production-quality software solutions end-to-end. Focused on outcomes, with practical experience in service-oriented and microservices architectures. Hands-…
Minimum 3 years of experience as a software engineer. 2+ years of experience with React/TypeScript and an object-oriented programming language (Java, Scala, Kotlin, C#, C++). Proven experience in building and shipping software, involving collaboration with other engineers, designers, product managers, and users to identify and implement …
tools, and processes. Essential experience: Proven commercial experience with Databricks (including AKS). Strong hands-on experience with AWS. Proficient in Python and/or Scala for data pipeline development. Solid understanding of data warehousing concepts and ETL best practices. Desirable experience: Exposure to Power BI for data visualisation and dashboarding …
grade DAGs and solid dependency management; knowledge of dbt is preferable. Strong background in building reliable, scalable batch and streaming pipelines using Spark (ideally Scala), Python, and SQL. Hands-on experience implementing data quality, observability, and lineage systems across distributed environments. Proven leadership in technical design, platform adoption, and mentoring …
Stoke-on-Trent, Staffordshire, UK Hybrid / WFH Options
bet365
from home policy. Preferred skills and experience: Server-side development (services, systems, messaging, middleware). Exposure to functional programming (such as Erlang, Haskell, F#, Scala, or Go). Experience of distributed systems. Complex event processing/continuous query languages. Client/server development experience. Ability to problem-solve. Excellent communication …
a day, solving real-world user challenges. Your experience: 5+ years of backend development experience with strong architectural skills. Experience with functional programming (Elixir, Scala, Go, Node.js, Haskell, etc.). Cloud infrastructure expertise (AWS, Terraform) with experience at scale. Deep understanding of distributed systems, orchestration, and observability. Knowledge of GenAI …
Data Scientist with Databricks experience. Salary: up to £90K base + bonus + benefits. Location: work from home, but able to travel to London when needed. Our client is an international company that requires a senior Data Scientist with experience …
Collaborate with various squads within the data team on project-based work. Develop and optimize data models, warehouse solutions, and ETL processes. Work with Scala, Spark, and Java to handle large-scale data processing. Contribute to manual Databricks-like data processing solutions. Requirements: Minimum of 4 years of experience with … Scala, Spark, and Java. Strong technical skills and a passion for working with data. A STEM degree or equivalent experience. Excellent communication skills. Experience with data modeling and data warehouse solutions. Nice to have: Experience with AWS and Python. Interview process: CV run-through; take-home test; panel interview.
is to deliver high-quality, scalable solutions that make a meaningful difference to users and clients alike. The Role: We are looking for experienced Scala Developers to join our expanding delivery team. You will play a key role in designing, developing, and deploying modern, scalable backend systems, primarily using Scala … technical discussions, and mentor junior developers. Continuously improve system architecture, performance, and reliability. Technical skills and experience: 4–5 years of professional experience as a Scala Developer. Strong understanding and hands-on experience with Play Framework, Cats, and Akka. Experience working with JavaScript for frontend integration or small UI tasks. …
teams and leadership with data and insight. Ability to thrive in a fast-paced, scale-up environment. Excellent communication and cross-functional collaboration skills. Scala is acting as an agency in recruiting for this position.
Lead Data Engineer - hybrid working, 3 days per week onsite. As a Lead Data Engineer, you'll be instrumental in driving innovation through advanced analytics, AI, cloud technologies, and data science. You will help build a new Data & Analytics function …
on exciting projects and collaborate with experienced developers. You will play a key role in designing, developing, and maintaining scalable software solutions using Java, Scala, AWS, and the Spring framework. Key responsibilities: Develop high-quality software applications using Java, Scala, and the Spring framework. Collaborate with cross-functional teams to understand … Computer Science, Software Engineering, or a related field. Proficiency in the Java programming language and experience with JVM-based frameworks such as Spring. Familiarity with the Scala programming language and its ecosystem. Basic understanding of cloud computing concepts and experience with AWS services (e.g. EC2, S3, Lambda, DynamoDB). Strong problem-solving …
Markets, or at least banking experience, is a must! Key expertise and experience we're looking for: Data engineering in Databricks: Spark programming with Scala, Python, and SQL, ideally with experience of Delta Lake or Databricks workflows, jobs, etc. Familiarity with Azure Data Lake: experience with data ingestion and ETL/… a bonus; experience in similar tools is valuable (Collibra, Informatica Data Quality/MDM/Axon, etc.). Data architecture experience is a bonus. Python, Scala, Databricks Spark, and PySpark with data engineering skills. Ownership and ability to drive implementation/solution design.