leader with deep experience and a hands-on approach. You bring:
- A track record of scaling and leading data engineering initiatives
- Excellent coding skills (e.g. Python, Java, Spark, PySpark, Scala)
- Strong AWS expertise and cloud-based data processing
- Advanced SQL/database skills
- Delivery management and mentoring abilities
Highly Desirable:
- Familiarity with tools like AWS Glue, Azure Data Factory, Databricks
statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.)
- Experience with the systems development lifecycle
- Hands-on coding experience with advanced Python (R and Scala will be considered as well)
- Experience with SQL and NoSQL databases (e.g., MSSQL, CosmosDB, Cassandra)
- Familiarity with testing techniques used to plan and execute tests of all components (functional and …
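To illustrate the "statistical tests and proper usage" requirement above, here is a minimal sketch of Welch's two-sample t statistic in standard-library Python. The function name and sample data are invented for the example; in practice you would reach for a tested library rather than hand-rolling this.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1 denominator)
    se = math.sqrt(va / na + vb / nb)                # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Invented example data: two small measurement samples.
a = [5.1, 4.9, 5.3, 5.0, 5.2]
b = [4.6, 4.4, 4.8, 4.5, 4.7]
t = welch_t(a, b)  # ≈ 5.0 for these samples
```

"Proper usage" then means checking the test's assumptions (independence, approximate normality for small samples) and comparing t against the t distribution with Welch-Satterthwaite degrees of freedom, which is omitted here for brevity.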
workflows. Working in a fast-paced environment where impact and outcomes matter more than process.
What We're Looking For:
- Solid experience with functional programming (Elixir is a bonus, but Scala, Go, Node.js, Haskell, Clojure, or F# are all great too)
- Strong knowledge of infrastructure at scale: AWS, Terraform, and container orchestration (Docker)
- Experience with queue management and observability
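The functional-programming style the listing asks for can be sketched in a few lines; this example is in Python rather than the languages named above, and the `compose` helper and sample strings are invented for illustration. The core idea carries over: small pure functions combined without mutating shared state.

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Pure building blocks combined into a pipeline, no shared mutable state.
clean = compose(str.lower, str.strip)

clean("  Hello  ")  # -> "hello"
```

In Elixir or F# the same shape would be written with the pipe operator (`|>`); the point is composing behaviour from pure functions rather than sequencing mutations.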
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
innovative technical projects. As part of this role, you will be responsible for some of the following areas:
- Design and build distributed data pipelines using languages such as Spark, Scala, and Java
- Collaborate with cross-functional teams to deliver user-centric solutions
- Lead on the design and development of relational and non-relational databases
- Apply Gen AI tools to boost … optimise large-scale data collection processes
- Support the deployment of machine learning models into production
To be successful in the role you will have:
- Experience creating scalable ETL jobs using Scala and Spark
- A strong understanding of data structures, algorithms, and distributed systems
- Experience working with orchestration tools such as Airflow
- Familiarity with cloud technologies (AWS or GCP)
- Hands-on experience with …
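The extract-transform-load pattern behind the pipeline work described above can be sketched in a few lines. This is plain Python rather than the Scala/Spark the role uses, and every function name, field, and data row is invented for the example; in a real pipeline each stage would be a Spark job reading from and writing to actual storage.

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV into row dicts (a DataFrame read in Spark)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: drop records with no amount, normalise case, derive pence."""
    return [
        {"user": r["user"].lower(), "amount_pence": int(round(float(r["amount"]) * 100))}
        for r in rows
        if r["amount"]
    ]

def load(rows, sink):
    """Load: append to a sink (a list here; a warehouse table in production)."""
    sink.extend(rows)
    return len(rows)

raw = "user,amount\nAlice,1.50\nBob,\nCara,2.25\n"
sink = []
load(transform(extract(raw)), sink)  # Bob's empty record is filtered out
```

Keeping each stage a pure function of its input is what makes jobs like this easy to test and to parallelise, which is the property Spark exploits at scale.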