City of London, London, United Kingdom Hybrid / WFH Options
BeTechnology Group
systems. Deep knowledge of distributed and scalable systems, including proficiency with PostgreSQL, Ray, RabbitMQ, and Cassandra. Familiarity with big data technologies such as Hadoop, Spark, or Kafka. Experience with CI/CD. Strong problem-solving skills and the ability to troubleshoot complex issues in distributed systems. Excellent communication …
Cambridge, Cambridgeshire, United Kingdom Hybrid / WFH Options
BeTechnology Group
Python development. Deep knowledge of distributed and scalable systems, including proficiency with PostgreSQL, RabbitMQ, and Cassandra. Familiarity with big data technologies such as Hadoop, Spark, or Kafka. Strong problem-solving skills and the ability to troubleshoot complex issues in distributed systems. Excellent communication and collaboration skills, with the …
developing real-time analytics solutions, preferably with experience in time-series databases. • Proficiency in technologies relevant to real-time analytics, such as KX, Kafka, Spark, Flink, and real-time visualization tools. • Demonstrated ability to lead and mentor software engineering teams. • Excellent problem-solving skills and the ability to work …
designing and building robust, scalable, distributed data systems and pipelines, using open-source and public cloud technologies. Strong experience with data orchestration tools, e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies, e.g. dbt, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL …). Knowledge of event-driven architectures and streaming technologies, e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public cloud environments, e.g. AWS, GCP, Azure, Terraform. Strong knowledge of software engineering practices, e.g. testing, CI/CD (Jenkins, GitHub Actions), agile development, git/version control, containers, etc. …
multithreading, database access, performance tuning and design patterns. Experience in a diverse set of technologies including SQL, Spring, Spring Boot, Hibernate, JPA, JUnit, Mockito, Apache Spark, Storm and related technologies. Practical experience in developing software products/solutions that are deployed on cloud (as PaaS, SaaS) using a …
This is a fully remote position; you must be currently based in the UK to be considered. Skills/Technology: Python, Big Data tools (Spark, Hadoop, Flink), Data Pipelines/ETL, Django/Flask …
an Architect and excellent knowledge of Big Data -Excellent experience across Azure -Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc. -Excellent experience of ETL, data warehousing and handling a variety of data types -Very strong knowledge of database technologies such as NoSQL, Relational …
pertains to data storage and computing • Experience with data modeling, warehousing and building ETL pipelines • Experience with big data technologies such as Hadoop, Hive, Spark, EMR • Experience programming with at least one programming language such as C++, C#, Java, Python, Golang, PowerShell, Ruby • Experience with non-relational databases …
Birmingham, England, United Kingdom Hybrid / WFH Options
Xpertise Recruitment
using the tech you think is required! Skills desired/what you will learn: Microsoft Azure, Azure SQL, Microsoft Fabric, Delta Lake, Databricks and Spark, Statistical Modelling, Azure ML Studio, Python and familiarity with libraries and frameworks for data analysis and machine learning (e.g. TensorFlow, scikit-learn, Pandas …
complex issues they are facing. Carry out data-driven analysis and craft solutions to resolve business problems. Artificial Intelligence and data science approaches (Python, R, MATLAB, Spark, etc.). Database technologies such as Hadoop. Tools that expand the company's toolkit, advancing their ability to serve clients. Experience needed in a …
end ownership • Python or similar (Ruby or Node) or another functional language • JavaScript and associated frameworks, preferably Vue, or similar • Cloud technologies • SQL (advantageous) • Spark (advantageous) • Docker/Kubernetes (advantageous) • MongoDB, SQL, Postgres & Snowflake (advantageous) • Developing online, cloud-based SaaS products • Leading and building scalable architectures and distributed systems …
EXPERIENCE: Experience using JavaScript or Python. Experience deploying software into the cloud and on-premise. Developing software products. EKS, Kubernetes, OpenSearch/Elasticsearch, MongoDB, Spark or NiFi. Experience with microservices architecture. Experience with AI/ML systems. TO BE CONSIDERED… Please either apply by clicking online or emailing …
of databases. Snowflake is widely used, as are Docker and Kubernetes for containerisation. ETL and ELT tech are also used every day, primarily Airflow, Spark, Hive and a lot more. You’ll need to come from a strong academic background with some commercial experience in a data-heavy software …
working with AWS technologies such as Lambda, ECS Fargate, API Gateway, RDS, DynamoDB, EMR; building customer-facing applications and APIs; building data pipelines using Spark + Scala that process TB of data per day; working with customers to understand the business context of new features; participating in design reviews …
the cloud, ideally including the use of scalable technologies such as cluster computing and containerization. Experience working in parallel/scalable languages such as Spark or Dask. Experience working with knowledge graphs and graph neural networks. Biotech or biopharma experience, ideally within R&D. Experience in a startup or …
DW/BI systems · Demonstrated ability in data modeling, ETL development, and data warehousing · Strong experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) · Expertise in a BI solution like Power BI · Hands-on experience in modelling databases (particularly NoSQL), working on indexes, materialized views, performance tuning … with impressive visualization (Power BI) · Experience in building large-scale DW/BI systems for B2B SaaS companies · Experience with open-source tools like Apache Flink and AWS tools like S3, Redshift, EMR and RDS · Experience with AI/Machine Learning and Predictive Analytics · Experience in developing global products …
ML Engine, Azure Data Lake, Azure Databricks or GCP Cloud Dataproc. Familiarity with big data technologies and distributed computing frameworks, such as Hadoop, Spark, or Apache Flink. Experience scaling an “API ecosystem”, designing and implementing “API-first” integration patterns. Experience working with authentication and authorisation protocols …
Experience managing engineering teams across multiple disciplines. Comprehensive understanding of cloud solution application. Demonstrated experience building and maintaining production-level enterprise solutions. Experience of Spark, Scala or similar technologies. Build solutions using optimised code and repeatable, reusable techniques. An understanding of DevOps including Continuous Integration, Automation, Infrastructure as Code …
wrangling, modelling, feature engineering and deployment. What’s equally relevant is programming knowledge and a gift for coding using R, Python, SQL, SAS or Spark. You’ll also need the capability to deliver projects, collaborate with operational teams and be able to explain technical concepts to non-specialists. What …
Sheffield, England, United Kingdom Hybrid / WFH Options
Undisclosed
knowledge of security principles and best practices for cloud-based solutions. Preferred Skills: Certification in cloud platforms. Experience with big data technologies such as Apache Hadoop, Spark, or Kafka. Knowledge of data governance and compliance frameworks. Familiarity with DevOps practices and tools (e.g. Git, Jenkins, Terraform). HSBC …
MLOps experience. Experience with cloud computing platforms such as AWS, Azure, or GCP (Google Cloud Platform). Familiarity with big data technologies such as Apache Hadoop, Spark, or Kafka. Experience deploying machine learning models in production environments. Contributions to open-source machine learning projects or research publications in …