Bristol, South West England, United Kingdom Hybrid / WFH Options
LHH
on data accuracy, quality, and reliability.
Desirable (Nice to Have):
- Background in defence, government, or highly regulated sectors
- Familiarity with Apache Kafka, Spark, or Hadoop
- Experience with Docker and Kubernetes
- Use of monitoring/alerting tools such as Prometheus, Grafana, or ELK
- Understanding of machine learning algorithms and data …
Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi …
other engineers on the team to elevate technology and consistently apply best practices.
Qualifications for Software Engineer:
- Hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc.
- Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like …
automation & configuration management:
- Ansible (plus Puppet, SaltStack), Terraform, CloudFormation
- NodeJS, React/MaterialUI (plus Angular), Python, JavaScript
- Big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark
- RedHat Enterprise Linux, CentOS, Debian or Ubuntu
- Java 8, Spring framework (preferably Spring Boot), AMQP - RabbitMQ
- Open source technologies
- Experience of …
Tableau, Looker, or QlikSense. Ability to create well-documented, scalable, and reusable data solutions.
Desirable Skills:
- Experience with big data technologies such as Hadoop, MapReduce, or Spark.
- Exposure to microservice-based data APIs.
- Familiarity with data solutions in other public cloud platforms.
- AWS certifications (e.g., Solutions …
understanding of Java and its ecosystems, including experience with popular Java frameworks (e.g. Spring, Hibernate).
- Familiarity with big data technologies and tools (e.g. Hadoop, Spark, NoSQL databases).
- Strong experience with Java development, including design, implementation, and testing of large-scale systems.
- Experience working on public sector projects …
structured data handling. Ability to work independently and collaboratively in cross-functional teams.
Nice to have:
- Experience with big data tools such as Spark, Hadoop, or MapReduce
- Familiarity with data visualisation tools like QuickSight, Tableau, or Looker
- Exposure to microservice APIs and public cloud ecosystems beyond AWS
- AWS certifications …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Hargreaves Lansdown Asset Management Limited
in production-grade code, with a focus on scalability and reliability.
- Experience with large-scale data analysis, manipulation, and distributed computing platforms (e.g., Hive, Hadoop).
- Familiarity with advanced machine learning methods, including neural networks, reinforcement learning, and other cutting-edge Gen AI approaches.
- Skilled in API development and …
Employment Type: Permanent, Part Time, Work From Home
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
production-grade code, with a focus on scalability and reliability.
- Experience with large-scale data analysis, manipulation, and distributed computing platforms (e.g., Hive, Hadoop).
- Familiarity with advanced machine learning methods, including neural networks, reinforcement learning, and other cutting-edge Gen AI approaches.
- Skilled in API …
Degree
Required technical and professional expertise:
- Design, develop, and maintain Java-based applications for processing and analyzing large datasets, utilizing frameworks such as Apache Hadoop, Spark, and Kafka.
- Collaborate with cross-functional teams to define, design, and ship data-intensive features and services.
- Optimize existing data processing pipelines for …
- … Information Technology, or a related field, or equivalent experience.
- Experience in Big Data Java development.
- In-depth knowledge of Big Data frameworks, such as Hadoop, Spark, and Kafka, with a strong emphasis on Java development.
- Proficiency in data modeling, ETL processes, and data warehousing concepts.
- Experience with data processing …