junior members of the team and influencing them with your vision. Our tech stacks vary between products (such as OracleDB, MongoDB, Elasticsearch and Hadoop for data storage) and include a mixture of commercial off-the-shelf products and custom applications. We embrace a DevSecOps (Development, Security, and Operations) mindset more »
/ELT tools. Experience with NoSQL-type environments, Data Lakes, Lakehouses (Cassandra, MongoDB or Neptune). Experience with distributed storage and processing engines such as Apache Hadoop and Apache Spark. Experience with message brokering/stream processing services such as Apache Kafka, Confluent, Azure Stream Analytics. Experience in Test Driven Development (TDD) and more »
or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands-on coding experience more »
such as TensorFlow, PyTorch, or Scikit-learn. Strong knowledge of statistical modelling, data mining, and data visualization techniques. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure). Strong problem-solving skills and the ability to think critically and creatively. Excellent analytical skills more »
of technology to automate data pipelines and build analytical warehouses · Deep understanding of cloud-based data platforms (Azure SQL DB, Azure Synapse, ADLS, AWS, Hadoop, Spark, Snowflake, NoSQL etc.) · Proficient scripting in programming languages such as Java, Python, Scala · Expert in SQL · Machine Learning · Good basic understanding of more »
London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
models, ETL processes, and data warehousing solutions. Programming: Utilize Python, Java, Scala, or GoLang to build and optimize data pipelines. Distributed Processing: Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database more »
TensorFlow, PyTorch). Solid understanding of ML and data pipeline architectures and best practices. Experience with big data technologies and distributed computing (e.g., Spark, Hadoop) is a plus. Proficient in SQL and experience with relational databases. Strong analytical and problem-solving skills, with a keen attention to detail. Knowledge more »
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
and NoSQL databases • Programming languages and frameworks such as Spark or Python • Amazon Web Services, Microsoft Azure or Google Cloud and distributed processing technologies such as Hadoop Benefits: • Base Salary: £45,000 - £75,000 (DoE) • Discretionary Bonus: Circa 10% per annum • DV Bonus: Circa £5,000 • Flex Fund: £5,000 • Health: Private more »
London, England, United Kingdom Hybrid / WFH Options
Global Relay
JMeter or similar tools Web services technology such as REST, JSON or Thrift Testing web applications with Selenium WebDriver Big data technology such as Hadoop, MongoDB, Kafka or SQL Network principles and protocols such as HTTP, TLS and TCP Continuous integration systems such as Jenkins or Bamboo Continuous delivery more »
scaling, and troubleshooting of cloud systems. - Operational experience running a 24x7 production infrastructure at scale. - Proficiency working with data structures, schemas, and technologies like Hadoop, Hive, Redis, and MySQL. - Experience in using cloud-native services like GKE, EKS, AWS/GCP load balancing, AWS/GCP cloud storage platforms more »
South East London, England, United Kingdom Hybrid / WFH Options
Counter Terrorism Police
following: .NET (VB, C#, ASP.NET, .NET CORE) MVC Framework Python JavaScript (REACT, Bootstrap Frameworks) Database design SQL/SQL Server NoSQL technologies e.g., MongoDB, Hadoop, etc. If you’re the right person for the role, you’ll bring experience of working on a range of applications across the development more »
Testing performance with JMeter or similar tools. Web services technology such as REST, JSON or Thrift. Testing web applications with Selenium WebDriver. Big data technology such as Hadoop, MongoDB, Kafka or SQL. Network principles and protocols such as HTTP, TLS and TCP. Continuous integration systems such as Jenkins or Bamboo. Continuous delivery concepts. What you more »
standard methods and tooling used across the engineering team. Requirements: 5-7 years of Java experience. Capital markets front office experience. Experience working with Data Lake (Hadoop) consumption, specifically Hive experience. Kafka experience. Rules engine experience (ideally open source/vendor products, e.g. Drools or Camunda). Unix scripting knowledge. Markets Regulatory/Trade control more »
Key Skills: 3+ years of Python experience. Highly statistical and analytical. Exposure to Google Cloud Platform (BigQuery, GCS, Datalab, Dataproc, Cloud ML desirable). Spark & Hadoop experience. Strong communication skills. Good problem-solving skills. Qualifications: Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science, Engineering … and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau). This is a permanent position, and offers flexibility with hybrid working, 2-3 days per week in the office, depending on workload more »
London, England, United Kingdom Hybrid / WFH Options
Global Relay
a Global Relay DevOps engineer you will be integrated with a software engineering team to develop on-premise ('on-prem') solutions, including working with Hadoop-based technologies. Your role will involve designing, implementing and supporting automated, scalable solutions. Your contribution will have an immediate impact by enabling efficient delivery … them from reoccurring. Deployments: Writing and running deployment automation tools using Helm, Ansible, or other configuration management systems. Platform Integration: With technologies such as Hadoop and Kubernetes. Some of the technologies that you will interact with include: Containerisation and virtualisation: Docker, Kubernetes, VMWare. Operating Systems: Linux. Build and deployment … Jenkins, Bitbucket, Maven, Helm. Instrumentation and monitoring: Loki, Prometheus, Grafana, Mimir. Languages and frameworks: Bash, Java, Groovy, Go, Python. Big data technologies: Cassandra, ArangoDB, Hadoop, Kafka, MongoDB, Ceph. Where you have knowledge gaps, training and mentoring will be provided. About You: You have an automation-first mindset. You enjoy more »
responsibilities will include building microservices, using Docker and Kubernetes and 3rd-party API integrations. You will also be working with big data technologies like Hadoop, Kafka and Cassandra. This Senior Java Engineer (Multithreading) role will be a strong fit if you: Have extensive Core Java, multithreading, low-latency, concurrency experience. Enjoy … working with big data technologies like Hadoop (or if you are keen to learn!). Have messaging experience with Kafka, RabbitMQ or MongoDB. As the Senior Java Engineer, you will be a strong advocate for modern development ways of working including pair programming, Agile and BDD/TDD. This company (during more »