best-of-breed Java toolsets - focused on microservices architectures, powerful front- and backend frameworks, RESTful services, and everything from NoSQL databases like MongoDB and Hadoop, high-performance data grids like Hazelcast to multi-node relational systems. You will be working in a Scrum Team of cross-functional skills in more »
industry experience Experience in distributed system design Experience with Pure/Alloy Working knowledge of open-source tools such as AWS Lambda, Prometheus, Spark, or Hadoop; Snowflake knowledge would be a plus. Additional Information Location: This role can be delivered in a hybrid nature from one of these offices: Dublin more »
programming language, ideally Python but can also be Java or C/C++ SQL experience Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau) Get in touch with Ella Alcott - Ella@engagewithus.com more »
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes) Ability to create data pipelines on a more »
Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes) Ability to create data pipelines on a more »
London, South East England, United Kingdom Hybrid / WFH Options
Careers at MI5, SIS and GCHQ
junior members of the team and influencing them with your vision. Our tech stacks vary between products (such as OracleDB, MongoDB, Elasticsearch and Hadoop for data storage) and a mixture of commercial-off-the-shelf products and custom applications. We embrace a DevSecOps (Development, Security, and Operations) mindset more »
Science or related field. 4+ years as a practical Data Engineer Proficiency in Python, Java, or Scala programming. Familiarity with big data frameworks (Spark, Hadoop, etc.) Nice to have: cloud migration experience with AWS, Azure or GCP If this role has piqued your interest and you would like to more »
Birmingham, Midlands, United Kingdom Hybrid / WFH Options
⭕️ Nimbus®
such as Python, C#, .NET and/or JavaScript is highly desirable. Experience with cloud platforms (e.g., Azure) and data technologies (e.g., SQL, NoSQL, Hadoop, Spark). PLEASE NOTE: You must have either UK citizenship or permanent leave to remain in the UK. Due to the high volume of more »
Manchester, England, United Kingdom Hybrid / WFH Options
Lorien
DynamoDB/etc.) Solid understanding of data governance principles and how to implement these across the business Knowledge of Big Data technology (Spark/Hadoop/etc.) Excellent communication skills across various levels of stakeholders Benefits: Salary available £120,000 Bonus scheme Enhanced pension contribution available Genuine opportunity to more »
Months Location: Milton Keynes Key responsibilities: Demonstrable experience as a Kafka Developer (ideally Kafka Streams). Hands-on experience in Big Data technologies (Hadoop, Hue, Hive, Impala, Spark) and strongest within Kafka. Knowledge and experience using key-value databases. Experience developing microservices using Spring. Design and more »
Day rate: £450 Job Purpose and Primary Objectives We are seeking a highly experienced Kafka Developer with expertise in Kafka Streams and Big Data technologies (Hadoop, Hue, Hive, Impala, Spark). The ideal candidate will have strong knowledge of key-value databases and experience in developing microservices using Spring. The … time data services for various 24/7 applications with high-performance requirements. Key Skills/Knowledge Solid experience in Big Data technologies, specifically Hadoop, Hive, Java and Spark/Scala. Advanced SQL knowledge for testing changes and replicating code functionality. Proficient with code repositories like Git and more »
integrity can be maintained as part of business improvement plans which affect the organisation Review, manage and lead the development of data frameworks (e.g. Hadoop) and analysis of data to ensure accuracy of sources and data resilience Communications and Engagement Identify and understand the business needs, prioritise, and design … recognising entities in free text Experience creating and developing SQL server queries and/or stored procedures Experience using and developing data frameworks (e.g. Hadoop) Experience developing and scripting dashboards and data visualisations using tools such as QlikView, QlikSense and Tableau Experience interpreting and analysing complex data sets from more »
Milton Keynes, Buckinghamshire, South East, United Kingdom
Maclean Moore Ltd
Months Location: Milton Keynes Key responsibilities: Demonstrable experience as a Kafka Developer (ideally Kafka Streams). Hands-on experience in Big Data technologies (Hadoop, Hue, Hive, Impala, Spark) and strongest within Kafka. Knowledge and experience using key-value databases. Experience developing microservices using Spring. Design and … data services for different applications, usually 24/7 applications with a big performance requirement. Key skills/knowledge/experience: Big Data Hadoop - solid Hive and Spark/Scala experience Advanced SQL knowledge - able to test changes and issues properly, replicating the code functionality into SQL more »
London, South East England, United Kingdom Hybrid / WFH Options
Solirius Consulting
or Django, Docker Experience working with ETL pipelines is desirable e.g. Luigi, Airflow or Argo Experience with big data technologies, such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality Preparing data for predictive and prescriptive modelling Hands on coding experience more »
ETL processes, and data warehousing. - Significant exposure and hands-on experience with at least 2 of the programming languages - Python, Java, Scala, GoLang. - Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks. - Experience working with open table/storage formats like Delta Lake, Apache Iceberg or Apache Hudi. more »
London, South East England, United Kingdom Hybrid / WFH Options
McGregor Boyall
models, ETL processes, and data warehousing solutions. Programming: Utilize Python, Java, Scala, or GoLang to build and optimize data pipelines. Distributed Processing: Work with Hadoop, Spark, and other platforms for large-scale data processing. Real-Time Data Streaming: Develop and manage pipelines using CDC, Kafka, and Apache Spark. Database more »
one leading business intelligence platform (e.g. Microsoft, Crystal, Qlik, SAP, Tableau). Good understanding of open source, big data, and cloud data platforms (e.g. Hadoop, Spark, Hive, Pentaho, AWS, Azure); given a business problem, you can analyse and evaluate options and recommend solutions. Proven experience in designing, building and more »
Must have 8+ years' experience with relational databases like Oracle, NoSQL databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other open source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management) Must have 3+ years more »