mechanism like Flume & Kafka.
- Data modelling – dimensional & transactional modelling using RDBMS, NoSQL and Big Data technologies.
- Data visualization – tools like Tableau.
- Big data – Hadoop ecosystem; distributions like Cloudera/Hortonworks; Pig and Hive.
- Data processing frameworks – Spark & Spark Streaming.
- Hands-on experience with multiple databases like PostgreSQL, Snowflake, Oracle and MS SQL Server, plus NoSQL stores (HBase/Cassandra, MongoDB).
Reading, Berkshire, South East, United Kingdom Hybrid / WFH Options
Bowerford Associates
experience. ETL tools such as Azure Data Factory (ADF) and Databricks, or similar. Data lakes: Azure Data Lake, Delta Lake, or Databricks Lakehouse. Certifications: AWS, Azure, or Cloudera certifications are a plus. To be considered for this role you MUST have in-depth experience of Hadoop and HDFS. The role comes with an extensive benefits package including a
Latest Data Science platforms (e.g., Databricks, Dataiku, AzureML, SageMaker) and frameworks (e.g., TensorFlow, MXNet, scikit-learn) Software engineering practices (coding standards, unit testing, version control, code review) Hadoop distributions (Cloudera, Hortonworks), NoSQL databases (Neo4j, Elastic), streaming technologies (Spark Streaming) Data manipulation and wrangling techniques Development and deployment technologies (virtualisation, CI tools like Jenkins, configuration management with Ansible, containerisation with Docker
and AI solutions. Experience with cloud platforms, preferably IBM Cloud. Contributions to open-source projects or personal projects demonstrating Big Data and Java development skills. Relevant certifications such as Cloudera Certified Associate (CCA) or Hortonworks Certified Developer (HCD) are considered a plus. By joining IBM's Public Sector team as a Big Data Java Developer, you'll have the opportunity
City of London, London, United Kingdom Hybrid / WFH Options
Tec Partners
or Ansible. Comfortable working with cloud platforms (e.g., AWS, Azure, GCP) and container orchestration tools like Kubernetes. Excellent troubleshooting skills and a collaborative approach. Bonus: Experience with Cloudera, Hadoop, CDSW, or CML is a plus. What's on Offer: Flexible hybrid working arrangements Core benefits including private healthcare, dental, life assurance and pension Optional benefits including health cash
end users or large data sets with 10M+ database records; this is a very large Big Data platform. Experience building REST services (orchestration layer) on CRUD data services based on the Cloudera Hadoop stack, with an emphasis on performance optimization. Understanding of how to secure data in a REST architecture. Knowledge of scaling web applications, including load balancing, caching, indexing, normalization, etc. Proficiency
Apache Hadoop We are looking for open-source contributors to Apache projects who have an in-depth understanding of the code behind the Apache ecosystem, have experience with Cloudera or a similar distribution, and possess in-depth knowledge of the big data tech stack. Requirement: Experience of platform engineering along with application engineering (hands-on) Experience in design of an open