with tools like Ansible, Terraform, Docker, Kafka, Nexus. Experience with observability platforms: InfluxDB, Prometheus, ELK, Jaeger, Grafana, Nagios, Zabbix. Familiarity with Big Data tools: Hadoop, HDFS, Spark, HBase. Ability to write code in Go, Python, Bash, or Perl for automation. Work Experience: 5-7+ years of proven experience More ❯
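Automation scripting of the kind named above (Go, Python, Bash, or Perl) typically means small glue scripts. A minimal sketch in Python, assuming a hypothetical log format of `timestamp time LEVEL message`:

```python
from collections import Counter

def count_log_levels(lines):
    """Tally log levels (INFO, ERROR, ...) from lines shaped like
    '2024-01-01 12:00:05 ERROR connection refused'."""
    levels = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 3:
            levels[parts[2]] += 1
    return levels

sample = [
    "2024-01-01 12:00:00 INFO service started",
    "2024-01-01 12:00:05 ERROR connection refused",
    "2024-01-01 12:00:06 ERROR retry failed",
]
print(count_log_levels(sample))  # Counter({'ERROR': 2, 'INFO': 1})
```

The log layout and field positions here are invented for illustration; a real automation script would match the format its service actually emits.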
automation & configuration management: Ansible (plus Puppet, SaltStack), Terraform, CloudFormation. Node.js, React/Material UI (plus Angular), Python, JavaScript. Big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark. Red Hat Enterprise Linux, CentOS, Debian or Ubuntu. Java 8, Spring framework (preferably Spring Boot), AMQP - RabbitMQ. Open source technologies. Experience of More ❯
other engineers on the team to elevate technology and consistently apply best practices. Qualifications for Software Engineer: Hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like More ❯
East London, London, United Kingdom Hybrid / WFH Options
McGregor Boyall Associates Limited
algorithms and training techniques . Experience deploying models in production environments. Nice to Have: Experience in GenAI/LLMs Familiarity with distributed computing tools (Hadoop, Hive, Spark). Background in banking, risk management, or capital markets . Why Join? This is a unique opportunity to work at the forefront More ❯
Tableau, Looker, or QlikSense . Ability to create well-documented, scalable, and reusable data solutions. Desirable Skills Experience with big data technologies such as Hadoop, MapReduce, or Spark. Exposure to microservice-based data APIs . Familiarity with data solutions in other public cloud platforms . AWS certifications (e.g., Solutions More ❯
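The MapReduce model listed alongside Hadoop above can be illustrated with a toy, single-process word count in Python. This is a sketch of the programming model only, not a distributed implementation:

```python
from collections import defaultdict

def map_phase(doc):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for word in doc.lower().split():
        yield word, 1

def reduce_phase(pairs):
    # Group by key and sum the counts, as a reducer would.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["big data big ideas", "data pipelines"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(pairs))  # {'big': 2, 'data': 2, 'ideas': 1, 'pipelines': 1}
```

In a real Hadoop or Spark job the map and reduce phases run on partitions spread across a cluster, with a shuffle step grouping keys between them; the in-memory dictionary here stands in for that shuffle.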
experience in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Experience in database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) Experience in consulting, design, and implementation of serverless distributed solutions Experience in software development with object-oriented language PREFERRED QUALIFICATIONS AWS More ❯
Java or Python); Software collaboration and revision control (e.g., Git or SVN). Desired skills and experiences: ElasticSearch/Kibana Cloud computing (e.g., AWS) Hadoop/Spark etc. Graph Databases Educational level: Master's degree More ❯
Master's degree in statistics, data science, or an equivalent quantitative field. - Experience using Cloud Storage and Computing technologies such as AWS Redshift, S3, Hadoop, etc. - Experience with theory and practice of information retrieval, data science, machine learning and data mining. - Experience with theory and practice of design of More ❯
experience in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Experience in database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) Experience in consulting, design and implementation of serverless distributed solutions Experience in software development with object-oriented language AWS experience preferred More ❯
London, South East England, United Kingdom Hybrid / WFH Options
Kantar Media
broad IT skill set, including hands-on experience with Linux, AWS, Azure, Oracle 19 (admin), Tomcat, UNIX tools, Bash/sh, SQL, Python, Hive, Hadoop/HDFS, and Spark. Work within a modern cloud DevOps environment using Azure, Git, Airflow, Kubernetes, Helm, and Terraform. Demonstrate solid knowledge of computer More ❯
understanding of Java and its ecosystems, including experience with popular Java frameworks (e.g. Spring, Hibernate). Familiarity with big data technologies and tools (e.g. Hadoop, Spark, NoSQL databases). Strong experience with Java development, including design, implementation, and testing of large-scale systems. Experience working on public sector projects More ❯
Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Data Inc. (UK) Ltd
contractors. Skill Set & Experience: We are specifically looking for a Scala Data Engineer, not an application developer. The candidate must have experience migrating from Hadoop to the Cloud using Scala. Strong experience in Data Pipeline creation is essential. Candidates should have Big Data experience. Please ensure they … similar Data Engineering role before sharing their details with us. Keywords for Search: When reviewing CVs, please look for relevant technologies such as: Spark, Hadoop, Big Data, Scala, Spark-Scala, Data Engineer, ETL, AWS (S3, EMR, Glue ETL). Interview Process: The client will conduct an interview round that More ❯
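The extract-transform-load pattern this listing asks for can be sketched in a few lines of Python. This is a toy in-memory example with invented record fields; a real Spark or Glue job would apply the same stages to distributed datasets:

```python
def extract(rows):
    # Extract: parse raw CSV-style strings into records.
    return [dict(zip(("id", "amount"), r.split(","))) for r in rows]

def transform(records):
    # Transform: cast types and filter out invalid amounts.
    out = []
    for rec in records:
        amount = float(rec["amount"])
        if amount >= 0:
            out.append({"id": rec["id"], "amount": amount})
    return out

def load(records, target):
    # Load: append to the target store (a list standing in for a table).
    target.extend(records)
    return len(records)

table = []
raw = ["a1,10.5", "a2,-3.0", "a3,7.25"]
loaded = load(transform(extract(raw)), table)
print(loaded)  # 2
```

The separation into three functions mirrors how pipeline stages are usually kept independent so each can be tested, retried, or rescheduled on its own.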
West Midlands, England, United Kingdom Hybrid / WFH Options
Aubay UK
like Tableau and Power BI. Proficiency in analytics platforms such as SAS and Python. Familiarity with Amazon Elastic File System (EFS), S3 Storage, and Hadoop Distributed File System (HDFS). Key Role Responsibilities Lead the design and development of large-scale data solutions, ensuring they meet business objectives and More ❯
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Investigo
visualisations, ML model interpretation, and KPI tracking. Deep knowledge of feature engineering, model deployment, and MLOps best practices. Experience with big data processing (Spark, Hadoop) and cloud-based data science environments. Other: Ability to integrate ML workflows into large-scale data pipelines. Strong experience in data preprocessing, feature selection More ❯
at the heart of this business, and you can expect to work with a cutting-edge range of technologies, including big data tools (Spark, Hadoop) and cloud platforms (Microsoft Azure, AWS). If you are eager to grow in these areas, comprehensive, top-tier training will be provided. Key More ❯
requiring data analysis and visual support. Skills: • Experienced in either programming languages such as Python and/or R, big data tools such as Hadoop, or data visualization tools such as Tableau. • The ability to communicate effectively in writing, including conveying complex information and promoting in-depth engagement on More ❯
data. Experience with Linux and cloud environments. Data Visualisation Technologies (e.g. Amazon QuickSight, Tableau, Looker, QlikSense). Desirable experience: Familiarity with large data techniques (Hadoop, MapReduce, Spark, etc.) Familiarity with providing data via a microservice API. Experience with other public cloud data lakes. AWS Certifications (particularly Solution Architect Associate More ❯
understanding of data and creation of reports and actionable intelligence. Required qualifications to be successful in this role: Data Analysis experience using SQL, ideally in Hadoop or other Big Data environments. Experience with ETL, and with ETL projects where you have been involved in data mappings and transformations. Analytical problem More ❯
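SQL-based data analysis of the kind described above can be tried locally with Python's built-in sqlite3 module. A minimal sketch, with a table and columns invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("alice", 10.0), ("bob", 5.0), ("alice", 2.5)],
)

# Aggregate spend per user, largest first.
rows = conn.execute(
    "SELECT user, SUM(amount) FROM events GROUP BY user ORDER BY 2 DESC"
).fetchall()
print(rows)  # [('alice', 12.5), ('bob', 5.0)]
conn.close()
```

The same GROUP BY/aggregate pattern carries over directly to Hive or Spark SQL on a Hadoop cluster, where the engine rather than the query changes.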
tools Experience in scripting for automation (e.g. Python) and advanced SQL skills. Experience using Cloud Storage and Computing technologies such as AWS Redshift, S3, Hadoop, etc. PREFERRED QUALIFICATIONS A master's degree in a relevant field. More than 6 years of experience in related fields. Demonstrated track record of More ❯
Experience with programming, ideally Python, and the ability to quickly pick up handling large data volumes with modern data processing tools, e.g. by using Hadoop/Spark/SQL. Experience with or ability to quickly learn open-source software including machine learning packages, such as Pandas, scikit-learn, along More ❯
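The scikit-learn-style workflow referenced above (fit a model on training data, then predict on new data) can be shown with a tiny pure-Python classifier. The class name mirrors scikit-learn's NearestCentroid, but this is a standalone sketch, not the library implementation:

```python
import math

class NearestCentroid:
    """Minimal classifier with a scikit-learn-style fit/predict interface."""

    def fit(self, X, y):
        # Accumulate per-label coordinate sums and counts.
        sums, counts = {}, {}
        for point, label in zip(X, y):
            acc = sums.setdefault(label, [0.0] * len(point))
            for i, v in enumerate(point):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        # Each class is represented by the mean of its points.
        self.centroids_ = {
            label: [v / counts[label] for v in acc]
            for label, acc in sums.items()
        }
        return self

    def predict(self, X):
        # Assign each point the label of the nearest class centroid.
        return [
            min(self.centroids_,
                key=lambda c: math.dist(self.centroids_[c], p))
            for p in X
        ]

X = [(0, 0), (1, 0), (10, 10), (11, 10)]
y = ["low", "low", "high", "high"]
model = NearestCentroid().fit(X, y)
print(model.predict([(0.5, 0.2), (10.5, 9.8)]))  # ['low', 'high']
```

In practice the listing's stack (Pandas for the data handling, scikit-learn for the estimator, Spark for volume) replaces each hand-rolled piece here with a library equivalent.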