engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands-on knowledge …
Science, Data Science, Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., MySQL, PostgreSQL, MongoDB) and data warehousing …
Familiarity with Data Mesh, Data Fabric, and product-led data strategies. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Technical Skills Proficiency in big data tools (Apache Spark, Hadoop). Programming knowledge (Python, R, Java) is a plus. Understanding of ETL/ELT, SQL, NoSQL, and data visualisation tools. Awareness of ML/AI integration into data architectures.
projects Skills & Experience: Proven experience as a Lead Data Solution Architect in consulting environments Expertise in cloud platforms (AWS, Azure, GCP, Snowflake) Strong knowledge of big data technologies (Spark, Hadoop), ETL/ELT, and data modelling Familiarity with Python, R, Java, SQL, NoSQL, and data visualisation tools Understanding of machine learning and AI integration in data architecture Experience with …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
and with customers Preferred Experience Degree in Computer Science or equivalent practical experience Commercial experience with Spark, Scala, and Java (Python is a plus) Strong background in distributed systems (Hadoop, Spark, AWS) Skilled in SQL/NoSQL (PostgreSQL, Cassandra) and messaging tech (Kafka, RabbitMQ) Experience with orchestration tools (Chef, Puppet, Ansible) and ETL workflows (Airflow, Luigi) Familiarity with cloud …
from you: Proficiency with the Python and SQL programming languages. Hands-on experience with cloud platforms like AWS, GCP, or Azure, and familiarity with big data technologies such as Hadoop or Spark. Experience working with relational databases and NoSQL databases. Strong knowledge of data structures, data modelling, and database schema design. Experience in supporting data science workloads and working …
learning through internal and external training. What you'll bring Mandatory Proficient in either GCP (Google) or AWS cloud Hands-on experience in designing and building data pipelines using Hadoop and Spark technologies. Proficient in programming languages such as Scala, Java, or Python. Experienced in designing, building, and maintaining scalable data pipelines and applications. Hands-on experience with Continuous …
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
Services (e.g., Azure Data Factory, Synapse, Databricks, Fabric) Data warehousing and lakehouse design ETL/ELT pipelines SQL, Python for data manipulation and machine learning Big Data frameworks (e.g., Hadoop, Spark) Data visualisation (e.g., Power BI) Understanding of statistical analysis and predictive modelling Experience: 5+ years working with Microsoft data platforms 5+ years in a customer-facing consulting or …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with orchestration tools (e.g. Airflow, Luigi) and cloud platforms (e.g. AWS, GCP). Passion for …
with Spark. Experience building, maintaining, and debugging DBT pipelines. Strong proficiency in developing, monitoring, and debugging ETL jobs. Deep understanding of SQL and experience with Databricks, Snowflake, BigQuery, Azure, Hadoop, or CDP environments. Hands-on technical support experience, including escalation management and adherence to SLAs. Familiarity with CI/CD technologies and version control systems like Git. Expertise in …
Azure Data Factory, Azure Functions, and Synapse Analytics. Proficient in Python and advanced SQL, including query tuning and optimisation. Hands-on experience with big data tools such as Spark, Hadoop, and Kafka. Familiarity with CI/CD pipelines, version control, and deployment automation. Experience using Infrastructure as Code tools like Terraform. Solid understanding of Azure-based networking and cloud …
SQL technologies (e.g. MS SQL, Oracle) NoSQL technologies (e.g. MongoDB, InfluxDB, Neo4J) Data exchange and processing tools (ETL, ESB, API) Programming experience (Python or similar) Big data tools (e.g. Hadoop stack) Knowledge of NLP, OCR, and Generative AI is advantageous Experience with containerisation (e.g. Docker) is beneficial Familiarity with defence or industrial sectors is a plus Benefits Package: Annual …
Guildford, Surrey, United Kingdom Hybrid / WFH Options
BAE Systems (New)
should take pride in the software you produce. Apache Maven Desirable Skills and Experience Industry experience of any of the following technologies/skills would be beneficial: Elasticsearch Docker Apache Hadoop, Kafka or Camel. JavaScript Knowledge of both Windows and Linux operating systems. Kubernetes NiFi NSQ Apache Ignite Arango Why BAE Systems? This is a place where you'll be …
Frimley, Surrey, United Kingdom Hybrid / WFH Options
BAE Systems (New)
should take pride in the software you produce. Apache Maven Desirable Skills and Experience Industry experience of any of the following technologies/skills would be beneficial: Elasticsearch Docker Apache Hadoop, Kafka or Camel. JavaScript Knowledge of both Windows and Linux operating systems. Kubernetes NiFi NSQ Apache Ignite Arango Why BAE Systems? This is a place where you'll be …
Stevenage, Hertfordshire, South East, United Kingdom Hybrid / WFH Options
Anson Mccade
Server, Oracle or similar - Experience with ETL, API, ESB or similar - Hands-on with coding languages such as Python, Bash or similar - Knowledge of Big Data tech such as Hadoop or similar - Knowledge in the following areas is highly desirable but not essential - AI, Natural Language Processing, OCR - Experience with containerisation via Docker is highly advantageous - Please note roles …
Supermicro is a Top Tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC and IoT/Embedded customers worldwide. We are amongst the fastest-growing companies among the Silicon Valley Top 50 technology firms. Our unprecedented global …
We are looking for someone with experience managing System Integrators, Data Warehouse, ETL, and Hadoop Implementation companies, cloud, and data alliances worldwide, with a focus on the EMEA markets. The Skills You'll Need: You have at least 10-15+ years of relevant partner management and sales experience. You are a team player and enjoy making others successful.
London, South East, England, United Kingdom Hybrid / WFH Options
Randstad Technologies
Advert Hadoop Engineer 6 Months Contract Remote working £300 to £350 a day A top tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools. The role will focus on designing, building and maintaining scalable data pipelines using the big data Hadoop ecosystem and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems. The successful candidate should have … for Scripting Apache Spark Prior experience of building ETL pipelines Data Modelling 6 Months Contract - Remote Working - £300 to £350 a day Inside IR35 If you are an experienced Hadoop engineer looking for a new role then this is the perfect opportunity for you. If the above seems of interest to you then please apply directly to the AD …