Databricks. Must have hands-on experience on at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically in big data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse, Pub/Sub/Kinesis … years’ experience in a similar role. Ability to lead and mentor the architects. Mandatory skills [at least 2 hyperscalers]: GCP, AWS, Azure, big data, Apache Spark, Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF …
applications through bug fixes and code refactoring. Leverage the latest data technologies and programming languages, including Python, Scala, and Java, along with systems like Spark, Kafka, and Airflow, within cloud services such as AWS. Ensure the ongoing maintenance, troubleshooting, optimization, and reliability of data systems, including timely resolution of … NoSQL databases (e.g., PostgreSQL, MongoDB) and data modeling principles. Proven ability to design, build, and maintain scalable data pipelines and workflows using tools like Apache Airflow or similar. Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Nice to have: hands-on experience with data warehouse and … lakehouse architectures (e.g., Databricks, Snowflake, or similar). Experience with big data frameworks (e.g., Apache Spark, Hadoop) and cloud platforms (e.g., AWS, Azure, or GCP). …
communication skills and demonstrated ability to engage with business stakeholders and product teams. Experience in data modeling, data warehousing (e.g., Snowflake, AWS Glue, EMR, Apache Spark), and working with data pipelines. Leadership experience, whether technical mentorship, team leadership, or managing critical projects. Familiarity with Infrastructure as Code …
Greater Bristol Area, United Kingdom Hybrid / WFH Options
ADLIB Recruitment | B Corp™
translate complex data concepts to cross-functional teams. Bonus points for experience with: DevOps tools like Docker, Kubernetes, CI/CD; big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams; genomic data formats and tools; cold and hot storage management, ZFS/RAID systems, or tape …
We are seeking a highly experienced Java Developer to join our team in Bournemouth. The ideal candidate will bring strong expertise in Java, Python, Spark, and cloud-based development, especially with AWS services. This is a hands-on development role focused on building scalable, resilient microservices for enterprise-grade … native technologies including AWS (RDS, S3, EKS, Lambda, etc.). Implement infrastructure-as-code using Terraform. Develop and debug large-scale distributed systems using Java, Spark, and Spring Boot. Ensure smooth integration with messaging systems like Kafka and IBM MQ. Optimize performance of large file handling and data-intensive applications …/CD. Contribute to platform resiliency and production support. Must-have skills: 10+ years of experience in backend development; proficient in Java, Python, and Apache Spark; strong hands-on experience with AWS services (EKS, ECS, Lambda, RDS, Aurora, SQS, SNS); deep knowledge of Spring/Spring Boot; familiarity …
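As an illustration of the kind of data-intensive batch work this listing describes, here is a minimal PySpark sketch that aggregates a large partitioned dataset on S3. The bucket, paths, and column names are hypothetical, and Python is used rather than Java purely for brevity; this is not taken from the employer's codebase.

```python
# Minimal sketch: aggregate a large S3 dataset with Spark.
# All paths and column names below are placeholder assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trade-aggregation").getOrCreate()

# Read a large partitioned dataset from S3 (path is a placeholder).
trades = spark.read.parquet("s3a://example-bucket/trades/")

# Aggregate notional value per symbol per day.
daily = (
    trades
    .groupBy("symbol", F.to_date("executed_at").alias("trade_date"))
    .agg(F.sum("notional").alias("total_notional"))
)

# Write back partitioned by day so downstream reads stay cheap.
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3a://example-bucket/daily-notional/"
)
```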
Jenkins, Bamboo, Concourse etc. Monitoring utilising products such as: Prometheus, Grafana, ELK, Filebeat etc. Observability - SRE. Big data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop ecosystem. Edge technologies e.g. NGINX, HAProxy etc. Excellent knowledge of YAML or similar languages. The following Technical Skills & Experience …
your cloud/platform engineering capabilities. Experience working with big data. Experience of data storage technologies: Delta Lake, Iceberg, Hudi. Knowledge and understanding of Apache Spark, Databricks or Hadoop. Ability to take business requirements and translate these into tech specifications. Competence in evaluating and selecting development tools and …
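For context on the storage technologies this listing names, below is a minimal sketch of writing a Delta Lake table and applying an ACID upsert with the open-source delta-spark package. The paths, session configuration, and data are placeholder assumptions, not details from the role.

```python
# Minimal Delta Lake sketch, assuming the delta-spark package is installed
# and on the Spark classpath; paths and data are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Write an initial Delta table (path is a placeholder).
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
df.write.format("delta").mode("overwrite").save("/tmp/users")

# Delta's ACID MERGE gives upsert semantics on the stored table.
users = DeltaTable.forPath(spark, "/tmp/users")
updates = spark.createDataFrame([(2, "bobby")], ["id", "name"])
(users.alias("t")
 .merge(updates.alias("u"), "t.id = u.id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

Iceberg and Hudi expose broadly similar table-format semantics through their own Spark integrations.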
Bristol, South West England, United Kingdom Hybrid / WFH Options
Mars
shape our digital platforms 🧠 What we’re looking for: Proven experience as a Data Engineer in cloud environments (Azure ideal); proficiency in Python, SQL, Spark, Databricks; familiarity with Hadoop, NoSQL, Delta Lake. Bonus: Azure Functions, Logic Apps, Django, CI/CD tools. 💼 What you’ll get from Mars: A …
Bristol, South West England, United Kingdom Hybrid / WFH Options
esure Group
software practices (SCRUM/Agile, microservices, containerization like Docker/Kubernetes). We'd also encourage you to apply if you possess: experience with Spark/Databricks; experience deploying ML models via APIs (e.g., Flask, Keras); startup experience or familiarity with geospatial and financial data. The Interview Process …
elevate technology and consistently apply best practices. Qualifications for Software Engineer: hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like Ansible, Chef, Puppet, etc. Experience with …
on experience in tools like Snowflake, dbt, SQL Server, and programming languages such as Python, Java, or Scala. Proficient in big data tools (e.g., Spark, Kafka), cloud platforms (AWS, Azure, GCP), and embedding AI/GenAI into scalable data infrastructures. Strong stakeholder engagement and the ability to translate technical …
Bristol, South West England, United Kingdom Hybrid / WFH Options
X4 Technology
with experience in AI frameworks (e.g., TensorFlow, PyTorch, Keras). Strong knowledge of data analysis, visualisation tools, and big data technologies (e.g., SQL, Hadoop, Spark). Experience with machine learning techniques, including NLP, recommendation algorithms, and computer vision. Familiarity with cloud platforms (Azure) for deploying AI solutions. Strong …
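To ground the frameworks this listing mentions, here is a minimal PyTorch sketch of a small classifier; the layer sizes and dummy input are arbitrary placeholders rather than anything specified by the role.

```python
# Minimal PyTorch classifier sketch; dimensions are invented for illustration.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_features: int = 16, classes: int = 3):
        super().__init__()
        # Two-layer feed-forward network with a ReLU nonlinearity.
        self.net = nn.Sequential(
            nn.Linear(in_features, 32),
            nn.ReLU(),
            nn.Linear(32, classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier()
logits = model(torch.randn(4, 16))  # batch of 4 dummy samples
print(logits.shape)  # torch.Size([4, 3])
```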
Bristol, South West England, United Kingdom Hybrid / WFH Options
Algo Capital Group
tools and alerting frameworks. SQL experience including queries/updates/table creation/basic database maintenance. Exposure to data technologies such as Kafka, Spark or Delta Lake is useful but not mandatory. Bachelor's degree in Computer Science, Engineering, or related technical field. This role offers competitive compensation …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
Bright Purple Resourcing
Azure data services. Deep understanding of data management practices: governance, security, quality, and compliance. Proficiency in SQL and familiarity with Azure DevOps Pipelines and Apache Spark. Excellent stakeholder engagement and communication skills. Relevant certifications (e.g., Azure Solutions Architect Expert or TOGAF) a plus. Must be eligible for SC …
extendable). Employment Type: Contract B2B {Inside IR35}. Roles & Responsibilities (10+ years' experience only): Exposure to cloud technologies; proficient in coding in Java, Python and Spark; hands-on experience working with AWS stack/services; hands-on experience with Java and Spring; working knowledge of AWS RDS/Aurora Database …
Work Visa, ICT, PSW, Global Mobility Visa and Business visa holders. Job Description: • Exposure to cloud technologies • Proficient in coding in Java, Python and Spark • Hands-on experience working with AWS stack/services • Hands-on experience with Java and Spring • Working knowledge of AWS RDS/Aurora Database …
/structured data handling. Ability to work independently and collaboratively in cross-functional teams. Nice to have: experience with big data tools such as Spark, Hadoop, or MapReduce; familiarity with data visualisation tools like QuickSight, Tableau, or Looker; exposure to microservice APIs and public cloud ecosystems beyond AWS. AWS …
drive team alignment. Work closely with stakeholders to translate business needs into scalable solutions. Tech environment includes Python, SQL, dbt, Databricks, BigQuery, Delta Lake, Spark, Kafka, Parquet, Iceberg. (If you haven't worked with every tool, that's totally fine; my client values depth of thinking and engineering craft …
Bristol, South West England, United Kingdom Hybrid / WFH Options
LHH
secure or regulated environments. Ingest, process, index, and visualise data using the Elastic Stack (Elasticsearch, Logstash, Kibana). Build and maintain robust data flows with Apache NiFi. Implement best practices for handling sensitive data, including encryption, anonymisation, and access control. Monitor and troubleshoot real-time data pipelines to ensure high … experience as a Data Engineer in secure, regulated, or mission-critical environments. Proven expertise with the Elastic Stack (Elasticsearch, Logstash, Kibana). Solid experience with Apache NiFi. Strong understanding of data security, governance, and compliance requirements. Experience building real-time, large-scale data pipelines. Working knowledge of cloud platforms (AWS … with a strong focus on data accuracy, quality, and reliability. Desirable (nice to have): background in defence, government, or highly regulated sectors; familiarity with Apache Kafka, Spark, or Hadoop; experience with Docker and Kubernetes; use of monitoring/alerting tools such as Prometheus, Grafana, or ELK; understanding of …
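As a sketch of the ingestion-and-indexing work this listing describes, the snippet below bulk-indexes documents into Elasticsearch with the official Python client. The host, index name, and documents are hypothetical stand-ins, not details from the role.

```python
# Minimal bulk-indexing sketch with the official elasticsearch Python client.
# Host, index name, and documents are placeholder assumptions.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

# Build a batch of dummy documents for a hypothetical index.
docs = [
    {"_index": "sensor-readings", "_source": {"sensor_id": i, "value": i * 0.1}}
    for i in range(100)
]

# helpers.bulk returns (number of successes, list of errors).
success, errors = bulk(es, docs)
print(f"Indexed {success} documents, {len(errors)} errors")
```

In a NiFi-fronted pipeline, a flow would typically deliver batches like this to Elasticsearch via a processor rather than a hand-rolled script; the client call above just makes the indexing step concrete.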
and migrating a host of application suites to a Kubernetes infrastructure environment. Requirements: Kubernetes implementation, Kubernetes-specific architecture, hands-on migration work, Kubernetes security; Spark S3 Engine; Terraform; Ansible; CI/CD; Hadoop; Linux/RHEL (on-prem background/container management); Grafana or Elasticsearch (for observability). Desirable …
with data warehousing concepts. Experience in Airflow or other workflow orchestrators. Familiarity with basic principles of distributed computing. Experience with big data technologies like Spark, Delta Lake or others. Proven ability to innovate and lead delivery of a complex solution. Excellent verbal and written communication, with a proven ability to communicate … to work with shifting deadlines in a fast-paced environment. Desirable qualifications: authoritative in ETL optimisation, designing, coding, and tuning big data processes using Spark; knowledge of big data architecture concepts like Lambda or Kappa; experience with streaming workflows to process datasets at low latencies; experience in managing data …
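For the low-latency streaming workflows this listing mentions, here is a minimal Spark Structured Streaming sketch that counts Kafka events per one-minute window. It assumes a local broker, a hypothetical topic name, and the spark-sql-kafka connector on the classpath.

```python
# Minimal Structured Streaming sketch; broker address and topic are
# hypothetical, and the spark-sql-kafka package is assumed to be available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Subscribe to a Kafka topic as an unbounded streaming source.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; cast to string and count per 1-minute window.
counts = (
    events.select(F.col("value").cast("string").alias("payload"),
                  F.col("timestamp"))
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Emit updated window counts to the console for demonstration.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```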
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Actica Consulting Limited
learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks … analytics platforms, e.g. relevant AWS and Azure platform services; data tools, hands-on experience with Palantir ESSENTIAL; data science approaches and tooling, e.g. Hadoop, Spark; data engineering approaches; database management, e.g. MySQL, Postgres; software development methods and techniques, e.g. Agile methods such as Scrum; software change management, notably familiarity …
Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow … etc. Experience with AWS cloud services: EC2, EMR, RDS, Redshift. Experience with stream-processing systems: Storm, Spark Streaming, etc. Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc. Salary: 30000 per annum + benefits. …
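Since several of these listings name Airflow as a workflow orchestrator, below is a minimal sketch of a two-task Airflow DAG, assuming Airflow 2.x; the DAG id and task logic are invented for illustration.

```python
# Minimal two-task Airflow DAG sketch (Airflow 2.x assumed).
# DAG id, schedule, and task bodies are placeholder assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder extract step; a real task would pull from a source system.
    return [1, 2, 3]

def load(ti):
    # Pull the extract result from XCom and "load" it.
    rows = ti.xcom_pull(task_ids="extract")
    print(f"Loaded {len(rows)} rows")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2  # run extract before load
```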
and implementation experience with Microsoft Fabric (preferred), along with familiarity with Azure Synapse and Databricks. Experience in core data platform technologies and methods including Spark, Delta Lake, Medallion Architecture, pipelines, etc. Experience leading medium to large-scale cloud data platform implementations, guiding teams through technical challenges and ensuring alignment … Strong knowledge of CI/CD practices for data pipelines, ensuring automated, repeatable, and scalable deployments. Familiarity with open-source data tools such as Spark, and an understanding of how they complement cloud data platforms. Experience creating and maintaining structured technical roadmaps, ensuring successful delivery and future scalability of …