engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands-on knowledge …
Science, Data Science, Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., MySQL, PostgreSQL, MongoDB) and data warehousing …
tools and libraries (e.g. Power BI) Background in database administration or performance tuning Familiarity with data orchestration tools, such as Apache Airflow Previous exposure to big data technologies (e.g. Hadoop, Spark) for large-scale data processing Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions. Strong communication …
Familiarity with Data Mesh, Data Fabric, and product-led data strategies. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Technical Skills Proficiency in big data tools (Apache Spark, Hadoop). Programming knowledge (Python, R, Java) is a plus. Understanding of ETL/ELT, SQL, NoSQL, and data visualisation tools. Awareness of ML/AI integration into data architectures. …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
ADLIB Recruitment
Clear communicator, able to translate complex data concepts to cross-functional teams Bonus points for experience with: DevOps tools like Docker, Kubernetes, CI/CD Big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams Genomic data formats and tools Cold and hot storage management, ZFS/RAID systems, or tape storage AI/LLM tools to …
needs, entrepreneurial spirit Excellent verbal and written communication skills BS or MS degree in Computer Science or equivalent Nice to Have Experience in distributed computing frameworks such as Hive/Hadoop and Apache Spark Experience in developing Finance or HR related applications Experience with the following cloud services: AWS Elastic Beanstalk, EC2, S3, CloudFront, RDS, DynamoDB, VPC, ElastiCache, Lambda Working experience …
projects Skills & Experience: Proven experience as a Lead Data Solution Architect in consulting environments Expertise in cloud platforms (AWS, Azure, GCP, Snowflake) Strong knowledge of big data technologies (Spark, Hadoop), ETL/ELT, and data modelling Familiarity with Python, R, Java, SQL, NoSQL, and data visualisation tools Understanding of machine learning and AI integration in data architecture Experience with …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
and with customers Preferred Experience Degree in Computer Science or equivalent practical experience Commercial experience with Spark, Scala, and Java (Python is a plus) Strong background in distributed systems (Hadoop, Spark, AWS) Skilled in SQL/NoSQL (PostgreSQL, Cassandra) and messaging tech (Kafka, RabbitMQ) Experience with orchestration tools (Chef, Puppet, Ansible) and ETL workflows (Airflow, Luigi) Familiarity with cloud …
from you: Proficiency with the Python and SQL programming languages. Hands-on experience with cloud platforms like AWS, GCP, or Azure, and familiarity with big data technologies such as Hadoop or Spark. Experience working with relational databases and NoSQL databases. Strong knowledge of data structures, data modelling, and database schema design. Experience in supporting data science workloads and working …
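The relational modelling and schema-design skills this listing asks for can be sketched with Python's built-in sqlite3; the customers/orders model and all names below are invented for illustration, not taken from any listing:

```python
import sqlite3

# Minimal sketch of relational schema design: a one-to-many
# model between customers and orders (illustrative names only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        total       REAL NOT NULL
    );
""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(1, 10.0), (1, 32.5)],
)
# Aggregate spend per customer with a join.
row = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
""").fetchone()
print(row)  # ('Acme', 42.5)
```

The foreign key and the join are the core of the "data modelling" skill the snippet names: structure first, then queries that exploit it.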
Wales, Yorkshire, United Kingdom Hybrid / WFH Options
Made Tech Limited
as Code (IaC) and deploying infrastructure across environments Managing cloud infrastructure with a DevOps approach Handling and transforming various data types (JSON, CSV, etc.) using Apache Spark, Databricks, or Hadoop Understanding modern data system architectures (Data Warehouse, Data Lakes, Data Meshes) and their use cases Creating data pipelines on cloud platforms with error handling and reusable libraries Documenting and …
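The "handling and transforming various data types (JSON, CSV) with error handling" requirement can be sketched with only the Python standard library; the record shapes and the malformed row here are invented for illustration:

```python
import csv
import io
import json

# Sketch of a JSON-to-CSV transform with basic error handling:
# bad records are quarantined rather than crashing the pipeline.
raw = json.dumps([
    {"id": 1, "city": "Bristol"},
    {"id": 2, "city": "Manchester"},
    {"city": "Leeds"},  # malformed: missing "id"
])

rows, errors = [], []
for record in json.loads(raw):
    try:
        rows.append({"id": int(record["id"]), "city": record["city"]})
    except (KeyError, ValueError) as exc:
        errors.append((record, repr(exc)))  # keep bad rows for inspection

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["id", "city"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
print(f"{len(errors)} record(s) rejected")
```

In Spark or Databricks the same quarantine pattern usually appears as a "bad records" path or a separate error sink, but the shape of the logic is the same.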
DevOps tools (Git, Maven, Jenkins, CI/CD) Excellent communication and teamwork skills Bonus Points For Experience in payments, financial services, or fraud detection Familiarity with Big Data tools (Hadoop, Spark, Kafka) Exposure to cloud-native architecture (AWS, GCP, Azure) Understanding of TDD/BDD and modern testing frameworks What's On Offer Hybrid Working - Flexibility with 2 remote …
learning through internal and external training. What you'll bring Mandatory Proficient in either GCP (Google Cloud) or AWS Hands-on experience in designing and building data pipelines using Hadoop and Spark technologies. Proficient in programming languages such as Scala, Java, or Python. Experienced in designing, building, and maintaining scalable data pipelines and applications. Hands-on experience with Continuous …
Reading, England, United Kingdom Hybrid / WFH Options
HD TECH Recruitment
Services (e.g., Azure Data Factory, Synapse, Databricks, Fabric) Data warehousing and lakehouse design ETL/ELT pipelines SQL, Python for data manipulation and machine learning Big Data frameworks (e.g., Hadoop, Spark) Data visualisation (e.g., Power BI) Understanding of statistical analysis and predictive modelling Experience: 5+ years working with Microsoft data platforms 5+ years in a customer-facing consulting or …
distributed computing, TDD, and system design. What We're Looking For: Strong experience with Python, Spark, Scala, and Java in a commercial setting. Solid understanding of distributed systems (e.g. Hadoop, AWS, Kafka). Experience with SQL/NoSQL databases (e.g. PostgreSQL, Cassandra). Familiarity with orchestration tools (e.g. Airflow, Luigi) and cloud platforms (e.g. AWS, GCP). Passion for …
Terraform, Jenkins, Bamboo, Concourse etc. Monitoring utilising products such as: Prometheus, Grafana, ELK, Filebeat etc. Observability - SRE Big Data solutions (ecosystems) and technologies such as: Apache Spark and the Hadoop Ecosystem Edge technologies e.g. NGINX, HAProxy etc. Excellent knowledge of YAML or similar configuration languages The following Technical Skills & Experience would be desirable for a Data DevOps Engineer: JupyterHub Awareness …
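The monitoring and YAML skills named in this listing typically meet in tool configuration; a minimal, hypothetical `prometheus.yml` fragment scraping a single node exporter (the job name and target are invented) looks like:

```yaml
# Minimal Prometheus scrape configuration (illustrative values only).
global:
  scrape_interval: 15s   # how often Prometheus polls its targets

scrape_configs:
  - job_name: "node"                   # hypothetical job label
    static_configs:
      - targets: ["localhost:9100"]    # default node_exporter port
```

Grafana would then read these scraped metrics from Prometheus as a data source; the indentation-sensitive structure above is the kind of YAML fluency the listing asks for.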
with Spark. Experience building, maintaining, and debugging DBT pipelines. Strong proficiency in developing, monitoring, and debugging ETL jobs. Deep understanding of SQL and experience with Databricks, Snowflake, BigQuery, Azure, Hadoop, or CDP environments. Hands-on technical support experience, including escalation management and adherence to SLAs. Familiarity with CI/CD technologies and version control systems like Git. Expertise in …
Tableau) Machine Learning Fundamentals (e.g., Supervised, Unsupervised Learning) Machine Learning Algorithms (e.g., Regression, Classification, Clustering, Decision Trees, SVMs, Neural Networks) Model Evaluation and Validation Big Data Technologies (e.g., Spark, Hadoop - conceptual understanding) Database Querying (e.g., SQL) Cloud-based Data Platforms (e.g., AWS Sagemaker, Google AI Platform, Azure ML) Ethics in Data Science and AI Person Specification: Experience supporting data …
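The "Model Evaluation and Validation" item above boils down to scoring a model only on data it never trained on; that can be sketched without any ML library, using a synthetic dataset and a trivial threshold classifier (all data and the decision rule here are purely illustrative):

```python
import random

# Sketch of train/test evaluation: fit on one split, score on the
# held-out split. Labels are 1 when x > 5.0 on synthetic 1-D data.
random.seed(0)
data = [(x, int(x > 5.0)) for x in [random.uniform(0, 10) for _ in range(200)]]
random.shuffle(data)
split = int(0.8 * len(data))          # 80/20 train/test split
train, test = data[:split], data[split:]

# "Train": place the threshold midway between the two class means.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

# Validate on held-out data only.
correct = sum(1 for x, y in test if int(x > threshold) == y)
accuracy = correct / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Cross-validation and metrics beyond accuracy (precision, recall, AUC) extend this same pattern; the essential discipline is that `threshold` is never chosen by looking at `test`.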