technical concepts to non-technical stakeholders. Team Player: Ability to work effectively in a collaborative team environment, as well as independently. Preferred Qualifications: Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). Familiarity with AWS and its data services (e.g., S3, Athena, AWS Glue). Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Knowledge of containerization …
Azure, or Google Cloud Platform (GCP). Strong proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or Oracle. Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one other programming language …
of the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and …
engineering, architecture, or platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design and implementation. Hands-on knowledge …
Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building and leadership. You must be: Willing to work on client sites, potentially for extended periods. Willing to …
in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google …
Science, Data Science, Engineering, or a related field. Strong programming skills in languages such as Python, SQL, or Java. Familiarity with data processing frameworks and tools (e.g., Apache Spark, Hadoop, Kafka) is a plus. Basic understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of database systems (e.g., MySQL, PostgreSQL, MongoDB) and data warehousing …
Snowflake. Understanding of cloud platform infrastructure and its impact on data architecture. Data Technology Skills: A solid understanding of big data technologies such as Apache Spark, and knowledge of the Hadoop ecosystem. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT processes, SQL, and NoSQL databases is a nice-to-have, providing a …
tools and libraries (e.g., Power BI). Background in database administration or performance tuning. Familiarity with data orchestration tools, such as Apache Airflow. Previous exposure to big data technologies (e.g., Hadoop, Spark) for large data processing. Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions. Strong communication …
DataStage, Talend and Informatica. Ingestion mechanisms like Flume & Kafka. Data modelling – dimensional & transactional modelling using RDBMS, NoSQL and Big Data technologies. Data visualization – tools like Tableau. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks – Spark & Spark Streaming. Hands-on experience with multiple databases like PostgreSQL, Snowflake, Oracle, MS SQL Server …
7+ years in data architecture and solution design, and a history of large-scale data solution implementation. Technical Expertise: Deep knowledge of data architecture principles, big data technologies (e.g., Hadoop, Spark), and cloud platforms like AWS, Azure, or GCP. Data Management Skills: Advanced proficiency in data modelling, SQL/NoSQL databases, ETL processes, and data integration techniques. Programming & Tools …
Statistics, Maths or similar Science or Engineering discipline. Strong Python and other programming skills (Java and/or Scala desirable). Strong SQL background. Some exposure to big data technologies (Hadoop, Spark, Presto, etc.) NICE TO HAVES OR EXCITED TO LEARN: Some experience designing, building and maintaining SQL databases (and/or NoSQL). Some experience with designing efficient physical data …
Familiarity with Data Mesh, Data Fabric, and product-led data strategies. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Technical Skills: Proficiency in big data tools (Apache Spark, Hadoop). Programming knowledge (Python, R, Java) is a plus. Understanding of ETL/ELT, SQL, NoSQL, and data visualisation tools. Awareness of ML/AI integration into data architectures.
Docker. Experience with NLP and/or computer vision. Exposure to cloud technologies (e.g., AWS and Azure). Exposure to Big data technologies. Exposure to Apache products, e.g., Hive, Spark, Hadoop, NiFi. Programming experience in other languages. This is not an exhaustive list, and we are keen to hear from you even if you don't tick every box. The …
Bristol, Avon, South West, United Kingdom Hybrid / WFH Options
ADLIB Recruitment
Clear communicator, able to translate complex data concepts to cross-functional teams. Bonus points for experience with: DevOps tools like Docker, Kubernetes, CI/CD. Big data tools (Spark, Hadoop), ETL workflows, or high-throughput data streams. Genomic data formats and tools. Cold and hot storage management, ZFS/RAID systems, or tape storage. AI/LLM tools to …
flow diagrams, and process documentation. MINIMUM QUALIFICATIONS/SKILLS Proficiency in Python and SQL. Experience with cloud platforms like AWS, GCP, or Azure, and big data technologies such as Hadoop or Spark. Experience working with relational and NoSQL databases. Strong knowledge of data structures, data modeling, and database schema design. Experience supporting data science workloads with structured and unstructured …
needs, entrepreneurial spirit. Excellent verbal and written communication skills. BS or MS degree in Computer Science or equivalent. Nice to Have: Experience in distributed computing frameworks like Hive/Hadoop and Apache Spark. Experience in developing Finance or HR related applications. Experience with the following cloud services: AWS Elastic Beanstalk, EC2, S3, CloudFront, RDS, DynamoDB, VPC, ElastiCache, Lambda. Working experience …
projects. Skills & Experience: Proven experience as a Lead Data Solution Architect in consulting environments. Expertise in cloud platforms (AWS, Azure, GCP, Snowflake). Strong knowledge of big data technologies (Spark, Hadoop), ETL/ELT, and data modelling. Familiarity with Python, R, Java, SQL, NoSQL, and data visualisation tools. Understanding of machine learning and AI integration in data architecture. Experience with …
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
and with customers. Preferred Experience: Degree in Computer Science or equivalent practical experience. Commercial experience with Spark, Scala, and Java (Python is a plus). Strong background in distributed systems (Hadoop, Spark, AWS). Skilled in SQL/NoSQL (PostgreSQL, Cassandra) and messaging tech (Kafka, RabbitMQ). Experience with orchestration tools (Chef, Puppet, Ansible) and ETL workflows (Airflow, Luigi). Familiarity with cloud …
Experience with scripting languages like Python or KornShell. Knowledge of writing and optimizing SQL queries for large-scale, complex datasets. PREFERRED QUALIFICATIONS Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, or DataStage. We promote an inclusive culture that empowers Amazon employees to deliver the best results for …
from you: Proficiency with the Python and SQL programming languages. Hands-on experience with cloud platforms like AWS, GCP, or Azure, and familiarity with big data technologies such as Hadoop or Spark. Experience working with relational databases and NoSQL databases. Strong knowledge of data structures, data modelling, and database schema design. Experience in supporting data science workloads and working …