implementation of modern data architectures (ideally Azure, AWS, Microsoft Fabric, GCP, Data Factory) and modern data warehouse technologies (Snowflake, Databricks). Experience with database technologies such as SQL, NoSQL, Oracle, Hadoop, or Teradata. Ability to collaborate within and across teams with different levels of technical knowledge to support delivery and educate end users on data products. Expert problem-solving skills, including debugging.
of the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR), RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL), NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices: code reviews, testing frameworks, CI/CD, and
pipelines and ETL (Informatica). Experience in SQL and database management systems. Knowledge of data modelling, warehousing concepts, and ETL processes. Experience with big data technologies and frameworks such as Hadoop, Hive, and Spark. Programming experience in Python or Scala. Demonstrated analytical and problem-solving skills. Familiarity with cloud platforms (e.g., Azure, AWS) and their data-related services. Proactive and detail-oriented.
Lambda, S3, EC2, etc. Experience with coding languages like Python/Java/Scala. Experience in maintaining data warehouse systems and working on large-scale data transformation using EMR, Hadoop, Hive, or other Big Data technologies. Experience mentoring and managing other Data Engineers, ensuring data engineering best practices are followed. Experience with hardware provisioning, forecasting hardware usage, and
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Forward Role
both relational databases (SQL Server, MySQL) and NoSQL solutions (MongoDB, Cassandra). Hands-on knowledge of AWS S3 and associated big data services. Extensive experience with big data technologies, including Hadoop and Spark, for large-scale dataset processing. Deep understanding of data security frameworks, encryption protocols, access management, and regulatory compliance. Proven track record of building automated, scalable ETL frameworks and
from you: Proficiency with the Python and SQL programming languages. Hands-on experience with cloud platforms like AWS, GCP, or Azure, and familiarity with big data technologies such as Hadoop or Spark. Experience working with relational and NoSQL databases. Strong knowledge of data structures, data modelling, and database schema design. Experience in supporting data science workloads and working
skills. Experience working in BFSI or enterprise-scale environments is a plus. Preferred: Exposure to cloud platforms (AWS, Azure, GCP) and their data services. Knowledge of Big Data platforms (Hadoop, Spark, Snowflake, Databricks). Familiarity with data governance and data catalog tools.
capability in documenting high-level designs, system workflows, and technical specifications. Deep understanding of architectural patterns (e.g., microservices, layered, serverless). Expert knowledge of big data tools (Kafka, Spark, Hadoop) and major cloud platforms (AWS, Azure, GCP) to inform technology recommendations. Proficiency in Java, C#, Python, or JavaScript/TypeScript. Experience with cloud platforms (AWS, Azure, Google Cloud) and
to translate complex technical problems into business solutions. 🌟 It’s a Bonus If You Have: Experience in SaaS, fintech, or software product companies. Knowledge of big data frameworks (Spark, Hadoop) or cloud platforms (AWS, GCP, Azure). Experience building and deploying models into production. A strong interest in AI, automation, and software innovation. 🎁 What’s in It for You
Services (e.g., Azure Data Factory, Synapse, Databricks, Fabric). Data warehousing and lakehouse design. ETL/ELT pipelines. SQL and Python for data manipulation and machine learning. Big Data frameworks (e.g., Hadoop, Spark). Data visualisation (e.g., Power BI). Understanding of statistical analysis and predictive modelling. Experience: 5+ years working with Microsoft data platforms; 5+ years in a customer-facing consulting or
predictive modeling, machine learning, clustering and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau).
vision. Hands-on with data engineering, model deployment (MLOps), and cloud platforms (AWS, Azure, GCP). Strong problem-solving, algorithmic, and analytical skills. Knowledge of big data tools (Spark, Hadoop) is a plus.
DataStage, Talend, and Informatica. Ingestion mechanisms like Flume & Kafka. Data modelling – dimensional & transactional modelling using RDBMS, NoSQL, and Big Data technologies. Data visualization – tools like Tableau. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig, and Hive. Data processing frameworks – Spark & Spark Streaming.