tools like Apache NiFi, Talend, or custom scripts. Familiarity with ELT (Extract, Load, Transform) processes is a plus. Big Data Technologies: Familiarity with big data frameworks such as Apache Hadoop and Apache Spark, including experience with distributed computing and data processing. Cloud Platforms: Proficient in using cloud platforms (e.g., AWS, Google Cloud Platform, Microsoft Azure) for data storage, processing
London, England, United Kingdom Hybrid / WFH Options
Trudenty
time data pipelines for processing large-scale data. Experience with ETL processes for data ingestion and processing. Proficiency in Python and SQL. Experience with big data technologies like Apache Hadoop and Apache Spark. Familiarity with real-time data processing frameworks such as Apache Kafka or Flink. MLOps & Deployment: Experience deploying and maintaining large-scale ML inference pipelines into production
years of experience • Working experience with the Palantir Foundry platform is a must • Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred). • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture. • Demonstrated experience in end-to-end data management, data modelling, and
with programming languages such as Python and R, and ML libraries (TensorFlow, PyTorch, scikit-learn). Hands-on experience with cloud platforms (Azure ML) and big data ecosystems (e.g., Hadoop, Spark). Strong understanding of CI/CD pipelines, DevOps practices, and infrastructure automation. Familiarity with database systems (SQL Server, Snowflake) and API integrations. Strong skills in ETL processes
of the following: Python, SQL, Java Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams Deep knowledge of database technologies: Distributed systems (e.g., Spark, Hadoop, EMR) RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL) NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j) Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and
5+ years of experience working on mission-critical data pipelines and ETL systems. 5+ years of hands-on experience with big data technology, systems, and tools such as AWS, Hadoop, Hive, and Snowflake. Expertise with common software engineering languages such as Python, Scala, Java, and SQL, and a proven ability to learn new programming languages. Experience with workflow orchestration tools
or CloudFormation. · Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware. · Significant experience with Apache Spark or other distributed data programming frameworks (e.g., Hadoop, Beam). · Familiarity with Databricks as a data and AI platform or the Lakehouse Architecture. · Experience with data quality and/or data lineage frameworks like Great Expectations, dbt
Redshift, EMR, Glue). Familiarity with programming languages such as Python or Java. Understanding of data warehousing concepts and data modeling techniques. Experience working with big data technologies (e.g., Hadoop, Spark) is an advantage. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Benefits Enhanced leave - 38 days inclusive of 8 UK Public Holidays Private Health Care
work independently and as part of a team. Preferred Qualifications: Master's degree in Computer Science, Data Science, or a related field. Experience with big data technologies such as Hadoop, Spark, or Kafka. Experience with data visualization tools such as Power BI, Tableau, or Qlik. Certifications in Azure data and AI technologies. Benefits Salary: We offer a competitive, market
Strong knowledge of data architecture, data modeling, and ETL/ELT processes. Proficiency in programming languages such as Python, Java, or Scala. Experience with big data technologies such as Hadoop, Spark, and Kafka. Familiarity with cloud platforms like AWS, Azure, or Google Cloud. Excellent problem-solving skills and the ability to think strategically. Strong communication and interpersonal skills, with
deploy your pipelines and proven experience in their technologies You have experience in database technologies including writing complex queries against their (relational and non-relational) data stores (e.g. Postgres, Hadoop, Elasticsearch, Graph databases), and designing the database schemas to support those queries You have a good understanding of coding best practices and design patterns and experience with code and
Statistics, Maths, or a similar Science or Engineering discipline Strong Python and other programming skills (Java and/or Scala desirable) Strong SQL background Some exposure to big data technologies (Hadoop, Spark, Presto, etc.) NICE TO HAVES OR EXCITED TO LEARN: Some experience designing, building and maintaining SQL databases (and/or NoSQL) Some experience with designing efficient physical data
cloud environment and various platforms, including Azure and SQL Server; NoSQL databases are good to have. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Experience with DataOps practices and tools, including CI/CD for data pipelines. Experience with medallion data architecture and other similar data modelling approaches. Experience with
Azure Functions, Azure SQL Database, HDInsight, and Azure Machine Learning Studio. Data Storage & Databases: SQL & NoSQL Databases: Experience with databases like PostgreSQL, MySQL, MongoDB, and Cassandra. Big Data Ecosystems: Hadoop, Spark, Hive, and HBase. Data Integration & ETL: Data Pipelining Tools: Apache NiFi, Apache Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI
London, England, United Kingdom Hybrid / WFH Options
NTT DATA
Azure, GCP, and Snowflake. Understanding of cloud platform infrastructure and its impact on data architecture. A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT processes, SQL, NoSQL databases is a nice-to-have, providing a
NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). • In-depth knowledge of data warehousing concepts and tools (e.g., Redshift, Snowflake, Google BigQuery). • Experience with big data platforms (e.g., Hadoop, Spark, Kafka). • Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud). • Expertise in ETL tools and processes (e.g., Apache NiFi, Talend, Informatica).
you’ll also have: Experience with Relational Databases and Data Warehousing concepts. Experience with Enterprise ETL tools such as Informatica, Talend, DataStage, or Alteryx. Project experience with technologies like Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross-platform experience. Team building and leadership skills. You must be: Willing to work on client sites, potentially for extended periods. Willing to travel
Experience of Relational Databases and Data Warehousing concepts. Experience of Enterprise ETL tools such as Informatica, Talend, DataStage, or Alteryx. Project experience using any of the following technologies: Hadoop, Spark, Scala, Oracle, Pega, Salesforce. Cross- and multi-platform experience. Team building and leading. You must be: Willing to work on client sites, potentially for extended periods. Willing to
and managing individuals. Strong Python and other programming skills (Spark/Scala desirable). Experience both using and building APIs. Strong SQL background. Exposure to big data technologies (Spark, Hadoop, Presto, etc.). Works well collaboratively, and independently, with a proven ability to form and manage strong relationships within the organisation and clients. Ability to support others and clients
promoting equality, diversity, and inclusion within all areas of responsibility, particularly in the Data & Analytics function. Expert proficiency in Python, R, SQL, and distributed computing frameworks (e.g., Spark, Hadoop). Advanced knowledge of data engineering tools (e.g., Airflow, Kafka, Snowflake, Databricks). Proficiency in machine learning frameworks (TensorFlow, PyTorch, Scikit-learn). Ability to implement robust data governance
London, England, United Kingdom Hybrid / WFH Options
Rein-Ton
to See From You Qualifications Proven experience as a Data Engineer with a strong background in data pipelines. Proficiency in Python, Java, or Scala, and big data technologies (e.g., Hadoop, Spark, Kafka). Experience with Databricks, Azure AI Services, and cloud platforms (AWS, Google Cloud, Azure). Solid understanding of SQL and NoSQL databases. Strong problem-solving skills and
Expertise: Strong foundation in data engineering, data analytics, or data science, with the ability to work effectively with various data types and sources. Experience using big data technologies (e.g. Hadoop, Spark, Hive) and database management systems (e.g. SQL and NoSQL). Graph Database Expertise: Deep understanding of graph database concepts, data modeling, and query languages (e.g., Cypher). Demonstrate