Architecture, or similar roles. Strong programming skills in Python/Java/Scala. Expert in SQL and performance tuning for large datasets. Hands-on experience with Big Data ecosystems (Hadoop, Spark, Kafka, Hive, HBase, etc.). Strong experience with Cloud platforms (AWS/Azure/GCP) and services such as: AWS: S3, Glue, EMR, Redshift, Lambda, Kinesis; Azure: Data Factory, Synapse More ❯
ETL processes. Proficiency in Python. Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of data modelling, warehousing, and optimisation. Familiarity with big data frameworks (e.g. Apache Spark, Hadoop). Understanding of data governance, security, and compliance best practices. Strong problem-solving skills and experience working in agile environments. Desirable: Experience with Docker/Kubernetes, streaming data (Kafka More ❯
data modelling, warehousing, and performance optimisation. Proven experience with cloud platforms (AWS, Azure, or GCP) and their data services. Hands-on experience with big data frameworks (e.g. Apache Spark, Hadoop). Strong knowledge of data governance, security, and compliance. Ability to lead technical projects and mentor junior engineers. Excellent problem-solving skills and experience in agile environments. Desirable: Experience More ❯
data modelling tools, data warehousing, ETL processes, and data integration techniques. · Experience with at least one cloud data platform (e.g. AWS, Azure, Google Cloud) and big data technologies (e.g. Hadoop, Spark). · Strong knowledge of data workflow solutions such as Azure Data Factory, Apache NiFi, Apache Airflow, etc. · Good knowledge of stream and batch processing solutions such as Apache Flink, Apache More ❯
experience with Flask, Tornado or Django; Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Reporting tools (e.g. Tableau, PowerBI, Qlik). GDPR and Government More ❯
least one major cloud provider (AWS, Azure, or GCP). Strong experience building cloud data lakes, warehouses, and streaming architectures. Proficiency with data processing tools such as Spark, Databricks, Snowflake, Hadoop, or similar. Strong knowledge of ETL/ELT frameworks, API integration, and workflow orchestration tools (Airflow, Azure Data Factory, AWS Glue, etc.). Deep understanding of relational and NoSQL databases More ❯
London, South East, England, United Kingdom Hybrid/Remote Options
Tenth Revolution Group
problem-solving skills, and the ability to think critically and analytically. Extensive experience in documentation and data dictionaries. Knowledge of big data technologies and distributed computing frameworks such as Hadoop and Spark. Excellent communication skills to effectively collaborate with cross-functional teams and present insights to business stakeholders. Please can you send me a copy of your CV if More ❯
Lancashire, North West England, United Kingdom Hybrid/Remote Options
CHEP
such as Python, R, and SQL for data analysis and model development. Experience working with cloud computing platforms including AWS and Azure, and familiarity with distributed computing frameworks like Hadoop and Spark. Deep understanding of supply chain operations and the ability to apply data science methods to solve real-world business problems effectively. Strong foundational knowledge in mathematics and More ❯
in data modelling, data warehousing, and ETL development. Hands-on experience with Azure Data Factory, Azure Data Lake, and Azure SQL Database. Exposure to big data technologies such as Hadoop, Spark, and Databricks. Experience with Azure Synapse Analytics or Cosmos DB. Familiarity with data governance frameworks (e.g., GDPR, HIPAA). Experience implementing CI/CD pipelines using Azure DevOps More ❯
AWS Glue, S3, Lambda, Snowflake). Advanced knowledge of SQL and experience with modern data warehousing and database performance tuning. Familiarity with distributed data processing technologies (e.g., Apache Spark, Hadoop). More ❯
Banking/Financial Services domain is a plus. Preferred Qualifications Certifications in Pentaho, Big Data, or Cloud Platforms (AWS/GCP/Azure). Experience with Big Data technologies (Hadoop, Spark) and cloud data services. More ❯
success. The Skills You’ll Need: Experience in architecture & design and consulting services focused on enterprise solutions, data analytics platforms, lakehouses, data engineering, data processing, data warehousing, ETL, Hadoop & Big Data. Experience in defining and designing data governance, data management, and data security solutions for an enterprise across business verticals. Experience on at least one of the More ❯
Python) and other database applications; · Understanding of PC environment and related software, including Microsoft Office applications; · Knowledge of data engineering using data stores including MS SQL Server, Oracle, NoSQL, Hadoop or other distributed data technologies. Experience using data visualization tools is a plus; · Experienced with Excel to aggregate, model, and manage large data sets; · Familiar with Microsoft Power BI More ❯
e.g. MS SQL, Oracle) NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J) Data exchange and processing skills (e.g. ETL, ESB, API) Development (e.g. Python) skills Big data technologies knowledge (e.g. Hadoop stack) Knowledge in NLP (Natural Language Processing) Knowledge in OCR (Object Character Recognition) Knowledge in Generative AI (Artificial Intelligence) would be advantageous Experience in containerisation technologies (e.g. Docker) would More ❯
utilising strong communication and stakeholder management skills when engaging with customers Significant experience of coding in Python and Scala or Java Experience with big data processing tools such as Hadoop or Spark Cloud experience; GCP specifically in this case, including services such as Cloud Run, Cloud Functions, BigQuery, GCS, Secret Manager, Vertex AI etc. Experience with Terraform Prior experience More ❯
languages (Python, Bash) and programming languages (Java). Hands-on experience with DevOps tools: GitLab, Ansible, Prometheus, Grafana, Nagios, Argo CD, Rancher, Harbor. Deep understanding of big data technologies: Hadoop, Spark, and NoSQL databases. Nice to Have Familiarity with agile methodologies (Scrum or Kanban). Strong problem-solving skills and a collaborative working style. Excellent communication skills, with the More ❯
commercial impact. Understanding of ML Ops vs DevOps and broader software engineering standards. Cloud experience (any platform). Previous mentoring experience. Nice to have: Snowflake or Databricks Spark, PySpark, Hadoop or similar big data tooling BI exposure (PowerBI, Tableau, etc.) Interview Process Video call - high-level overview and initial discussion In-person technical presentation - based on a provided example More ❯
Programming. Involved in planning, designing and strategising the roadmap around on-premise and cloud solutions. Experience in designing and developing real-time data processing pipelines. Expertise in working with Hadoop data platforms and technologies like Kafka, Spark, Impala, Hive and HDFS in multi-tenant environments. Expert in Java programming, SQL and shell scripting, and DevOps. Good understanding of current industry More ❯
Stevenage, Hertfordshire, South East, United Kingdom Hybrid/Remote Options
MBDA
e.g. MS SQL, Oracle...) NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J...) Data exchange and processing skills (e.g. ETL, ESB, API...) Development (e.g. Python) skills Big data technologies knowledge (e.g. Hadoop stack) Knowledge in NLP (Natural Language Processing) Knowledge in OCR (Optical Character Recognition) Knowledge in Generative AI (Artificial Intelligence) would be advantageous Experience in containerisation technologies (e.g. Docker) would More ❯
scale processing. The qualified candidate will have experience with database systems, Azure cloud storage, and significant exposure to or experience with modern big data processing tools (such as Spark, Hadoop, and Databricks). Candidates will be expected to be able to design and implement data solutions that use the following Azure services: Azure Cosmos DB, Azure SQL Database, Azure More ❯
Mandatory Skills: Java 17 and above, SQL Server, Oracle, MongoDB, CRUD Operations Data lake - Hadoop, Cloudera Domain: Capital Markets, finance background is mandatory Equal Opportunity Employer We are an equal opportunity employer. All aspects of employment including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate More ❯
pre-systems Model and map scalable ELT/ETL flows (batch and streaming) Interface between Data Analysts and Data Scientists Enrich data and load it into the big data environment (Apache Hadoop on Cloudera) Profile: Technical education (Computer Science HTL, Computer Science degree, Data Science, etc.) Experience in data modelling (relational databases and Apache Hadoop) Know-how of core technologies such as SQL (MS SQL Server), Apache Hadoop (Kafka, NiFi, Flink, Scala and/or Java, Python) and Linux Interest in working with high-frequency data processing in a near-real-time environment We offer: Permanent position in a renowned IT company in Vienna You offer more than required? Perfect, so do we! Dependent on your qualifications and experience More ❯