implementation of modern data architectures (ideally Azure, AWS, Microsoft Fabric, GCP, Data Factory) and modern data warehouse technologies (Snowflake, Databricks). Experience with database technologies such as SQL, NoSQL, Oracle, Hadoop, or Teradata. Ability to collaborate within and across teams with different levels of technical knowledge to support delivery and educate end users on data products. Expert problem-solving skills, including debugging More ❯
and model deployment. - Experience with Infrastructure as Code (IaC) using tools such as CDK. - Experience with streaming data processing and real-time analytics. - Experience with big data technologies (e.g., Hadoop, Spark, Hive). Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during More ❯
of the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and More ❯
modern data architectures, Lambda-type architectures - Proficiency in writing and optimizing SQL - Knowledge of AWS services including S3, Redshift, EMR, Kinesis and RDS - Experience with open-source data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) - Ability to write code in Python, Ruby, Scala or another platform-related Big Data language - Knowledge of professional software engineering practices & best practices for More ❯
skills. Experience working in BFSI or enterprise-scale environments is a plus. Preferred: Exposure to cloud platforms (AWS, Azure, GCP) and their data services. Knowledge of Big Data platforms (Hadoop, Spark, Snowflake, Databricks). Familiarity with data governance and data catalog tools. More ❯
vision. Hands-on with data engineering, model deployment (MLOps), and cloud platforms (AWS, Azure, GCP). Strong problem-solving, algorithmic, and analytical skills. Knowledge of big data tools (Spark, Hadoop) is a plus. More ❯
DataStage, Talend and Informatica. Ingestion mechanisms like Flume & Kafka. Data modelling – dimensional & transactional modelling using RDBMS, NoSQL and Big Data technologies. Data visualization – tools like Tableau. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks – Spark & Spark Streaming More ❯
to translate complex technical problems into business solutions. 🌟 It’s a Bonus If You Have: Experience in SaaS, fintech, or software product companies. Knowledge of big data frameworks (Spark, Hadoop) or cloud platforms (AWS, GCP, Azure). Experience building and deploying models into production. A strong interest in AI, automation, and software innovation. 🎁 What’s in It for You More ❯
as well as programming languages such as Python, R, or similar. Strong experience with machine learning frameworks (e.g., TensorFlow, Scikit-learn) and familiarity with data technologies (e.g., Hadoop, Spark). About Vixio: Our mission is to empower businesses to efficiently manage and meet their regulatory obligations with our unique combination of human expertise and Regulatory Technology (RegTech More ❯
of technical specialist, design and architecture experience - 7+ years of external or internal customer-facing, complex and large-scale project management experience - 5+ years of database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 3+ years of cloud-based solution (AWS or equivalent), system, network and operating system experience PREFERRED QUALIFICATIONS - AWS experience preferred, with proficiency in a wide More ❯
with some of the brightest technical minds in the industry today. BASIC QUALIFICATIONS - 10+ years of technical specialist, design and architecture experience - 10+ years of database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 10+ years of consulting, design and implementation of serverless distributed solutions experience - Australian citizen with ability to obtain security clearance. PREFERRED QUALIFICATIONS - AWS Professional level More ❯
Strong grasp of MLOps/LLMOps principles, including CI/CD for ML, model monitoring, and governance frameworks. Proficiency with large-scale data processing and storage technologies (SQL, Spark, Hadoop) is a plus. Excellent stakeholder management and communication skills, with proven ability to translate complex AI concepts for diverse audiences. Connect to your business - Technology and Transformation Distinctive thinking More ❯
home, there's nothing we can't achieve in the cloud. BASIC QUALIFICATIONS - 5+ years of experience in cloud architecture and implementation - 5+ years of database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 5+ years of experience delivering cloud projects or cloud-based solutions - Able to communicate effectively in English, within technical and business settings. - Bachelor's degree in Business More ❯
native tech stack in designing and building data & AI solutions. Experience with data modeling, ETL processes, and data warehousing. Knowledge of big data tools and frameworks such as Spark, Hadoop, or Kafka More ❯
as Java, TypeScript, Python, and Go; web libraries and frameworks such as React and Angular; designing, building, and maintaining CI/CD pipelines; big data technologies, such as NiFi, Hadoop, Spark; cloud and containerization technologies such as AWS, OpenShift, Kubernetes, Docker; DevOps methodologies, such as infrastructure as code and GitOps; database technologies, e.g. relational databases, Elasticsearch, Mongo. Why join More ❯
Familiarity with and experience of using UNIX. Knowledge of CI toolsets. Good client-facing skills and problem-solving aptitude. DevOps knowledge of SQL, Oracle DB, Postgres, ActiveMQ, Zabbix, Ambari, Hadoop, Jira, Confluence, Bitbucket, ActiviBPM, Oracle SOA, Azure, SQL Server, IIS, AWS, Grafana, Oracle BPM, Jenkins, Puppet, CI and other cloud technologies. All profiles will be reviewed against the required skills More ❯
Agile working practices CI/CD tooling Scripting experience (Python, Perl, Bash, etc.) ELK (Elastic stack) JavaScript Cypress Linux experience Search engine technology (e.g., Elasticsearch) Big Data technology experience (Hadoop, Spark, Kafka, etc.) Microservice and cloud native architecture Desirable: Able to demonstrate experience of troubleshooting and diagnosis of technical issues. Able to demonstrate excellent team-working skills. Strong More ❯
build scalable data infrastructure, develop machine learning models, and create robust solutions that enhance public service delivery. Working in classified environments, you'll tackle complex challenges using tools like Hadoop, Spark, and modern visualisation frameworks while implementing automation that drives government efficiency. You'll collaborate with stakeholders to transform legacy systems, implement data governance frameworks, and ensure solutions meet … R; Collaborative, team-based development; Cloud analytics platforms e.g. relevant AWS and Azure platform services; Data tools – hands-on experience with Palantir (ESSENTIAL); Data science approaches and tooling e.g. Hadoop, Spark; Software development methods and techniques e.g. Agile methods such as SCRUM; Software change management, notably familiarity with git; Public sector best practice guidance, e.g. ITIL, OGC toolkit. Additional More ❯
not help here. Interview includes a coding test. Job Description: Scala/Spark. Good Big Data resource with the below skillset: Spark; Scala; Hive/HDFS/HQL; Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.). Experience in Big Data technologies; Real-Time data processing platform (Spark Streaming) experience would be an advantage. Consistently demonstrates clear and concise written More ❯
government security clearance. Preferred technical and professional experience Familiarity with containerization and orchestration tools (Docker, Kubernetes) Experience with microservices architecture and RESTful APIs Knowledge of big data technologies (e.g., Hadoop, Spark) Understanding of DevSecOps practices and tools IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive More ❯