in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). …
functional teams. Preferred Skills: High-Performance Computing (HPC) and AI workloads for large-scale enterprise solutions. NVIDIA CUDA, cuDNN, TensorRT experience for deep learning acceleration. Big Data platforms (Hadoop, Spark) for AI-driven analytics in professional services. Please share your CV at payal.c@hcltech.com
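For context on the deep-learning acceleration skills this listing names, here is a minimal sketch of GPU inference in PyTorch, which delegates to CUDA and cuDNN under the hood; the model architecture and batch shape are invented for illustration:

```python
# Illustrative sketch only: GPU-accelerated inference in PyTorch.
# The tiny model and input shapes are hypothetical.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
model.eval()

batch = torch.randn(32, 128, device=device)  # hypothetical input batch
with torch.no_grad():
    logits = model(batch)  # runs on the GPU when one is available
print(logits.shape)  # torch.Size([32, 10])
```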
English is required. Preferred Skills: Experience in commodities markets or broader financial markets. Knowledge of quantitative modeling, risk management, or algorithmic trading. Familiarity with big data technologies like Kafka, Hadoop, Spark, or similar. Why Work With Us? Impactful Work: Directly influence the profitability of the business by building technology that drives trading decisions. Innovative Culture: Be part of a …
orchestration tools like Apache Airflow or Cloud Composer. Hands-on experience with one or more of the following GCP data processing services: Dataflow (Apache Beam), Dataproc (Apache Spark/Hadoop), or Composer (Apache Airflow). Proficiency in at least one scripting/programming language (e.g., Python, Java, Scala) for data manipulation and pipeline development. Scala is mandated in some …
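To illustrate the kind of Dataflow (Apache Beam) pipeline work the listing above refers to, a minimal sketch that runs locally on the DirectRunner; the file names and transform logic are assumptions:

```python
# Minimal Apache Beam pipeline sketch; swap the runner for "DataflowRunner"
# to execute on GCP. Input/output paths are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions(runner="DirectRunner")
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("events.csv")      # hypothetical input
            | "Parse" >> beam.Map(lambda line: line.split(","))
            | "FilterValid" >> beam.Filter(lambda row: len(row) == 3)
            | "KeyByUser" >> beam.Map(lambda row: (row[0], 1))
            | "CountPerUser" >> beam.combiners.Count.PerKey()
            | "Format" >> beam.MapTuple(lambda user, n: f"{user},{n}")
            | "Write" >> beam.io.WriteToText("user_counts")     # hypothetical output
        )

if __name__ == "__main__":
    run()
```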
Engineering, Mathematics, Finance, etc. Proficiency in Python, SQL, and one or more: R, Java, Scala. Experience with relational/NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with big data tools (Hadoop, Spark, Kafka), cloud platforms (Azure, AWS, GCP), and workflow tools (Airflow, Luigi). Bonus: experience with BI tools, API integrations, and graph databases. Why Join Us? Work with large-scale …
with SQL and database technologies (incl. various Vector Stores and more traditional technologies, e.g. MySQL, PostgreSQL, NoSQL databases). − Hands-on experience with data tools and frameworks such as Hadoop, Spark, or Kafka (an advantage). − Familiarity with data warehousing solutions and cloud data platforms. − Background in building applications wrapped around AI/LLM/mathematical models. − Ability to scale up …
Great experience as a Data Engineer. Experience with Spark, Databricks, or similar data processing tools. Proficiency in working with the cloud environment and various software, including SQL Server, Hadoop, and NoSQL databases. Proficiency in Python (or similar), SQL, and Spark. Proven ability to develop data pipelines (ETL/ELT). Strong inclination to learn and adapt to new …
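As a concrete illustration of the Spark/SQL ETL skills this listing names, a minimal PySpark job sketch; the bucket paths, column names, and aggregation are hypothetical:

```python
# Sketch of a simple ETL job in PySpark. Paths and schema are made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw data (e.g., from cloud storage or HDFS)
orders = spark.read.option("header", True).csv("s3://raw-bucket/orders.csv")

# Transform: fix types, drop bad rows, aggregate per day
daily = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_date", "amount"])
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("orders"))
)

# Load: write a partitioned, query-friendly output
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/daily_orders/"
)
```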
City of London, London, United Kingdom Hybrid / WFH Options
Widen the Net Limited
team! You will develop scalable data pipelines, ensure data quality, and support business decision-making with high-quality datasets. - Work across the technology stack: SQL, Python, ETL, BigQuery, Spark, Hadoop, Git, Apache Airflow, Data Architecture, Data Warehousing - Design and develop scalable ETL pipelines to automate data processes and optimize delivery - Implement and manage data warehousing solutions, ensuring data integrity …
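For a sense of the Airflow-orchestrated ETL pipelines this role describes, a minimal, hypothetical DAG sketch (Airflow 2.4+ style); the DAG name and task bodies are stubs, not a real pipeline:

```python
# Hypothetical Airflow DAG showing the shape of a scheduled ETL pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g., pull rows from a source database

def transform():
    ...  # e.g., clean and aggregate with pandas or Spark

def load():
    ...  # e.g., write to the warehouse (BigQuery, Redshift, ...)

with DAG(
    dag_id="daily_sales_etl",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # "schedule" is the Airflow 2.4+ parameter
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```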
QlikSense. Ability to create well-documented, scalable, and reusable data solutions. Strong pre-sales experience within a consulting environment. Desirable Skills Experience with big data technologies such as Hadoop, MapReduce, or Spark. Exposure to microservice-based data APIs. Familiarity with data solutions in other public cloud platforms. AWS certifications (e.g., Solutions Architect Associate, Big Data Specialty) …
City of London, London, United Kingdom Hybrid / WFH Options
Mars
help shape our digital platforms 🧠 What we’re looking for: Proven experience as a Data Engineer in cloud environments (Azure ideal). Proficiency in Python, SQL, Spark, Databricks. Familiarity with Hadoop, NoSQL, Delta Lake. Bonus: Azure Functions, Logic Apps, Django, CI/CD tools. 💼 What you’ll get from Mars: A competitive salary & bonus. Hybrid working with flexibility built in …
.NET code. Experience working on distributed systems. Strong knowledge of Kubernetes and Kafka. Experience with Git and deployment pipelines. Having worked with at least one of the following stacks: Hadoop, Apache Spark, Presto; AWS Redshift, Azure Synapse, or Google BigQuery. Experience profiling performance issues in database systems. Ability to learn and/or adapt quickly to complex issues. Happy …
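To ground the Kafka experience this listing asks for, a minimal produce/consume round trip using the kafka-python client; the broker address and topic name are assumptions:

```python
# Minimal Kafka sketch with kafka-python; broker and topic are hypothetical.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "amount": 9.99})
producer.flush()

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # {'order_id': 42, 'amount': 9.99}
    break
```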
City of London, London, United Kingdom Hybrid / WFH Options
Anson McCade
Knowledge of CI/CD processes and infrastructure-as-code. • Eligible for SC clearance (active clearance highly advantageous). Desirable Skills • Exposure to large data processing frameworks (e.g., Spark, Hadoop). • Experience deploying data via APIs and microservices. • AWS certifications (Solution Architect Associate, Data Analytics Speciality, etc.). • Experience in public sector programmes or government frameworks. Package & Benefits …
Azure Functions. Strong knowledge of scripting languages (e.g., Python, Bash, PowerShell) for automation and data transformation. Proficient in working with databases, data warehouses, and data lakes (e.g., SQL, NoSQL, Hadoop, Redshift). Familiarity with APIs and web services for integrating external systems and applications into orchestration workflows. Hands-on experience with data transformation and ETL (Extract, Transform, Load) processes.
familiarity with DevOps tools and concepts – e.g. Kubernetes, Git-based CI/CD, cloud infrastructure (AWS/GCP/Azure). Bonus: Exposure to tools like Elasticsearch/Kibana, Hadoop/HBase, OpenSearch, or VPN/proxy architectures. Strong grasp of software security principles, system performance optimisation, and infrastructure reliability. Experience working on large-scale, production-grade systems with …
protection directions. At least one successful implementation of cross-system, end-to-end data protection. Familiarity with at least one language, such as Java, Go, or Python, and knowledge of Hadoop/Spark ecosystems in the big data domain. Preference for candidates with multinational corporate experience in data security and over 3 years of experience in international data security, encryption …
Financial Services, Manufacturing, Life Sciences and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13+ billion. Location - London. Skill - Apache Hadoop. We are looking for open-source contributors to Apache projects who have an in-depth understanding of the code behind the Apache ecosystem and experience in Cloudera or … hybrid cloud environment. Ability to debug and fix code in the open-source Apache codebase; should be an individual contributor to open-source projects. Job description: The Apache Hadoop project requires up to 3 individuals with experience in designing and building platforms, and supporting applications both in cloud environments and on-premises. These resources are expected to be … to support all developers in migrating and debugging various RiskFinder critical applications. They need to be "Developers" who are experts in designing and building Big Data platforms using Apache Hadoop and in supporting Apache Hadoop implementations both in cloud environments and on-premises.
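As a small illustration of the Hadoop-ecosystem development work the listing above describes (building on the platform, not contributing to Hadoop itself), the classic word count over HDFS via PySpark; the HDFS paths are hypothetical:

```python
# Word count against HDFS via PySpark; paths are made up for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hdfs_wordcount").getOrCreate()

counts = (
    spark.sparkContext.textFile("hdfs:///data/logs/*.txt")
    .flatMap(lambda line: line.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda a, b: a + b)
)
counts.saveAsTextFile("hdfs:///data/wordcounts")
```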