roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level architecture design …
MongoDB, Cassandra). • In-depth knowledge of data warehousing concepts and tools (e.g., Redshift, Snowflake, Google BigQuery). • Experience with big data platforms (e.g., Hadoop, Spark, Kafka). • Familiarity with cloud-based data platforms and services (e.g., AWS, Azure, Google Cloud). • Expertise in ETL tools and processes (e.g., …)
with SQL and database management systems (e.g., PostgreSQL, MySQL). Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data tools (e.g., Spark, Hadoop) is a plus. Prior experience in financial data analysis is highly preferred. Understanding of financial datasets, metrics, and industry trends. Preferred Qualifications: Experience with API …
Experience in commodities markets or broader financial markets. Knowledge of quantitative modeling, risk management, or algorithmic trading. Familiarity with big data technologies like Kafka, Hadoop, Spark, or similar. Why Work With Us? Impactful Work: Directly influence the profitability of the business by building technology that drives trading decisions. Innovative …
Engineering: Proficiency in developing and maintaining real-time data pipelines. Experience with ETL processes, Python, and SQL. Familiarity with big data technologies like Apache Hadoop and Apache Spark. MLOps & Deployment: Experience deploying and maintaining ML inference pipelines. Proficiency with Docker and Kubernetes. Familiarity with the AWS cloud platform. The Perfect …
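As a concrete illustration of the real-time pipeline skills this role calls for, here is a minimal PySpark Structured Streaming sketch: it reads events from a Kafka topic, applies a windowed aggregation, and writes results to the console. The broker address, topic name, and event schema are invented placeholders, not details from the listing.

```python
from pyspark.sql import SparkSession, functions as F

# Requires the spark-sql-kafka connector on the Spark classpath.
spark = SparkSession.builder.appName("realtime-etl-sketch").getOrCreate()

# Read a stream of events from Kafka (servers and topic are placeholders).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "trades")  # hypothetical topic
    .load()
)

# Kafka delivers the payload as binary; cast and parse the JSON value.
parsed = events.select(
    F.from_json(
        F.col("value").cast("string"),
        "symbol STRING, price DOUBLE, ts TIMESTAMP",  # assumed schema
    ).alias("e")
).select("e.*")

# Simple transform: one-minute average price per symbol,
# with a watermark so late data is bounded.
agg = (
    parsed.withWatermark("ts", "2 minutes")
    .groupBy(F.window("ts", "1 minute"), "symbol")
    .agg(F.avg("price").alias("avg_price"))
)

# Console sink keeps the sketch self-contained; a real pipeline
# would write to a warehouse table or another topic instead.
query = agg.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```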
Slough, South East England, United Kingdom · Hybrid / WFH Options
Methods
lakehouse architectures. - Knowledge of DevOps practices, including CI/CD pipelines and version control (e.g., Git). - Understanding of big data technologies (e.g., Spark, Hadoop) is a plus.
PySpark, Python, and SQL. Proven experience with the Palantir Foundry platform. Strong background in enterprise data analytics and distributed computing frameworks (Spark/Hive/Hadoop preferred). Demonstrated ability to design end-to-end data management and transformation solutions. Proficient in Spark SQL and familiar with cloud platforms such as …
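For context on the Spark SQL proficiency mentioned above, a minimal self-contained sketch follows; the dataset, column names, and query are hypothetical and unrelated to Foundry.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

# Toy DataFrame standing in for a real dataset (names are invented).
df = spark.createDataFrame(
    [("ACME", "2024-01-01", 120.0), ("ACME", "2024-01-02", 125.5)],
    ["ticker", "trade_date", "close_price"],
)
df.createOrReplaceTempView("prices")

# Spark SQL transformation: daily close plus a running average per ticker.
result = spark.sql("""
    SELECT ticker,
           trade_date,
           close_price,
           AVG(close_price) OVER (
               PARTITION BY ticker ORDER BY trade_date
           ) AS running_avg
    FROM prices
    ORDER BY ticker, trade_date
""")
result.show()
```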
TensorRT, and experience with the NVIDIA GPU hardware and software stack. Understanding of HPC and AI workloads. Familiarity with big data platforms and technologies, such as Hadoop or Spark.
Slough, South East England, United Kingdom · Hybrid / WFH Options
Harnham
database systems (e.g., PostgreSQL, MySQL). Exposure to cloud platforms such as AWS, Azure, or GCP. Experience with big data tools such as Spark and Hadoop. Previous experience working with financial data, including an understanding of financial metrics and industry trends.
technologies. Technical Skills: • Advanced machine learning and deep learning techniques • Natural language processing • Time series analysis and forecasting • Reinforcement learning • Big data technologies (Spark, Hadoop) • Cloud infrastructure and containerization (Docker, Kubernetes) • Version control and CI/CD practices
and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau).
Unix/Linux environments and scripting • Familiar with data visualisation tools (e.g. QuickSight, Tableau, Looker, QlikSense) Desirable: • Experience with large-scale data technologies (Spark, Hadoop) • Exposure to microservices/APIs for data delivery • AWS certifications (e.g. Solutions Architect, Big Data Specialty) • Interest or background in Machine Learning. This is …
identifying and managing sales opportunities at client engagements. An understanding of database technologies, e.g. SQL, ETL, NoSQL, DW, and Big Data technologies, e.g. Hadoop, Mahout, Pig, Hive, etc.; An understanding of statistical modelling techniques, e.g. classification and regression techniques, neural networks, Markov chains, etc.; An understanding of cloud …
clustering and classification techniques. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau).
Slough, South East England, United Kingdom · Hybrid / WFH Options
Harnham
Functions, Synapse, etc.). Advanced SQL skills, including performance tuning and query optimization. Strong Python programming skills. Experience with big data tools such as Hadoop, Spark, and Kafka. Proficiency in CI/CD processes and version control. Solid experience with Terraform and Infrastructure as Code (IaC). Experience with …
Slough, South East England, United Kingdom · Hybrid / WFH Options
Maxwell Bond
modelling. Excellent communication and stakeholder management skills. Fluency in a programming language (C++ or Python). Familiarity with Big Data frameworks and visualisation tools (Cassandra, Hadoop, Spark, Tableau). Desirable: Experience within an AI/ML-focused team or projects. Familiarity with ML frameworks (e.g., scikit-learn, PyTorch, TensorFlow). Exposure to …
engineering capabilities. Experience working with Big Data. Experience of data storage technologies: Delta Lake, Iceberg, Hudi. Knowledge and understanding of Apache Spark, Databricks, or Hadoop. Ability to take business requirements and translate these into tech specifications. Competence in evaluating and selecting development tools and technologies. Sound like the role …
Slough, South East England, United Kingdom · Hybrid / WFH Options
X4 Technology
Java, with experience in AI frameworks (e.g., TensorFlow, PyTorch, Keras). Strong knowledge of data analysis, visualisation tools, and big data technologies (e.g., SQL, Hadoop, Spark). Experience with machine learning techniques, including NLP, recommendation algorithms, and computer vision. Familiarity with cloud platforms (Azure) for deploying AI solutions.
distributed systems. Strong knowledge of Kubernetes and Kafka. Experience with Git and deployment pipelines. Having worked with at least one of the following stacks: Hadoop, Apache Spark, Presto; AWS Redshift, Azure Synapse, or Google BigQuery. Experience profiling performance issues in database systems. Ability to learn and/or adapt …
work hands-on in Python, R, etc. The data scientist will work and collaborate with engineering teams to iteratively analyse data using Scala, Spark, Hadoop, Kafka, Storm, etc. Experience with NoSQL databases and familiarity with data visualisation tools will be of great advantage. What will you experience in terms …
analysis, preferably in IoT or related fields. Proficient in programming languages such as Python and SQL. Experience with big data tools and platforms (e.g. Hadoop/Snowflake, Spark, Azure IoT Hub, AWS IoT). Please apply to the advert for more information.
Great expertise in designing/building/implementing complex data pipelines. Vast experience with Databricks. Strong knowledge of distributed computing frameworks, e.g., Spark and the Hadoop ecosystem. Highly experienced with the AWS cloud-based data platform. Solid programming skills with Python and Spark. Demonstrated ability to engineer/architect solutions for Big …
Experience with RDS such as MySQL, PostgreSQL, or others, and experience with NoSQL such as Redis, MongoDB, or others; * Experience with Big Data technologies like the Hadoop ecosystem is a plus. * Excellent writing, proofreading, and editing skills. Able to create documentation that can express cloud architectures using text and …
and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13+ billion. Location - London. Skill - Apache Hadoop. We are looking for open-source contributors to Apache projects who have an in-depth understanding of the code behind the Apache ecosystem, should … debug and fix code in the open-source Apache codebase, and should be individual contributors to open-source projects. Job description: The Apache Hadoop project requires up to 3 individuals with experience in designing and building platforms, and supporting applications both in cloud environments and on-premises. These … migrating and debugging various RiskFinder critical applications. They need to be "Developers" who are experts in designing and building Big Data platforms using Apache Hadoop, and who can support Apache Hadoop implementations both in cloud environments and on-premises.
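As a reference point for the hands-on Hadoop work this listing describes, below is a minimal sketch of the classic Hadoop Streaming word count in Python. It is illustrative only: the script name, input/output paths, and launch command are hypothetical placeholders, unrelated to the RiskFinder applications mentioned above.

```python
#!/usr/bin/env python3
"""Hadoop Streaming word count (wordcount.py, a hypothetical name)."""
import sys

def mapper():
    # Emit "word<TAB>1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Hadoop sorts mapper output by key, so equal words arrive adjacently.
    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    # One file serves both stages; select the role via an argument, e.g.:
    #   hadoop jar hadoop-streaming.jar \
    #     -input /data/in -output /data/out \
    #     -mapper "python3 wordcount.py map" \
    #     -reducer "python3 wordcount.py reduce" \
    #     -file wordcount.py
    mapper() if sys.argv[1:] == ["map"] else reducer()
```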