Experience with feature stores (e.g., Feast, Tecton). Knowledge of distributed training (e.g., Horovod, distributed PyTorch). Familiarity with big data tools (e.g., Spark, Hadoop, Beam). Understanding of NLP, computer vision, or time series analysis techniques. Knowledge of experiment tracking tools (e.g., MLflow, Weights & Biases). Experience with …
infrastructure and its impact on data architecture. Data Technology Skills: A solid understanding of big data technologies such as Apache Spark and knowledge of the Hadoop ecosystem. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT processes, SQL, and NoSQL databases is a …
Core. Data Platforms: Warehouses: Snowflake, Google BigQuery, or Amazon Redshift. Analytics: Tableau, Power BI, or Looker for client reporting. Big Data: Apache Spark or Hadoop for large-scale processing. AI/ML: TensorFlow or Databricks for predictive analytics. Integration Technologies: API Management: Apigee, AWS API Gateway, or MuleSoft. Middleware …
business analytics and insight initiatives. BASIC QUALIFICATIONS - Bachelor's degree in computer science or equivalent - 3+ years of experience with big data technologies such as AWS, Hadoop, Spark, Pig, Hive, Lucene/SOLR, or Storm/Samza - Experience with diverse data formats: Parquet, JSON, big data formats, and table formats …
London, England, United Kingdom Hybrid / WFH Options
Methods
lakehouse architectures. - Knowledge of DevOps practices, including CI/CD pipelines and version control (e.g., Git). - Understanding of big data technologies (e.g., Spark, Hadoop) is a plus. Seniority level: Mid-Senior level. Employment type: Contract. Job function: Information Technology.
City of London, England, United Kingdom Hybrid / WFH Options
Staging It
data modelling (relational, NoSQL) and ETL/ELT processes. Experience with data integration tools (e.g., Kafka, Talend) and APIs. Familiarity with big data technologies (Hadoop, Spark) and real-time streaming. Expertise in cloud security, data governance, and compliance (GDPR, HIPAA). Strong SQL skills and proficiency in at least …
ETL pipelines. Proficiency in SQL. Experience with scripting languages like Python or KornShell. Unix experience. Troubleshooting data and infrastructure issues. Preferred Qualifications: Experience with Hadoop, Hive, Spark, EMR. Experience with ETL tools like Informatica, ODI, SSIS, BODI, DataStage. Knowledge of distributed storage and computing systems. Experience with reporting and …
years of experience with data modeling, data warehousing, ETL/ELT pipelines, and BI tools. - Experience with cloud-based big data technology stacks (e.g., Hadoop, Spark, Redshift, S3, Glue, SageMaker, etc.) - Knowledge of data management and data storage principles. - Experience in at least one modern object-oriented programming language …
Experience in commodities markets or broader financial markets. Knowledge of quantitative modeling, risk management, or algorithmic trading. Familiarity with big data technologies like Kafka, Hadoop, Spark, or similar. Why Work With Us? Impactful Work: Directly influence the profitability of the business by building technology that drives trading decisions. Innovative …
databases (e.g., MongoDB, Cassandra). Experience with AWS S3 and other AWS services related to big data solutions. Hands-on experience with big data tooling (Hadoop, Spark, etc.) for processing large datasets. In-depth understanding of data security best practices, including encryption, access controls, and compliance standards. Familiarity with ETL …
Milton Keynes, England, United Kingdom Hybrid / WFH Options
Santander
stakeholders and end users, conveying technical concepts in a comprehensible manner. Skills across the following data competencies: SQL (AWS Athena/Hive/Snowflake); Hadoop/EMR/Spark/Scala; data structures (tables, views, stored procedures); data modelling (star/snowflake schemas, efficient storage, normalisation); data transformation; DevOps …
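As a rough, hypothetical illustration of the star-schema modelling and Spark-based transformation work this posting refers to (not taken from the posting itself; table names, columns, and values are invented), a minimal PySpark sketch:

```python
# A minimal sketch, assuming PySpark is available; in practice the fact and
# dimension tables might live in AWS Athena, Hive, or Snowflake as the
# posting suggests. All names and data here are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star_schema_sketch").getOrCreate()

# Hypothetical fact table and product dimension in a simple star schema.
fact_sales = spark.createDataFrame(
    [(1, 101, "2024-01-02", 3, 29.97), (2, 102, "2024-01-02", 1, 9.99)],
    ["sale_id", "product_id", "sale_date", "quantity", "amount"],
)
dim_product = spark.createDataFrame(
    [(101, "Widget", "Hardware"), (102, "Gadget", "Electronics")],
    ["product_id", "product_name", "category"],
)

# A typical transformation: join the fact table to a dimension and aggregate.
revenue_by_category = (
    fact_sales.join(dim_product, "product_id")
    .groupBy("category")
    .agg(F.sum("amount").alias("total_revenue"),
         F.sum("quantity").alias("units_sold"))
)
revenue_by_category.show()
```

The same join-and-aggregate pattern is what "efficient storage, normalisation" trades against: dimensions are kept narrow and deduplicated, and wide denormalised views are produced only at query or reporting time.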
London, England, United Kingdom Hybrid / WFH Options
Aecom
and SQL. In-depth experience with data manipulation and visualization libraries (e.g., Pandas, NumPy, Matplotlib). Solid understanding of big data technologies (e.g., Hadoop, Spark) and cloud platforms (AWS, Azure, Google Cloud). Strong expertise in the full data science lifecycle: data collection, preprocessing, model development, deployment, and …
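As a small, hedged sketch of the Pandas/NumPy/Matplotlib data manipulation and visualisation work named above (the dataset and file name are synthetic and purely illustrative, not from the posting):

```python
# A minimal sketch of routine preprocessing and plotting with Pandas,
# NumPy, and Matplotlib. The time series below is synthetic.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)
df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=90, freq="D"),
    "value": rng.normal(loc=100, scale=10, size=90).cumsum(),
})

# Basic preprocessing: a 7-day rolling mean to smooth the series.
df["rolling_mean"] = df["value"].rolling(window=7).mean()

# Plot the raw series against the smoothed one.
ax = df.plot(x="date", y=["value", "rolling_mean"], figsize=(8, 4))
ax.set_ylabel("value")
plt.tight_layout()
plt.savefig("series.png")  # or plt.show() in an interactive session
```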
or Cloud Composer. Hands-on experience with one or more of the following GCP data processing services: Dataflow (Apache Beam), Dataproc (Apache Spark/Hadoop), or Composer (Apache Airflow). Proficiency in at least one scripting/programming language (e.g., Python, Java, Scala) for data manipulation and pipeline development.
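To make the Dataflow (Apache Beam) pipeline development mentioned here concrete, a minimal sketch in Python, assuming the Apache Beam SDK is installed; the pipeline contents are hypothetical and run locally on the DirectRunner rather than on Dataflow:

```python
# A minimal Apache Beam pipeline sketch: a word count over a tiny in-memory
# input. On GCP this would typically be launched with the DataflowRunner and
# project/region/temp_location options instead of the local DirectRunner.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(runner="DirectRunner")
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.Create(["spark hadoop beam", "beam dataflow"])
            | "Split" >> beam.FlatMap(lambda line: line.split())
            | "PairWithOne" >> beam.Map(lambda word: (word, 1))
            | "Count" >> beam.CombinePerKey(sum)
            | "Print" >> beam.Map(print)
        )


if __name__ == "__main__":
    run()
```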
Cambourne, England, United Kingdom Hybrid / WFH Options
Remotestar
computing frameworks such as Spark, KStreams, Kafka. Experience with Kafka and streaming frameworks. Understanding of monolithic vs. microservice architectures. Familiarity with the Apache ecosystem, including Hadoop modules (HDFS, YARN, HBase, Hive, Spark) and Apache NiFi. Experience with containerization and orchestration tools like Docker and Kubernetes. Knowledge of time-series or …
London, England, United Kingdom Hybrid / WFH Options
Anson McCade
Azure, GCP, Snowflake). Understanding of Data Mesh, Data Fabric, and product-led data strategies. Technical Knowledge: Familiarity with big data technologies (Apache Spark, Hadoop). Knowledge of programming languages such as Python, R, or Java. Experience with ETL/ELT processes, SQL, NoSQL databases, and DevOps principles. Understanding …
Lake activities. The candidate should have industry experience (preferably in Financial Services) in navigating enterprise Cloud applications using distributed computing frameworks such as Apache Spark, Hadoop, Hive. Working knowledge of optimizing database performance and scalability, and of ensuring data security and compliance. Education & Preferred Qualifications: Bachelor's/Master's Degree in Computer Science …
infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog). Familiarity with big data technologies like Apache Spark, Hadoop, or similar. ETL/ELT tools and creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores. Leadership & Strategy: Lead Data …
field. Strong proficiency in Python (PySpark, Pandas) and SQL. Experience with cloud platforms (AWS, GCP, or Azure). Familiarity with big data technologies (Apache Spark, Hadoop, Kafka) is a plus. How to Apply: Fill out the application form here: https://docs.google.com/forms/d/e/ …
in Python, SQL, and one or more of R, Java, Scala. Experience with relational/NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with big data tools (Hadoop, Spark, Kafka), cloud platforms (Azure, AWS, GCP), and workflow tools (Airflow, Luigi). Bonus: experience with BI tools, API integrations, and graph databases. Why Join …