data services. DataOps Knowledge: Experienced with CI/CD for data workflows, version control (e.g., Git), and automation in data engineering. Desirable: Experience with Apache Spark; familiarity with machine learning frameworks and libraries; understanding of data governance and compliance; strong problem-solving and analytical skills; excellent communication and More ❯
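The CI/CD and automation requirements above are often demonstrated with automated data checks run on every commit. Purely as an illustration (the file path and column names are invented, not taken from the advert), a pytest-style data-quality test that a pipeline's CI job might execute could look like this:

```python
# Hypothetical data-quality tests of the kind a CI pipeline might run after an
# extract step. The file path and column names are invented for illustration.
import pandas as pd

EXTRACT_PATH = "exports/orders.csv"  # placeholder output of the pipeline under test

def test_order_ids_present_and_unique():
    df = pd.read_csv(EXTRACT_PATH)
    assert df["order_id"].notna().all(), "order_id must never be null"
    assert df["order_id"].is_unique, "order_id must be unique"

def test_amounts_are_non_negative():
    df = pd.read_csv(EXTRACT_PATH)
    assert (df["amount"] >= 0).all(), "amounts must be non-negative"
```

A CI job triggered from Git would simply run `pytest` against the latest extract and fail the build on any violated assertion.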
proficiency - from writing queries to basic database operations. Exposure to cloud infrastructure, ideally AWS. Bonus: Experience with streaming/data tools such as Kafka, Spark, or Delta Lake. Technology Stack & Environment: You'll be working with AWS, Kubernetes, Kafka, Argo, Java 17, Python 3, HDF5; distributed systems processing large More ❯
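As a rough sketch of the stack named above, assuming a JSON-encoded Kafka topic and the kafka-python and h5py libraries (the topic, broker address, and field names are invented):

```python
# Illustrative only: the topic, broker address, and message fields below are
# assumptions, not details from the advert.
import json

import h5py
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "sensor-readings",                                   # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

with h5py.File("readings.h5", "a") as f:
    # Resizable 1-D dataset so the file can grow as messages arrive.
    if "values" not in f:
        ds = f.create_dataset("values", shape=(0,), maxshape=(None,), dtype="f8")
    else:
        ds = f["values"]
    for msg in consumer:
        ds.resize(ds.shape[0] + 1, axis=0)               # append one reading per message
        ds[-1] = msg.value["value"]
        f.flush()
```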
London (City of London), South East England, United Kingdom
Selby Jennings
proficiency - from writing queries to basic database operations. Exposure to cloud infrastructure, ideally AWS. Bonus: Experience with streaming/data tools such as Kafka, Spark, or Delta Lake. Technology Stack & Environment: You'll be working with AWS, Kubernetes, Kafka, Argo, Java 17, Python 3, HDF5; distributed systems processing large More ❯
architects to design scalable solutions. Ensure adherence to best practices in Databricks data engineering. Required Skills & Experience: Hands-on experience with Microsoft Databricks and Apache Spark. Strong knowledge of Git, DevOps integration, and CI/CD pipelines. Expertise in Unity Catalog for data governance and security. Proven ability to More ❯
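A minimal sketch of the kind of Databricks/Spark work described, assuming a Unity Catalog-governed workspace; the catalog, schema, and table names below are placeholders:

```python
# Placeholder names throughout: "main.sales.raw_orders" etc. are not real tables.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks this returns the cluster's session

raw = spark.table("main.sales.raw_orders")  # three-level Unity Catalog name: catalog.schema.table

daily = (
    raw.filter(F.col("status") == "COMPLETE")
       .groupBy("order_date")
       .agg(F.sum("amount").alias("total_amount"))
)

# Writing back to a governed schema; Unity Catalog permissions apply to the target table.
daily.write.mode("overwrite").saveAsTable("main.sales.daily_totals")
```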
City of London, London, United Kingdom - Hybrid / WFH Options
Robert Half
Fabric, AWS Redshift). Strong understanding of data modelling (relational, dimensional, NoSQL) and ETL/ELT processes. Experience with data integration tools (e.g., Apache Kafka, Talend, Informatica) and APIs. Familiarity with big data technologies (e.g., Hadoop, Spark) and real-time streaming. Expertise in cloud security, data More ❯
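To make the "relational, dimensional" modelling point concrete, here is a toy sketch (in pandas, with invented columns) of splitting a flat extract into a dimension and a fact table:

```python
# Toy example with invented columns: split a flat orders extract into a
# customer dimension and an orders fact table.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "customer": ["Acme", "Acme", "Globex"],
    "country":  ["UK", "UK", "DE"],
    "amount":   [120.0, 80.0, 300.0],
})

# Dimension: one row per customer, keyed by a surrogate key.
dim_customer = (
    orders[["customer", "country"]]
    .drop_duplicates()
    .reset_index(drop=True)
    .rename_axis("customer_key")
    .reset_index()
)

# Fact: measures plus a foreign key into the dimension.
fact_orders = orders.merge(dim_customer, on=["customer", "country"])[
    ["order_id", "customer_key", "amount"]
]

print(dim_customer)
print(fact_orders)
```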
through data utilization. Knowledge/Experience: Expertise in Commercial/Procurement Analytics. Experience in SAP (S/4 Hana). Experience with Spark, Databricks, or similar data processing tools. Strong technical proficiency in data modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience … with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery More ❯
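A hypothetical batch ETL step of the sort listed above, sketched with PySpark; the S3 paths and column names are assumptions, not details from the advert:

```python
# Hypothetical paths and column names; not taken from the advert.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spend_by_supplier").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/purchase_orders.csv")

spend = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .groupBy("supplier_id")
       .agg(F.sum("amount").alias("total_spend"))
)

# Land the curated output for a warehouse (e.g. Redshift/Synapse/BigQuery) load step.
spend.write.mode("overwrite").parquet("s3://example-bucket/curated/spend_by_supplier/")
```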
availability architectures. Background creating reporting and reconciliation applications. Understanding of OTC products such as CDS, Interest Rate Swaps, Variance Swaps etc. needed. Expertise with C++, Spark, Kafka would be hugely beneficial. The key skillset is Python, but your expertise should extend to managing large datasets with PostgreSQL, and you should More ❯
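For the Python-plus-PostgreSQL point, a minimal sketch of streaming a large result set with a server-side cursor via psycopg2 (the connection string, table, and columns are invented):

```python
# Connection string, table, and columns are invented for the example.
import psycopg2

conn = psycopg2.connect("dbname=riskdb user=quant host=localhost")
with conn, conn.cursor(name="trades_stream") as cur:  # named cursor => server-side, rows stream in chunks
    cur.itersize = 10_000                             # rows fetched per round-trip
    cur.execute(
        "SELECT trade_id, notional FROM trades WHERE trade_date = %s",
        ("2024-06-28",),
    )
    total = 0.0
    for trade_id, notional in cur:
        total += float(notional)

print(f"total notional: {total:,.2f}")
```

Using a named (server-side) cursor keeps memory flat regardless of result size, which is the usual approach when reconciliation jobs scan large trade tables.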
as a Senior Data Scientist with Machine Learning expertise. Strong understanding of ML models and observability tools. Proficiency in Python and SQL. Experience with Spark and Apache Airflow. Knowledge of ML frameworks (PyTorch, TensorFlow, Scikit-Learn). Experience with cloud platforms, preferably AWS. Experience with containerization technologies. Useful information More ❯
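A bare-bones sketch of the Spark-and-Airflow combination mentioned above, assuming Airflow 2.4+; the dag_id, schedule, and task body are placeholders:

```python
# dag_id, schedule, and the task body are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def score_batch():
    # Stand-in for a step that would e.g. submit a Spark job or run model scoring.
    print("scoring batch...")

with DAG(
    dag_id="daily_model_scoring",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="score_batch", python_callable=score_batch)
```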