of data governance, security, and regulatory compliance
- Strong leadership skills with the ability to influence senior stakeholders
- Hands-on coding experience with SQL, Python, Spark, or similar technologies
- Experience working in a large-scale enterprise environment with highly distributed data ecosystems
Desirable skills/knowledge/experience: Azure/
experience with SQL, Python, R, or similar languages for data analysis.
- Familiarity with cloud platforms (e.g., AWS, Google Cloud) and big data tools (e.g., Spark, Snowflake).
- Exceptional leadership, project management, and interpersonal skills with a proven ability to manage and scale teams.
- Strong business acumen with the ability
technologies.
- Experienced in writing and running SQL and Bash scripts to automate tasks and manage data.
- Skilled in installing, configuring, and managing Hive on Spark with HDFS.
- Strong analytical skills with the ability to troubleshoot complex issues and analyze large volumes of text or binary data in Linux or
London, South East England, United Kingdom (Hybrid / WFH Options)
Kantar Media
experience working within a data driven organization
- Hands-on experience with architecting, implementing, and performance tuning of:
  - Data Lake technologies (e.g. Delta Lake, Parquet, Spark, Databricks)
  - API & Microservices
  - Message queues, streaming technologies, and event driven architecture
  - NoSQL databases and query languages
  - Data domain and event data models
- Data Modelling
in the legal domain.
- Ability to communicate with multiple stakeholders, including non-technical legal subject matter experts.
- Experience with big data technologies such as Spark, Hadoop, or similar.
- Experience conducting world-leading research, e.g. by contributions to publications at leading ML venues.
- Previous experience working on large-scale data
+ 10% bonus + benefits
Purpose: Build and maintain large, scalable Data Lakes, processes and pipelines
Tech: Python, Iceberg/Kafka, Spark/Glue, CI/CD
Industry: Financial services/securities trading
Immersum continue to support a leading SaaS securities trading platform, who are hiring their first Data … Infra tooling using Terraform, Ansible and Jenkins whilst automating everything with Python
Tech (experience in any listed is advantageous):
- Python
- Cloud: AWS
- Lake house: Apache Spark or AWS Glue
- Cloud Native storage: Iceberg, RDS, RedShift, Kafka
- IaC: Terraform, Ansible
- CI/CD: Jenkins, Gitlab
- Other platforms such as
synthesis prediction, including using QM toolkits (e.g., PSI4, Orca, Gaussian).
- Experience with data curation and processing from heterogeneous sources; familiarity with tools like Apache Spark or Hadoop.
- Proficiency with cloud platforms (AWS, GCP, Azure).
- Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
with toolkits such as BioPython
- AI Protein Dynamics
- Integrative structural modelling
- Experience with data curation and processing from heterogeneous sources; familiarity with tools like Apache Spark or Hadoop.
- Proficiency with cloud platforms (AWS, GCP, Azure).
- Familiarity with major machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
grade ML models
- Solid grasp of MLOps best practices
- Confident speaking to technical and non-technical stakeholders
🛠️ Tech you’ll be using:
- Python, SQL, Spark, R
- MLflow, vector databases
- GitHub/GitLab/Azure DevOps
- Jira, Confluence
🎓 Bonus points for:
- MSc/PhD in ML or AI
- Databricks ML
with attention to detail and accuracy.
- Adept at queries, report writing, and presenting findings.
- Experience working with large datasets and distributed computing tools (Hadoop, Spark, etc.).
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, etc.).
- Experience with data profiling tools and processes.
- Knowledge
learning algorithms and general statistical methodologies and theory.
- Basic knowledge of A/B testing and design of experiments.
- Advanced Python and SQL skills, experience using Spark for processing large datasets.
- Understanding of software product development processes and governance, including CI/CD processes and release and change management.
- Familiarity with
etc
- Cloud Computing: AWS, Azure, Google Cloud for scalable data solutions.
- API Strategy: Robust APIs for seamless data integration.
- Data Architecture: Finbourne LUSID, Hadoop, Spark, Snowflake for managing large volumes of investment data.
- Cybersecurity: Strong data security measures, including encryption and IAM.
- AI and Machine Learning: Predictive analytics, risk
Python); Software collaboration and revision control (e.g., Git or SVN).
Desired skills and experience:
- ElasticSearch/Kibana
- Cloud computing (e.g., AWS)
- Hadoop/Spark etc.
- Graph Databases
Educational level: Master's Degree
Familiarity with cloud platforms like AWS, GCP, or Azure.
- Strong written and spoken English skills.
Bonus Experience:
- Experience with big data tools (e.g., Hadoop, Spark) and distributed computing.
- Knowledge of NLP techniques and libraries.
- Familiarity with Docker, Kubernetes, and deploying machine learning models in production.
- Experience with visualization tools
and their techniques.
- Experience with data science, big data analytics technology stack, analytic development for endpoint and network security, and streaming technologies (e.g., Kafka, Spark Streaming, and Kinesis).
- Strong sense of ownership combined with a collaborative approach to overcoming challenges and influencing organisational change.
Amazon is committed to a
strategically about business, product, and technical challenges in an enterprise environment
- Extensive hands-on experience with data platform technologies, including at least three of: Spark, Hadoop ecosystem, orchestration frameworks, MPP databases, NoSQL, streaming technologies, data catalogs, BI and visualization tools
- Proficiency in at least one programming language (e.g., Python