Manchester, England, United Kingdom Hybrid / WFH Options
CMSPI
with innovative ideas or examples of coding challenges or competitions. Highly desirable skills: Familiarity with Agile practices in a collaborative team environment. Exposure to big data tools, such as Hadoop and Spark, for handling large-scale datasets. Experience with cloud platforms like Microsoft Azure. Benefits: Comprehensive payments industry training by in-house and industry experts. Excellent performance-based earning …
to tackle business problems. Comfort with rapid prototyping and disciplined software development processes. Experience with Python, ML libraries (e.g. spaCy, NumPy, SciPy, Transformers, etc.), data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL), and toolkits for ML and deep learning (SparkML, TensorFlow, Keras). Demonstrated ability to work on multi-disciplinary teams with diverse skillsets. Deploying machine learning models …
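As a loose illustration of the SparkML skills the listing above names, here is a minimal sketch of a text-classification pipeline; the training data, column names, and model choice are illustrative assumptions, not anything taken from the posting.

```python
# Minimal SparkML text-classification sketch (data and column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("sparkml-sketch").getOrCreate()

# Toy training data: (text, label).
train = spark.createDataFrame(
    [("spark makes big data easy", 1.0), ("unrelated text", 0.0)],
    ["text", "label"],
)

# Tokenize -> hash term frequencies -> fit a logistic regression.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)
model = Pipeline(stages=[tokenizer, tf, lr]).fit(train)

model.transform(train).select("text", "prediction").show()
```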
visualization tools such as Tableau, Power BI, or similar to effectively present validation results and insights. Nice-to-Have Requirements Familiarity with big data tools and technologies, such as Hadoop, Kafka, and Spark. Familiarity with data governance frameworks and validation standards in the energy sector. Knowledge of distributed computing environments and model deployment at scale. Strong communication skills, with …
and orchestration tools like Kubernetes * Understanding of CI/CD pipelines and DevOps practices * Knowledge of security best practices and data privacy considerations * Familiarity with big data technologies (e.g., Hadoop, Spark) is a plus * Basic understanding of machine learning concepts and their software engineering implications Key job responsibilities 1. Design and implement robust, scalable architectures for …
cloud platforms (Azure, AWS, GCP) Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools Experience in data platforms on premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, …)
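For context on the kind of pipeline work this listing describes, below is a minimal sketch of a Spark Structured Streaming job reading from Kafka. The broker address and topic name are hypothetical, and it assumes the spark-sql-kafka connector package is available on the Spark classpath.

```python
# Minimal streaming-ingest sketch: Kafka -> Spark Structured Streaming -> console.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string.
parsed = events.select(col("value").cast("string").alias("payload"))

query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```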
London, England, United Kingdom Hybrid / WFH Options
Endava Limited
RBAC, encryption) and ensure regulatory compliance (GDPR). Document data lineage and recommend improvements for data ownership and stewardship. Qualifications Programming: Python, SQL, Scala, Java. Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc. Data Modelling: Designing dimensional, relational, and hierarchical data models. Scalability & Performance: Building fault-tolerant, highly available data architectures. Security & Compliance: Enforcing role-based access control (RBAC) …
design of data architectures that will be deployed You have experience in database technologies, including writing complex queries against relational and non-relational data stores (e.g. Postgres, Apache Hadoop, Elasticsearch, Graph databases), and designing the database schemas to support those queries You have a good understanding of coding best practices & design patterns and experience with code & data versioning …
London, England, United Kingdom Hybrid / WFH Options
Made Tech Limited
different environments Owning the cloud infrastructure underpinning data systems through a DevOps approach Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of the possible architectures involved in modern data system design (e.g. Data Warehouse, Data Lakes and Data Meshes) and the different use cases for them Ability to …
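To make the JSON/CSV handling mentioned above concrete, here is a minimal PySpark sketch; the file paths, column names, and join key are hypothetical.

```python
# Minimal sketch: read CSV and JSON, transform, write Parquet (paths are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

# Read a CSV with a header row, inferring column types.
orders = spark.read.csv("/data/orders.csv", header=True, inferSchema=True)

# Read line-delimited JSON.
customers = spark.read.json("/data/customers.json")

# A typical transformation: filter, join, and persist as Parquet.
result = (
    orders.filter(col("amount") > 0)
    .join(customers, on="customer_id", how="inner")
)
result.write.mode("overwrite").parquet("/data/curated/orders")
```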
educational background and relevant certifications. Skills: Strong foundation in statistics and programming (R/Python). Experience with data preparation, visualisation, and model building. Knowledge of big data platforms (Hadoop, Spark) and SQL/NoSQL databases. Experience: 3+ years of experience as a Data Scientist or in a related role. Typical Responsibilities: Develop and maintain data products. Data Engineering … an understanding of how big data is used, the big data ecosystem, and its major components. The data scientist must also demonstrate expertise with big data platforms, such as Hadoop and Spark, and master SQL and NoSQL. Leadership and professional development: Data scientists must be good problem solvers. They must understand the opportunity before implementing the solution, work in …
predictive modelling, machine learning, clustering and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau, …)
Informatica, or Talend) and scripting (e.g., Python, Shell). Understanding of database security best practices and regulatory requirements (e.g., …). Experience with NoSQL or big data technologies (e.g., MongoDB, Cassandra, Hadoop). Qualifications: Familiarity with DevOps pipelines and Infrastructure as Code (e.g., Terraform, Liquibase). Certifications such as Microsoft Certified: Azure Database Administrator Associate, Oracle Database SQL Certified Associate, or … AWS Certified Database – Specialty. Exposure to BI/reporting tools like Power BI, Tableau, or Looker. Active DoD 8570-compliant certification (e.g., Security+, CASP+, CISSP). Experience with cloud platforms such as AWS GovCloud, Azure Government. Familiarity with configuration management tools (e.g., Ansible, Puppet). Experience in …
data points per day and create highly available data processing and REST services to distribute data to different consumers across PWM. Technologies used include: Data Technologies: Kafka, Spark, Hadoop, Presto, Alloy - a data management and data governance platform Programming Languages: Java, Scala, Scripting Database Technologies: MongoDB, ElasticSearch, Cassandra, MemSQL, Sybase IQ/ASE Micro Service Technologies: REST, Spring … tech stacks SKILLS AND EXPERIENCE WE ARE LOOKING FOR Computer Science, Mathematics, Engineering or other related degree at bachelor's level Java, Scala, Scripting, REST, Spring Boot, Jersey Kafka, Spark, Hadoop, MongoDB, ElasticSearch, MemSQL, Sybase IQ/ASE 3+ years of hands-on experience with relevant technologies ABOUT GOLDMAN SACHS At Goldman Sachs, we commit our people, capital and ideas …
statistics. Experience with machine learning frameworks like TensorFlow, Keras, or PyTorch. Knowledge of data analysis and visualization tools (e.g., Pandas, NumPy, Matplotlib). Familiarity with big data technologies (e.g., Hadoop, Spark). Excellent problem-solving skills and attention to detail. Ability to work independently and as part of a team. Preferred Qualifications: Experience with natural language processing (NLP) techniques. …
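As a rough sketch of the Pandas/NumPy/Matplotlib analysis-and-visualization skills this listing names, the following uses synthetic data; every value and column name here is invented for illustration.

```python
# Minimal analysis-and-plot sketch on synthetic data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=500)})
df["y"] = 2.0 * df["x"] + rng.normal(scale=0.5, size=500)

print(df.describe())  # quick summary statistics
print(df.corr())      # correlation between x and y

# Scatter plot of the synthetic relationship, saved to disk.
df.plot.scatter(x="x", y="y", alpha=0.4)
plt.title("Synthetic x vs y")
plt.savefig("scatter.png")
```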
sensitive data management (privacy, consent, encryption) Experience working with customer data platforms such as Salesforce or similar Excellent communication and stakeholder engagement skills Desirable: Exposure to big data tools (Hadoop, Spark, Kafka) Knowledge of integrating ML models and AI into data platforms Industry certifications (e.g. CDMP, AWS, Azure) Experience with data visualisation tools (Power BI, Tableau, Looker) This role …
e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience building data pipelines or automated ETL processes Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability …
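To illustrate the SparkSQL-style ETL step listings like this describe, here is a minimal sketch; the table, column, and path names are hypothetical.

```python
# Minimal SparkSQL ETL sketch: raw events -> daily aggregate (names are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Expose the raw data to SQL as a temporary view.
spark.read.parquet("/raw/clicks").createOrReplaceTempView("clicks")

# Aggregate raw events into a daily summary with SparkSQL.
daily = spark.sql(
    """
    SELECT dt, page, COUNT(*) AS views
    FROM clicks
    GROUP BY dt, page
    """
)

# Write the curated output partitioned by day.
daily.write.mode("overwrite").partitionBy("dt").parquet("/curated/daily_views")
```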
TB) data sets PREFERRED QUALIFICATIONS - Master's degree in statistics, data science, or an equivalent quantitative field - Experience using Cloud Storage and Computing technologies such as AWS Redshift, S3, Hadoop, etc. - Experience programming to extract, transform and clean large (multi-TB) data sets - Experience with AWS technologies
Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. Experience with AWS …
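For the workflow-management tools mentioned above, here is a minimal Airflow DAG sketch; the task logic and schedule are hypothetical, and it assumes the Airflow 2.4+ API (which uses the `schedule` parameter).

```python
# Minimal Airflow 2.x DAG sketch (task logic and schedule are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")


def load():
    print("write data to the warehouse")


with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # extract must finish before load runs.
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```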
in SQL, PySpark, and Python for data transformation and scripting. Hands-on experience with DevOps practices and managing CI/CD pipelines. Expertise in big data technologies such as Hadoop, Spark, and Kafka. Strong leadership skills, with experience in managing and developing high-performing teams. Familiarity with MuleSoft and systems thinking is a plus. Qualifications and Experience: Proven track …
with sensitive data management (privacy, consent, encryption) Experience working with customer data platforms such as Salesforce or similar Excellent communication and stakeholder engagement skills Exposure to big data tools (Hadoop, Spark, Kafka) Knowledge of integrating ML models and AI into data platforms Industry certifications (e.g. CDMP, AWS, Azure) Experience with data visualisation tools (Power BI, Tableau, Looker) This role …
Experience with unit testing, code profiling, and object-oriented programming Ability to work on multiple projects simultaneously and adapt to dynamic work environments Experience with Big Data platforms like Hadoop or Spark and knowledge of SQL is a plus. Proficiency with statistical programming and data visualization tools is highly desirable Continual learning attitude, with a focus on enhancing both …
and critical-thinking skills. Excellent communication and collaboration skills. Experience in AI-driven products or solutions in industries like healthcare, finance, retail, etc. Exposure to big data technologies (e.g., Hadoop, Spark). Knowledge of model interpretability and explainability methods. Onsite Work Location London - England Working Model Work from Office. You should be willing to work on-site at our …
Proficiency in GCP services, such as Compute Engine, Cloud Storage, BigQuery, Dataflow, and Kubernetes Engine Experience working with large-scale data processing, distributed computing, or cloud-based platforms (e.g., Hadoop, Spark, AWS, Azure) is highly desirable Familiarity with ESG investing principles and market trends is a plus Excellent problem-solving, analytical, and debugging skills Strong communication and interpersonal skills …
with cloud platforms (AWS, Azure, GCP) and deploying models. Ability to use data visualization tools like Tableau or Power BI. Nice-to-Have: Familiarity with big data tools like Hadoop, Kafka, Spark. Knowledge of data governance and validation standards in energy. Experience with distributed computing and large-scale deployment. Strong communication skills for explaining complex validation results. At GE …
London, England, United Kingdom Hybrid / WFH Options
Simon-Kucher & Partners
stand out: Implementation experience with Machine Learning models and applications Knowledge of cloud-based Machine Learning engines (AWS, Azure, Google, etc.) Experience with large scale data processing tools (Spark, Hadoop, etc.) Ability to query and program databases (SQL, NoSQL) Experience with distributed ML frameworks (TensorFlow, PyTorch, etc.) Familiarity with collaborative software tools (Git, Jira, etc.) Experience with user …
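As a small illustration of the ML-framework experience this listing asks for, here is a minimal PyTorch training loop; the model, data, and hyperparameters are toy choices for the sketch, not anything from the posting.

```python
# Minimal PyTorch training-loop sketch on random data (all values are toy choices).
import torch
from torch import nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(32, 10)  # toy batch of 32 examples with 10 features
y = torch.randn(32, 1)   # toy regression targets

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()       # backpropagate gradients
    opt.step()            # update weights

print(float(loss))
```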