skills and attention to detail. Strong communication and collaboration skills. Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert. Experience with big data technologies like Hadoop, Spark, or Databricks. Familiarity with machine learning and AI concepts. If you encounter any suspicious mail, advertisements, or persons who offer jobs at Wipro, please email us at helpdesk.recruitment
needs US citizenship and an active Secret clearance; will also accept TS/SCI or TS/SCI with CI Polygraph Desired Experience: Experience with big data technologies such as Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, ElasticSearch, Hive, Drill, Impala, Trino, Presto, etc. This role may require some on-call work. The Swift Group and Subsidiaries are an Equal
Reading, England, United Kingdom Hybrid / WFH Options
Coeo Ltd
Proficiency in designing and implementing ETL/ELT processes to integrate data from different sources. Proficiency in SQL and Python for data analysis, transformation and machine learning. Familiarity with Hadoop and Spark for large-scale data processing. Good knowledge of tools like Power BI for data visualisation and presentation. Understanding of statistical methods and their application in data analysis
cloud platforms (Azure, AWS, GCP) Hands-on experience with SQL, Data Pipelines, Data Orchestration and Integration Tools Experience in data platforms on-premises/cloud using technologies such as: Hadoop, Kafka, Apache Spark, Apache Flink, object, relational and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake)
London, England, United Kingdom Hybrid / WFH Options
Endava Limited
RBAC, encryption) and ensure regulatory compliance (GDPR). Document data lineage and recommend improvements for data ownership and stewardship. Qualifications Programming: Python, SQL, Scala, Java. Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc. Data Modelling: Designing dimensional, relational, and hierarchical data models. Scalability & Performance: Building fault-tolerant, highly available data architectures. Security & Compliance: Enforcing role-based access control (RBAC)
design of data architectures that will be deployed You have experience in database technologies including writing complex queries against their (relational and non-relational) data stores (e.g. Postgres, Apache Hadoop, Elasticsearch, Graph databases), and designing the database schemas to support those queries You have a good understanding of coding best practices & design patterns and experience with code & data versioning
London, England, United Kingdom Hybrid / WFH Options
Made Tech Limited
different environments Owning the cloud infrastructure underpinning data systems through a DevOps approach Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop Good understanding of the possible architectures involved in modern data system design (e.g. Data Warehouse, Data Lakes and Data Meshes) and the different use cases for them Ability to
predictive modelling, machine-learning, clustering and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)
Qualifications Experience in NLP, computer vision, time-series, or recommender systems. Knowledge of data privacy and ethical AI frameworks (e.g., GDPR compliance). Experience with big data frameworks (Spark, Hadoop). Previous experience working in highly regulated industries (e.g., finance, healthcare, public sector). What We Offer Competitive salary and performance-based bonuses. Flexible working hours and hybrid/
data points per day and create highly available data processing and REST services to distribute data to different consumers across PWM. Technologies used include: Data Technologies: Kafka, Spark, Hadoop, Presto, Alloy - a data management and data governance platform Programming Languages: Java, Scala, Scripting Database Technologies: MongoDB, ElasticSearch, Cassandra, MemSQL, Sybase IQ/ASE Micro Service Technologies: REST, Spring … tech stacks SKILLS AND EXPERIENCE WE ARE LOOKING FOR Computer Science, Mathematics, Engineering or other related degree at bachelor's level Java, Scala, Scripting, REST, Spring Boot, Jersey Kafka, Spark, Hadoop, MongoDB, ElasticSearch, MemSQL, Sybase IQ/ASE 3+ years of hands-on experience on relevant technologies ABOUT GOLDMAN SACHS At Goldman Sachs, we commit our people, capital and ideas
predictive modelling, machine-learning, clustering and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau) Seniority level: Entry level. Employment type: Full-time. Job function: Information Technology, Consulting, and Management. Industries: Staffing and Recruiting, Appliances, Electrical
and disseminate significant amounts of information with attention to detail and accuracy. Adept at queries, report writing, and presenting findings. Experience working with large datasets and distributed computing tools (Hadoop, Spark, etc.) Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, etc.). Experience with data profiling tools and processes. Knowledge of Microsoft Fabric is
concepts to non-technical stakeholders. Preferred Qualifications: Experience with cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes). Familiarity with big data processing frameworks (e.g., Hadoop, Spark) and distributed computing. Certifications in relevant technologies or project management (e.g., PMP, Scrum Master).
with sensitive data management (privacy, consent, encryption) Experience working with customer data platforms such as Salesforce or similar Excellent communication and stakeholder engagement skills Exposure to big data tools (Hadoop, Spark, Kafka) Knowledge of integrating ML models and AI into data platforms Industry certifications (e.g. CDMP, AWS, Azure) Experience with data visualisation tools (Power BI, Tableau, Looker) This role
and critical-thinking skills. Excellent communication and collaboration skills. Experience in AI-driven products or solutions in industries like healthcare, finance, retail, etc. Exposure to big data technologies (e.g., Hadoop, Spark). Knowledge of model interpretability and explainability methods. Work Location: London, England (onsite). Working Model: Work from office. You should be willing to work on-site at our
London, England, United Kingdom Hybrid / WFH Options
Simon-Kucher & Partners
stand out: Implementation experience with Machine Learning models and applications Knowledge of cloud-based Machine Learning engines (AWS, Azure, Google, etc.) Experience with large scale data processing tools (Spark, Hadoop, etc.) Ability to query and program databases (SQL, NoSQL) Experience with distributed ML frameworks (TensorFlow, PyTorch, etc.) Familiarity with collaborative software tools (Git, Jira, etc.) Experience with user
TB) data sets PREFERRED QUALIFICATIONS - Master's degree in statistics, data science, or an equivalent quantitative field - Experience using Cloud Storage and Computing technologies such as AWS Redshift, S3, Hadoop, etc. - Experience programming to extract, transform and clean large (multi-TB) data sets - Experience with AWS technologies Amazon is an equal opportunities employer. We believe passionately that employing a
Great experience as a Data Engineer Experience with Spark, Databricks, or similar data processing tools. Proficiency in working with the cloud environment and various software, including SQL Server, Hadoop, and NoSQL databases. Proficiency in Python (or similar), SQL and Spark. Proven ability to develop data pipelines (ETL/ELT). Strong inclination to learn and adapt to new
statistical analysis. Expertise in Python, with proficiency in ML and NLP libraries such as Scikit-learn, TensorFlow, Faiss, LangChain, Transformers and PyTorch. Experience with big data tools such as Hadoop, Spark, and Hive. Familiarity with CI/CD and MLOps frameworks for building end-to-end ML pipelines. Proven ability to lead and deliver data science projects in an
and relational database technologies (PostgreSQL, MySQL, RDS) US citizenship and an active TS/SCI with Polygraph security clearance required Desired Experience: Experience with distributed databases and streaming tools (Hadoop, Spark, Yarn, Hive, Trino) Experience with Remote Desktop Protocol (RDP) technologies Experience with data access control, specifically Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) Familiarity
London, England, United Kingdom Hybrid / WFH Options
EDB
years of experience in Data Warehouse solutions with an emphasis on Greenplum In-depth knowledge of Greenplum’s parallel processing and integration with big data frameworks such as Hadoop Strong understanding of data warehousing concepts and analytical query design Strong understanding of Kubernetes architecture (nodes, pods, services, deployments, etc.), knowledge of Kubernetes API objects and their relationships, proficiency