would be an advantage: Data visualization – tools like Tableau; Master data management (MDM) – concepts and expertise in tools like Informatica and Talend MDM; Big data – the Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive; Data processing frameworks – Spark and Spark Streaming; Hands-on experience with multiple databases like PostgreSQL …
Technical Discipline. Technical Expertise: Proficiency in SQL and experience with cloud-based data pipelines (Azure, AWS, GCP). Familiarity with big data tools like Hadoop and Spark. Data Management Skills: Hands-on experience working with large data sets, data pipelines, workflow management tools, and Azure cloud services. Exposure to …
experience, Experience with cloud computing platforms such as AWS, Azure, or GCP (Google Cloud Platform). Familiarity with big data technologies such as Apache Hadoop, Spark, or Kafka. Experience deploying machine learning models in production environments. Contributions to open-source machine learning projects or research publications in relevant conferences …
Master's preferred). Excellent problem-solving and communication skills. Can be advantageous if you have: cloud platform experience (AWS, Azure, GCP), big data tech (Hadoop, Spark), containerization (Docker, Kubernetes), DevOps and CI/CD understanding. We regret to inform you that only shortlisted candidates will be notified/contacted.
learn). Understanding of database technologies (ETL) and SQL proficiency for data manipulation, data mining and querying. Knowledge of big data tools (Spark or Hadoop a plus). Power BI, dashboard design/development. Regulatory Awareness/Compliance: Uphold regulatory/compliance requirements relevant to your role, escalating areas …
software engineer in a globally distributed team working with the Scala and Java programming languages (preferably both). Experience with big-data technologies Spark/Databricks and Hadoop/ADLS is a must. Experience in any one of the cloud platforms: Azure (preferred), AWS or Google Cloud. Experience building data lakes and data …
data solutions (AWS, Azure or GCP), engineering languages including Python, SQL, Java, and pipeline management tools, e.g. Apache Airflow. Familiarity with big data technologies, Hadoop, or Spark. If this opportunity is of interest, or you know anyone who would be interested in this role, please send your CV and …
know one or more of the following tools: Informatica PowerCenter, SAS Data Integration Studio, Microsoft SSIS, Ab Initio, etc. • Ideally, you have experience in the Hadoop ecosystem (Spark, Kafka, HDFS, Hive, HBase, …), Docker and an orchestration platform (Kubernetes, OpenShift, AKS, GKE...), and NoSQL databases (MongoDB, Cassandra, Neo4j) • Any experience with cloud …
through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other MapReduce tools); Develop and productionize containerized algorithms for deployment in hybrid cloud environments (GCP, Azure); Connect and blend data from various …
relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases) • Experience with big data technologies such as: Hadoop, Hive, Spark, EMR, Snowflake, and Data Mesh principles • Team player • Proactive and resilient • A passion for social good Our Mission Statement: We are an …
automation & configuration management: Ansible (plus Puppet, SaltStack), Terraform, CloudFormation; Node.js, React/Material UI (plus Angular), Python, JavaScript; big data processing and analysis, e.g. Apache Hadoop (CDH), Apache Spark; Red Hat Enterprise Linux, CentOS, Debian or Ubuntu; Java 8, Spring framework (preferably Spring Boot), AMQP/RabbitMQ, open source technologies; Experience of …
Birmingham, West Midlands (County), United Kingdom
Workday
the following platforms: MySQL or Cassandra. Experience of developing and deploying applications into AWS or a private cloud. Exposure to any of the following: Hadoop, JMS, ZooKeeper, Spring, JavaScript, UI development. Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our …
Better Placed Ltd - A Sunday Times Top 10 Employer in 2023!
data warehousing technologies (e.g. Redshift, Snowflake). Strong analytical and problem-solving skills. Experience with cloud platforms (AWS, Azure, or GCP) and big data frameworks (Hadoop, Spark) is a plus. Data Engineer – London …
limited to: Backend technology, Python. Databases like MSSQL. Front-end technology, Java. Cloud platform, AWS. Programming language, JavaScript (React.js). Big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency …
solving skills and creativity. Google Cloud Professional Cloud Architect or Professional Cloud Developer certification. Highly desirable to have hands-on experience with ETL tools, Hadoop-based technologies (e.g. Spark), and batch/streaming data pipelines (e.g. Beam, Flink). Proven expertise in designing and constructing data lakes and data …
/Kotlin. Familiarity with Kotlin or willingness to learn. Industrial experience with AWS/GCP/Azure. Knowledge of common data products such as Hadoop, Spark, Airflow, PostgreSQL, S3, etc. Problem-solving/troubleshooting skills and attention to detail. 👋 About Us: High-quality data access and provisioning shouldn't …
capabilities such as GitHub, Jenkins and UrbanCode methodologies, demonstrating engineering excellence and a passion for automation. Additionally, data domain technology experience including: Teradata, Oracle, Hadoop, DB2 and familiarity with NoSQL databases such as Cassandra and HBase. Expertise in cloud-native technologies including networking & security is a plus. Understanding how …
Cloud ML Engine, Azure Data Lake, Azure Databricks or GCP Cloud Dataproc. Familiarity with big data technologies and distributed computing frameworks, such as Hadoop, Spark, or Apache Flink. Experience scaling an “API ecosystem”, designing and implementing “API-first” integration patterns. Experience working with authentication and authorisation protocols/ …
and classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with big data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau …
Bedford, England, United Kingdom Hybrid / WFH Options
Understanding Recruitment
PyTorch etc.) MLOps experience. Nice to have: Familiarity with Git or other version control systems; computer vision library exposure; understanding of big data technologies (Hadoop, Spark, etc.); experience with cloud platforms (AWS, GCP or Azure). This is a fully remote role, but may require very occasional travel (once a …
Northampton, Northamptonshire, East Midlands, United Kingdom Hybrid / WFH Options
Dupen Ltd
Linux, APIs, infrastructure design (load balancing, VMs), PostgreSQL, vector DBs. Senior Machine Learning Engineer desirable skills: version control (Git), computer vision libraries, big data (Hadoop, Spark), cloud (AWS, Google Cloud, Azure), and knowledge of secure coding standards (PCI-DSS, PA-DSS, ISO 27001). This is a fantastic opportunity to …
experience in ETL technical design, automated data quality testing, QA, documentation, data warehousing, data modelling, and data wrangling. Proficiency in RDBMS, ETL pipelines, Python, Hadoop, SQL, and a solid grasp of modern code development practices. Ability to manage multiple data and analytic systems with an awareness of decentralised data …
a sense of trust with stakeholders. Preferred qualifications, capabilities and skills: Experience with deep learning frameworks (PyTorch, TensorFlow); experience with big-data technologies (Spark, Hadoop) or distributed computation frameworks (Dask, Modin); hands-on experience with Natural Language Processing (NLP) and Large Language Models (LLMs); experience of creating and deploying …
environments: Sybase ASE/IQ, Oracle or DB2. It would be great if you have: Experience in cluster computing and big data solutions: Spark, Hadoop, HDFS, XRS using public cloud. Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the …