Good to have. Interview includes a coding test. Job Description: Scala/Spark • A strong Big Data resource with the following skill set: Spark, Scala, Hive/HDFS/HQL • Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage • Consistently demonstrates clear and concise written …
such will have a large focus on training and building your skill set. You will have a genuine opportunity to build your knowledge of cloud and big data, particularly Hadoop and related data science technology. This is a fantastic opportunity to join one of the most innovative and exciting technology companies in London and build a career in the …
Data Scientist - skills in statistics, physics, mathematics, Computer Science, Engineering, Data Mining, Big Data (Hadoop, Hive, MapReduce) This is an exceptional opportunity to work as a Data Scientist within a global analytics team, utilizing various big data technologies to develop complex behavioral models, analyze customer uptake of products, and foster new product innovation. Responsibilities include: Generating and reviewing large …
their own Cloud estate. Responsibilities include: DevOps tooling/automation written with Bash/Python/Groovy/Jenkins/Golang Provisioning software/frameworks (Elasticsearch/Spark/Hadoop/PostgreSQL) Infrastructure management - configuration as code and infrastructure as code (Ansible, Terraform, Packer) Log and metric aggregation with Fluentd, Prometheus, Grafana, Alertmanager Public Cloud, primarily GCP, but also AWS and Azure …
and machine learning PREFERRED QUALIFICATIONS - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. - Master's degree in math/statistics/engineering or other equivalent quantitative discipline, or PhD …
home, there's nothing we can't achieve in the cloud. BASIC QUALIFICATIONS 7+ years of technical specialist, design and architecture experience 5+ years of database (e.g. SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience 7+ years of consulting, design and implementation of serverless distributed solutions experience 5+ years of software development with object-oriented language experience 3+ years of …
of influencing C-suite executives and driving organizational change • Bachelor's degree, or 7+ years of professional or military experience • Experience in technical design, architecture and databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) • Experience implementing serverless distributed solutions • Software development experience with object-oriented languages and deep expertise in AI/ML PREFERRED QUALIFICATIONS • Proven ability to shape market …
an asset: Testing performance with JMeter or similar tools Web services technology such as REST, JSON or Thrift Testing web applications with Selenium WebDriver Big data technology such as Hadoop, MongoDB, Kafka or SQL Network principles and protocols such as HTTP, TLS and TCP Continuous integration systems such as Jenkins or Bamboo What you can expect: At Global Relay …
a relevant discipline such as Computer Science, Statistics, Applied Mathematics, or Engineering - Strong experience with Python and R - A strong understanding of a number of the tools across the Hadoop ecosystem, such as Spark, Hive, Impala & Pig - Expertise in at least one specific data science area, such as text mining, recommender systems, pattern recognition or regression models - Previous …
in applied research PREFERRED QUALIFICATIONS - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. - PhD. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your …
neural deep learning methods and machine learning PREFERRED QUALIFICATIONS - Experience with popular deep learning frameworks such as MXNet and TensorFlow - Experience with large-scale distributed systems such as Hadoop, Spark, etc. Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your experience …
experience across AWS Glue, Lambda, Step Functions, RDS, Redshift, and Boto3. Proficient in one of Python, Scala or Java, with strong experience in Big Data technologies such as Spark, Hadoop, etc. Practical knowledge of building real-time event streaming pipelines (e.g. Kafka, Spark Streaming, Kinesis). Proven experience developing modern data architectures including Data Lakehouse and Data Warehousing. A … tooling, and data governance including GDPR. Bonus Points For: Expertise in data modelling, schema design, and handling both structured and semi-structured data. Familiarity with distributed systems such as Hadoop, Spark, HDFS, Hive, Databricks. Exposure to AWS Lake Formation and automation of ingestion and transformation layers. Background in delivering solutions for highly regulated industries. Passion for mentoring and enabling …
should take pride in the software you produce. Apache Maven Desirable Skills and Experience Industry experience of any of the following technologies/skills would be beneficial: Elasticsearch, Docker, Apache Hadoop, Kafka or Camel, JavaScript, knowledge of both Windows and Linux operating systems, Kubernetes, NiFi, NSQ, Apache Ignite, ArangoDB. Why BAE Systems? This is a place where you'll be …
London, England, United Kingdom · Hybrid/WFH options · Hays
focused environment, with a strong understanding of sustainability data. Programming skills in Python and/or R, with working knowledge of SQL and exposure to Big Data technologies (e.g. Hadoop). Solid background in financial services, preferably within asset management or investment analytics. Full-time office presence is required for the first 3 months. After this period, a hybrid …
in software development with at least 2 server-side languages, Java being a must-have. Proven experience with microservices architecture and scalable, distributed systems. Proficient in data technologies like MySQL, Hadoop, or Cassandra. Experience with batch processing, data pipelines, and data integrity practices. Familiarity with AWS services (e.g., RDS, Step Functions, EC2, Kinesis) is a plus. Solid understanding of …
with rich textual content Experience of Java programming; can independently prototype solutions to problems Experience with Recommender Systems, NLP and Machine Learning libraries Experience with big data technologies (e.g. Hadoop, MapReduce, Cascading, Scalding, Scala) is desirable but not required Unix skills Experience with start-up and R&D environments Strong presentation skills in communicating with experts and novices Language …
techniques for LLMs PREFERRED QUALIFICATIONS - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. - PhD in math/statistics/engineering or other equivalent quantitative discipline - Experience with conducting research in a corporate setting - Experience in patents or publications at top …
Should desirably have knowledge of modeling techniques (logit, GLM, time series, decision trees, random forests, clustering), statistical programming languages (SAS, R, Python, MATLAB) and big data tools and platforms (Hadoop, Hive, etc.). Solid academic record. Strong computer skills. Knowledge of other languages is desirable. Get-up-and-go attitude, maturity, responsibility and a strong work ethic. Strong ability to …
configuration management/deployment tools such as Ansible and Red Hat Satellite. Experience with firewall and switch configuration, virtualization technologies like VMware, and software technologies such as Apache, Docker, Hadoop, MySQL, and network services (DHCP, DNS, LDAP) is essential. Experience working within governance frameworks like the National Cyber Security Centre guidance and the Government Digital Service Technology Code of …
Supermicro is a Top Tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC and IoT/Embedded customers worldwide. We are amongst the fastest-growing companies among the Silicon Valley Top 50 technology firms. Our unprecedented global …
the latest tools and technologies to design, develop, and implement solutions that transform businesses and drive innovation. What will your job look like? 4+ years of relevant experience in Hadoop with Scala development. It is mandatory that the candidate has handled more than 2 projects in the above framework using Scala. Should have 4+ years of relevant experience in … handling end-to-end Big Data technology. Meeting with the development team to assess the company's big data infrastructure. Designing and coding Hadoop applications to analyze data collections. Creating data processing frameworks. Extracting data and isolating data clusters. Testing scripts and analyzing results. Troubleshooting application bugs. Maintaining the security of company data. Training staff on application use. Good … platform & data development roles. 5+ years of experience in big data technology, with experience ranging from platform architecture, data management and data architecture to application architecture. High proficiency working with the Hadoop platform, including Spark/Scala, Kafka, SparkSQL, HBase, Impala, Hive and HDFS in multi-tenant environments. Solid base in data technologies like warehousing, ETL, MDM, DQ, BI and analytical …
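The Hadoop/Scala responsibilities described above ("designing and coding Hadoop applications to analyze data collections") boil down to map-and-reduce style aggregation. As a minimal illustration of that pattern, here is a self-contained word count in plain Scala; it has no Spark dependency, and all names are illustrative, not taken from any of the postings:

```scala
// Word count: the canonical MapReduce-style aggregation that Hadoop and
// Spark jobs perform at cluster scale, sketched here in plain Scala.
object WordCount {
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.toLowerCase.split("\\W+")) // map: split each line into words
      .filter(_.nonEmpty)                   // drop empty tokens
      .groupBy(identity)                    // shuffle: group identical words
      .view.mapValues(_.size).toMap         // reduce: count each group

  def main(args: Array[String]): Unit =
    println(count(Seq("Spark and Hadoop", "Spark Streaming")))
}
```

In a real Spark job the same pipeline would read from HDFS and use flatMap/groupBy/count on a Dataset or RDD, letting the cluster distribute the shuffle and reduce steps.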