maintain high-volume Java- or Scala-based data processing jobs using industry-standard tools and frameworks in the Hadoop ecosystem, such as Spark, Kafka, Hive, Impala, Avro, Flume, Oozie, and Sqoop. Design and maintain schemas in our analytics database. Excellent in writing efficient SQL for loading and querying data. … technologies, languages, and techniques in the rapidly evolving world of high-volume data processing. Technologies We Use: Development languages/frameworks: Java/Scala, Apache Spark, Kafka, Vertica, JavaScript (React/Redux), MicroStrategy Amazon: EMR, Step Functions, SQS, Lambda and AWS cloud-native architectures DevOps Tools: Terraform or Cloud more »
Expertise in machine learning tools and packages like scikit-learn, SciPy, TensorFlow, Keras, etc. Expertise with large-scale distributed data processing systems such as Hive, Hadoop, Spark, etc. Strong SQL skills and experience with data manipulation tools like pandas, NumPy, etc. Familiarity with different Transformer-based architectures and their more »
like engineering, mathematics, statistics or operations research is a must. A Master's in a relevant field will be an added advantage. Preferred Qualifications: Knowledge of PySpark, Hive & other big data tools & techniques. Experience with data visualization tools, such as Tableau, Power BI, or similar tools, to efficiently communicate complex analyses and insights. Qualifications We more »
and troubleshooting of cloud systems. - Operational experience running a 24x7 production infrastructure at scale. - Proficiency working with data structures, schemas, and technologies like Hadoop, Hive, Redis, and MySQL. - Experience in using cloud-native services like GKE, EKS, AWS/GCP load balancing, AWS/GCP cloud storage platforms (S3 more »
in Unix and shell scripting. * Minimum of 1 year's experience in investment banking or the financial sector. * Performance tuning of Oracle/MySQL/Hive SQL queries and Spark SQL statements. * Experience in working with large databases - multi-terabyte (3+ terabytes). * Minimum of 5 years' experience in the Big … Data space (Hive, Impala, Spark SQL, HDFS etc.). * Any cloud experience (AWS/Azure/Google/Oracle). * Solid experience with Oracle objects (packages, procedures, functions). * Very clear concepts of Oracle architecture. * Very strong debugging skills. * Proficient in query tuning. * Detail-oriented. * Strong written and verbal communication more »
technical environment and/or comparable experience. Significant years of proven experience in Java, REST APIs, Spring Boot Framework, ReactJS, Redis, OpenShift, Elastic Stack, Kafka, Hive, Spark, UNIX/Linux and Oracle DB. Experience with a statistical programming language (Python or R) and experience with libraries specifically for Machine Learning or more »
data management (MDM) – concepts and expertise in tools like Informatica & Talend MDM. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks – Spark & Spark Streaming. Hands-on experience with multiple databases like PostgreSQL, Snowflake, Oracle, MS SQL Server, NoSQL (HBase/Cassandra, MongoDB more »
Chicago, Illinois, United States Hybrid / WFH Options
Request Technology - Robyn Honquest
required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink etc. (required) Experience working with various types of databases like relational, NoSQL, object-based more »
the following tools: Informatica PowerCenter, SAS Data Integration Studio, Microsoft SSIS, Ab Initio, etc. • Ideally, you have experience in the Hadoop ecosystem (Spark, Kafka, HDFS, Hive, HBase, …), Docker and an orchestration platform (Kubernetes, OpenShift, AKS, GKE...), and NoSQL databases (MongoDB, Cassandra, Neo4j) • Any experience with cloud platforms such as AWS, Azure more »
Proven experience as a Lead Big Data Engineer with excellent knowledge of Big Data - Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc. - Excellent experience of ETL, data warehousing and handling a variety of data types - Very strong knowledge of database technologies such as more »
Greater London, England, United Kingdom Hybrid / WFH Options
Oliver Bernard
excellent knowledge of Big Data - Great understanding of Cloud, e.g. Azure and/or AWS - Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc. - Excellent experience of ETL, data warehousing and handling a variety of data types - Very strong knowledge of database technologies such as more »
Manchester, England, United Kingdom Hybrid / WFH Options
Roku
Spark, Python and Java a lot). Can work with large-scale computing frameworks, data analysis systems and modeling environments; examples include technologies like Spark, Hive, NoSQL stores etc. Bachelor's, Master's or PhD in Computer Science/Statistics or a related field. Ad-tech background is a plus. #LI more »
experience is essential: - Proven experience as an Architect and excellent knowledge of Big Data - Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc. - Excellent experience of ETL, data warehousing and handling a variety of data types - Very strong knowledge of database technologies such as more »
Greater London, England, United Kingdom Hybrid / WFH Options
Oliver Bernard
experience as an Architect and excellent knowledge of Big Data - Excellent experience across Azure - Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc. - Excellent experience of ETL, data warehousing and handling a variety of data types - Very strong knowledge of database technologies such as more »
experience in building DW/BI systems · Demonstrated ability in data modeling, ETL development, and data warehousing · Strong experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.) · Expertise in a BI solution like Power BI · Hands-on experience in modelling databases (particularly NoSQL), working on indexes, materialized … with impressive visualization (Power BI) · Experience in building large-scale DW/BI systems for B2B SaaS companies · Experience with open-source tools like Apache Flink and AWS tools like S3, Redshift, EMR and RDS · Experience with AI/Machine Learning and Predictive Analytics · Experience in developing global products more »
databases/data stores (object storage, document or key-value stores, graph databases, column-family databases) • Experience with big data technologies such as: Hadoop, Hive, Spark, EMR, Snowflake, and Data Mesh principles • Team player • Proactive and resilient • A passion for social good Our Mission Statement: We are an equal more »
enhanced performance and scalability. Requirements: Professional Experience: At least four years of relevant data engineering experience. Technical Proficiency: Strong familiarity with Azure Databricks, Spark, Hive, Python, and SQL. Knowledge of Azure cloud services such as Data Factory and Storage accounts is highly regarded. Data System Mastery: Proven ability to more »
it pertains to data storage and computing • Experience with data modeling, warehousing and building ETL pipelines • Experience with big data technologies such as: Hadoop, Hive, Spark, EMR • Experience programming with at least one programming language such as C++, C#, Java, Python, Golang, PowerShell, Ruby • Experience with non-relational databases more »
and tooling used across the engineering team. Requirements: 5-7 years of Java experience; capital markets front-office experience; experience working with data lake (Hadoop) consumption, specifically Hive; Kafka experience; rules engine experience (ideally open-source/vendor products, e.g. Drools or Camunda); Unix scripting knowledge; Markets Regulatory/Trade Control knowledge; automation experience; cloud experience; containerisation more »
Data Engineer 6-Month Contract Inside IR35 £450/day Hiring Immediately Job Description (Apache Iceberg, Spark, Big Data) Job Details Overview: Overall IT experience of 5+ years with strong programming skills. Excellent skills in Apache Iceberg, Spark, Big Data. 3+ years of Big Data … project development experience. Hands-on experience in working areas like Apache Iceberg & Spark, Hadoop, Hive. Must have knowledge of a database, e.g. Postgres, Oracle, MongoDB. Excellent in SDLC processes and DevOps knowledge (Jira, Jenkins pipelines). Working in an Agile POD and with team collaboration. Ability to participate in deep more »
distributed data design principles commonly used in Hadoop and a solid understanding of processing large datasets (including streaming data and unstructured data utilising HBase, Hive, Impala and Spark). You are experienced in modern engineering practices and technologies and Scrum/Kanban and SAFe delivery. You have a proven more »
Elasticsearch and understanding of the Hadoop ecosystem. Experience working with large data sets; experience working with distributed computing tools like MapReduce, Hadoop, Hive, Pig etc. Advanced use of Excel spreadsheets for analytical purposes. MSc or PhD in Data Science or an analytical subject (Physics, Mathematics, Computing more »