enhanced DV Clearance. We need the Data Engineer to have:
• Current DV clearance (MOD or Enhanced)
• Experience with big data tools such as Hadoop, Cloudera or Elasticsearch
• Experience with Palantir Foundry
• Experience working in an Agile Scrum environment with tools such as Confluence/Jira
• Experience in design …
data science/quantitative modeling to real-world financial use cases. Knowledge of open-source technologies and platforms commonly used for data analysis (e.g., Hadoop, Spark).
* Experience with relational databases such as MySQL or PostgreSQL, and with NoSQL stores such as Redis or MongoDB;
* Experience with Big Data technologies such as the Hadoop ecosystem is a plus.
* Excellent writing, proofreading and editing skills. Able to create documentation that can express cloud architectures using text and …
colleagues and teams in the UK, North America, and India.
Key responsibilities:
• Collaborating with the architecture team to define best practice in Java and Hadoop development paradigms, including documentation and system monitoring.
• Challenging and helping to direct our technical roadmap, and proposing the adoption of new technology or techniques.
…
• Providing breakdowns of project deliverables and estimates.
• Designing and building data pipelines and Hadoop storage objects.
• Assisting in the resolution of production issues when required.
• Mentoring team members.
• Working with data analysts to define logical data structures.
• Encouraging self-learning among the team.
Essential Skills & Qualifications:
• A confident engineer … with an authoritative knowledge of Java and Hadoop, including HDFS, Hive, and Spark.
• Comfortable working with large data volumes and able to demonstrate a firm understanding of logical data structures and analysis techniques.
• Strong skills in identifying and resolving code vulnerabilities, and familiarity with utilizing Citi tools in this …
Understanding of networking concepts and protocols (DNS, TCP/IP, DHCP, HTTPS, etc.).
BASIC QUALIFICATIONS
- 2+ years of experience in big data/Hadoop, with excellent knowledge of Hadoop architecture, administration, and support.
- Able to read Java code, with basic coding/scripting ability in …
- Databases (MySQL, Oracle, NoSQL) experience.
- Good understanding of distributed computing environments and excellent Linux/Unix system administration skills.
PREFERRED QUALIFICATIONS
- Proficient in Hadoop MapReduce and its ecosystem (ZooKeeper, HBase, HDFS, Pig, Hive, Spark, etc.).
- Good understanding of ETL principles and how to apply them within …
LiveRamp is the data collaboration platform of choice for the world's most innovative companies. A groundbreaking leader in consumer privacy, data ethics, and foundational identity, LiveRamp is setting the new standard for building a connected customer view with unmatched …
Senior Data Analytics Consultant - Public Sector and Defence
Are you passionate about harnessing data to drive strategic decision-making? Join a leading technology consultancy delivering tailored solutions to high-profile clients in National Security, Defence, and the UK Civil Service. As part of a fast-growing team of …
leading cloud big data platform for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Trino, Hadoop, Hive, and HBase. Amazon Athena is a serverless query service that simplifies analyzing data directly in Amazon S3 using standard SQL. The ODA Fundamentals … designing or architecting (design patterns, reliability, and scaling) of new and existing systems. Master's degree in computer science or equivalent. Experience with Apache Hadoop ecosystem applications: Hadoop, Hive, Presto, Spark, and more. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you …
City of London, England, United Kingdom (Hybrid/WFH options)
Henderson Scott
Tech You'll Use:
• Languages & Tools: SQL, Python, Power BI/Tableau, XML, JavaScript
• Platforms & Frameworks: Azure Data Services, Microsoft Fabric (nice to have), Hadoop, Spark
• Reporting & Visualization: Power BI, Tableau, Business Objects
• Methodologies: Agile/Scrum, CI/CD pipelines
What You'll Be Doing: Designing and building …
• SQL, Python, and BI platforms like Tableau or Power BI
• Strong background in data warehousing, data modelling, and statistical analysis
• Experience with distributed computing (Hadoop, Spark) and data profiling
• Skilled at explaining complex technical concepts to non-technical audiences
• Hands-on experience with Azure Data Services (or similar cloud …
Analytics Consultant, A2C. Job ID: Amazon Web Services Korea LLC. Are you a Data Analytics specialist? Do you have Data Warehousing and/or Hadoop experience? Do you like to solve the most complex and high-scale data challenges in the world today? Would you like a career that … with AWS services. - Hands-on experience leading large-scale global data warehousing and analytics projects. - Experience using some of the following: Apache Spark/Hadoop, Flume, Kinesis, Kafka, Oozie, Hue, Zookeeper, Ranger, Elasticsearch, Avro, Hive, Pig, Impala, Spark SQL, Presto, PostgreSQL, Amazon EMR, Amazon Redshift. Our inclusive culture …
technologies and analytics. Proficiency in C++, Java, Python, Shell Script; familiarity with R, Matlab, SAS Enterprise Miner. Knowledge of Elasticsearch and understanding of the Hadoop ecosystem. Experience working with large datasets and distributed computing tools such as Map/Reduce, Hadoop, Hive, Pig, etc. Advanced skills in Excel …
Experience of Big Data technologies/Big Data analytics. C++, Java, Python, Shell Script; R, Matlab, SAS Enterprise Miner. Elasticsearch and understanding of the Hadoop ecosystem. Experience working with large data sets and with distributed computing tools like Map/Reduce, Hadoop, Hive, Pig, etc. Advanced use …
with clients to understand their customer behaviour through deep data analysis and predictive modelling. You’ll leverage tools such as Python, PySpark, SQL, and Hadoop to build and deploy models that influence customer strategy across areas like propensity, churn, segmentation, and more. Key responsibilities include: Developing and deploying statistical … experience working with customer data and applying predictive modelling techniques. Proficiency in SQL, Python/PySpark, and exposure to big data environments such as Hadoop. Commercial experience in the FMCG or retail space is highly desirable. Previous experience working in a consultancy or client-facing role is a plus.
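Propensity and churn modelling of the kind this listing describes ultimately reduces to applying a fitted classifier to customer features. A minimal, hypothetical sketch in pure Python, with made-up coefficients standing in for a model that would in practice be trained with PySpark MLlib or scikit-learn:

```python
import math

# Hypothetical, hand-picked coefficients; a real churn model would learn
# these from historical customer data (e.g. with PySpark or scikit-learn).
WEIGHTS = {"months_inactive": 0.8, "support_tickets": 0.4, "tenure_years": -0.6}
BIAS = -1.0

def churn_probability(features: dict) -> float:
    """Logistic (sigmoid) score: P(churn) strictly between 0 and 1."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Score one illustrative customer: moderately inactive, short tenure.
p = churn_probability({"months_inactive": 3, "support_tickets": 2, "tenure_years": 5})
print(round(p, 3))  # → 0.31
```

Segmentation then follows by thresholding or bucketing these scores; the feature names and weights here are purely illustrative.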
Retail and CPG, and Public Services. Consolidated revenues of $13 billion.
Job Description:
=============
Spark - Must have
Scala - Must have
Hive & SQL - Must have
Hadoop - Must have
Communication - Must have
Banking/Capital Markets Domain - Good to have
Note: the candidate should know Scala/Python (core) as a coding language; PySpark alone will not suffice here.
Scala/Spark
• Good Big Data resource with the below skillset:
§ Spark
§ Scala
§ Hive/HDFS/HQL
• Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)
• Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage.
• Consistently …
industry experience maintaining a code base written in a high-level object-oriented language; formal studies or industry experience in distributed computing (e.g., MapReduce, Hadoop, AWS, DHTs); industry experience working with very large datasets; familiarity with parallel programming or parallel algorithms development; familiarity with machine learning concepts, data … Engineering); 2+ years' industry experience maintaining a code base written in a high-level object-oriented language; industry experience in distributed computing (e.g., MapReduce, Hadoop, AWS, DHTs); industry experience working with very large datasets; experience with parallel programming or parallel algorithms development; experience with machine learning concepts, data …
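Several listings above ask for MapReduce-style distributed computing. As a toy, single-process sketch of the model's three phases (pure Python; on a real Hadoop cluster the framework runs the map and reduce steps in parallel across machines and handles the shuffle itself):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc: str):
    # Map: emit (word, 1) pairs for each word in one document.
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group emitted values by key, as the framework does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts["data"])  # → 2
```

The classic word count above is the standard interview illustration of the paradigm; the input documents are of course invented.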
• Micro-services
• Container platforms (OpenShift, Kubernetes, CRC, Docker)
• NoSQL DBs (Cassandra, MongoDB, HBase, ZooKeeper, ArangoDB)
• Serialization libraries (Thrift, Protocol Buffers)
• Large-scale data processing (Hadoop, Kafka)
• Dependency injection frameworks (Guice, Spring)
• Text search engines (Lucene, Elasticsearch)
• Splunk/Elastic
• CI/CD build tools: Maven, Git, Jenkins
• Frameworks: Vert.x
…
• generation messaging systems
• Backends for mobile messaging systems
• SIP or XMPP
• Soft real-time systems
• Experience doing performance tuning
• Big Data technologies, such as Hadoop, Kafka, and Cassandra, to build applications that contain petabytes of data and process millions of transactions per day
• Cloud computing, virtualization and containerization
• Continuous …
Do you like our projects and want to be part of our team? If you can handle this, just apply for the right position! Hadoop/Java/R, London (UK). We need a resource with at least 10 years of overall experience who has predictive analytics experience using …
Demonstrable experience as a Technical Business Analyst with the ability to use internal tools to support high-level analysis, with experience of cloud platforms, Hadoop data stores, Power BI reporting, and SQL. Proven experience in business intelligence reporting within Financial Services. Understanding of change management methodologies and quality assurance best practice … with the ability to work under pressure on multiple projects with differing project timeframes. SQL experience for data analysis on Postgres, MS SQL, Hadoop Impala, etc.
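The SQL data analysis this role calls for (on Postgres, MS SQL, or Hadoop Impala) is typically plain aggregate queries. A minimal, hypothetical sketch, using Python's built-in SQLite as a stand-in for those engines; the table and figures are invented:

```python
import sqlite3

# SQLite stands in for Postgres / MS SQL / Impala; the GROUP BY
# aggregate itself is ordinary ANSI SQL that runs on any of them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (desk TEXT, notional REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("fx", 100.0), ("fx", 250.0), ("rates", 400.0)])

# Per-desk trade count and total notional, largest book first.
rows = conn.execute(
    "SELECT desk, COUNT(*) AS n, SUM(notional) AS total "
    "FROM trades GROUP BY desk ORDER BY total DESC"
).fetchall()
print(rows)  # → [('rates', 1, 400.0), ('fx', 2, 350.0)]
conn.close()
```

On Impala or Postgres the same statement would run against a real table; only the connection setup differs.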
and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13+ billion. Location - London. Skill - Apache Hadoop. We are looking for open-source contributors to Apache projects who have an in-depth understanding of the code behind the Apache ecosystem, should … debug and fix code in the open-source Apache code base, and should be individual contributors to open-source projects. Job description: The Apache Hadoop project requires up to 3 individuals with experience in designing and building platforms, and supporting applications both in cloud environments and on-premises. These … migrating and debugging various RiskFinder critical applications. They need to be "Developers" who are expert in designing and building Big Data platforms using Apache Hadoop, and who can support Apache Hadoop implementations both in cloud environments and on-premises.