Responsibilities: Analyze, design, develop, and test software solutions. Utilize Spring Boot, Spark (Big Data), and Message Bus Architecture. Work with containerization technologies like Kubernetes. Manage cloud infrastructure on AWS. Implement and maintain CI/CD pipelines using Jenkins. Qualifications: Bachelor's degree in Computer Science, Information Technology, or related … field. Proficiency in software engineering with Java and Spring, or other major programming languages. Preferred experience with Spring Boot, Spark (Big Data), containerization, AWS, and CI/CD pipelines.
in Terraform, Kubernetes, Shell/PowerShell scripting, CI/CD pipelines (GitLab, Jenkins), Azure DevOps, IaC, and experience with big data platforms like Cloudera, Spark, and Azure Data Factory/Databricks. Key Responsibilities: Implement and maintain Infrastructure as Code (IaC) using Terraform, Shell/PowerShell scripting, and CI/… teams to design the overall solution architecture for end-to-end data flows. Utilize big data technologies such as Cloudera, Hue, Hive, HDFS, and Spark for data processing and storage. Ensure smooth data management for marketing consent and master data management (MDM) systems. Key Skills and Technologies: Terraform: Essential … streamlined development workflows. Azure Data Factory/Databricks: Experience with these services is a plus for handling complex data processes. Cloudera (Hue, Hive, HDFS, Spark): Experience with these big data tools is highly desirable for data processing. Azure DevOps, Vault: Core skills for working in Azure cloud environments. Strong …
eCommerce or conversion rate optimisation-focused environment is a plus. Hands-on experience with machine and deep learning, AI, and neural network tools including Python, Spark, and TensorFlow. Competencies across core programming languages including Python, Java, C/C++, R. Ability to work in a cross-functional environment, managing stakeholders … including experience with Bayesian statistics, linear algebra and MVT calculus, advanced data modelling and algorithm design experience. Design and deployment experience using TensorFlow, Spark ML, CNTK, Torch or Caffe. The perks: A flexible environment that allows 1-2 days of remote working per week. 28 days holiday + …
and projects - and depending on your strengths and interests, you'll have the opportunity to move between them. Technologies we use: Java, Kotlin, Kubernetes, Apache Kafka, GCP, BigQuery, Spark. Our Culture: While we're looking for professional skills, culture is just as important to us. We understand that … brokers (Kafka, RabbitMQ, Pulsar, etc.). Experience in setting up data platforms and standards, not just pipelines. Experience with distributed data processing frameworks (e.g., Spark or Flink). About the Team: J.P. Morgan is a global leader in financial services, providing strategic advice and products to the world's …
Life Sciences and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues of $13 billion. Job Description: Spark - Must Have; Scala - Must Have; Hive & SQL - Must Have; Hadoop - Must Have; Communication - Must Have; Banking/Capital Markets Domain - Good to Have. Note … Candidate should know Scala/Python (Core) coding language. PySpark profile will not help here. Scala/Spark • Good Big Data resource with the below skillset: Spark, Scala, Hive/HDFS/HQL • Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.) • Experience in Big Data technologies; real … time data processing platform (Spark Streaming) experience would be an advantage. • Consistently demonstrates clear and concise written and verbal communication • A history of delivering against agreed objectives • Ability to multi-task and work under pressure • Demonstrated problem solving and decision-making skills • Excellent analytical and process-based skills, i.e. …
Experience in building machine learning models for business application. Experience in applied research. PREFERRED QUALIFICATIONS: Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience with large-scale distributed systems such as Hadoop, Spark, etc. Amazon is an equal opportunities employer.
responsible for helping to evolve our Data Service, BI architecture, and tools. The current technology stack is Linux-based, running within AWS, and built upon Spark on EMR, Kafka, EKS, Angular, and Java to provide a modern, scalable streaming platform. The right candidate will be seeking to share ideas and … the build/deployment pipeline. Full-stack experience, though primarily Java, GraphQL, Spring Framework, and Angular 8+. Secondary focus on the wider technology stack (Redshift, Spark, etc.). Application profiling and tuning. Work cross-functionally with various teams, creating solutions that deal with large volumes of data. Work with other …
in the areas of ML and causal inference for downstream impact estimation. The ideal candidate will have knowledge of at least one of the Ray, Spark, or RAPIDS frameworks to accelerate model training. A background in causal inference (e.g. Double ML) is a plus but not required. This is the … scientists and engineers, all based in Tokyo, Japan. We are a team that thrives on growth, both personal and professional. Engage in academic collaborations, spark innovation in hackathons, and expand your horizons with conference visits. Key job responsibilities: As a Senior Applied Scientist, your responsibilities will be: - Work closely …
Hunter Bond (London, South East England, United Kingdom; Hybrid/WFH options): My leading Global Consultancy client are looking for a Lead Data Architect to help shape their strategic data architecture, ensuring alignment with business goals, and driving innovation in investment data infrastructure. You'll work in the Investment Management space on …
About us: At Urban Jungle, we're making insurance fair - to people, planet and wallets. We're one of the fastest-growing businesses in the UK, working to fix one of the biggest industries in the world. We put customers …
structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing. PREFERRED QUALIFICATIONS: - Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. - Experience with large-scale distributed systems such as Hadoop, Spark, etc. Our inclusive culture empowers Amazonians to …