you! Minimum Qualifications: Bachelor's or Master's degree in Engineering or Computer Applications. Hands-on experience with MS SQL Server and GCP. Familiarity with BigQuery, Spark, Hive, Pig, and other analytical tools. Understanding of the finance domain. Preferred Qualifications: Experience in SAP data modelling. Genpact is an Equal Opportunity Employer and more »
wrangling, modelling, feature engineering and deployment. What's equally relevant is programming knowledge and a gift for coding using R, Python, SQL, SAS or Spark. You'll also need the capability to deliver projects, collaborate with operational teams and explain technical concepts to non-specialists. What more »
Sheffield, England, United Kingdom Hybrid / WFH Options
Undisclosed
knowledge of security principles and best practices for cloud-based solutions. Preferred Skills: Certification in cloud platforms. Experience with big data technologies such as Apache Hadoop, Spark, or Kafka. Knowledge of data governance and compliance frameworks. Familiarity with DevOps practices and tools (e.g., Git, Jenkins, Terraform). HSBC more »
in programming languages commonly used in machine learning, preferably Python. Experience with machine learning frameworks and libraries, such as TensorFlow, PyTorch, scikit-learn, or Apache Spark. Proven track record of developing and implementing machine learning solutions in a professional setting. Passion for exploring new technologies and driving innovation in more »
product experimentation, Causal AI, and advanced statistical techniques. Deep knowledge of data science tools (e.g., scikit-learn, TensorFlow, PyTorch) and big data technologies (e.g., Spark). Proficiency in Python for data manipulation, model building, and scripting. Strong communication skills to present findings to both technical and non-technical audiences. more »
ingestion pipelines. Requirements: Proven experience working with Python, Java or C#. Experience working with ETL/ELT technologies such as Airflow, Argo, Dagster, Spark, Hive. Strong technical expertise, especially in data processing and exploration, with a willingness to learn new technologies. A passion for automation and driving continual more »
in architecture of the data workloads in EMR clusters. Designs system architecture to integrate easily with other AWS services. Strong background in technologies like Spark, Hive and PySpark. Key Experience: Experience of managing the full life cycle of a data platform solution in AWS. Experience of leading AWS cloud more »
analytics. The Client would also like to see experience of managing and leading a team of Data Scientists. Should have experience of Scala/Spark and Hadoop. Initially this is a 3-month contract assignment in Canary Wharf, with the likelihood that it will go on beyond that point. Location more »
network programming. Experience in building and enhancing compute, storage, and data platforms with exposure to open-source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS and the like. Hands-on with infrastructure-as-code tools and automation, such as Terraform, Ansible, or Helm. The role: Tech Lead responsible more »
or Rust. Experience in building and enhancing compute, storage, and data platforms with exposure to open-source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS, etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source more »
equity financing to mid-market and late-stage companies. Liquidity Group is backed by leading global financial institutions including Japan’s largest bank, MUFG, Spark Capital, and Apollo Asset Management. About the Role We are looking for highly motivated credit professionals who are able to work independently and in more »
Scala, or Kotlin. Experience with at least one of the following cloud providers: Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Spark, Hive, or Presto. Desirable Skills: Familiarity with the Scala programming language and popular frameworks such as Cats, Cats Effect, ZIO, and http4s. Familiarity with … best practices. Familiarity with Amazon Web Services (AWS), Terraform, and infrastructure-as-code (IaC) best practices. Familiarity with the Python programming language when applied to Spark and machine learning. Familiarity with Databricks and Apache Airflow products. Required Education & Experience: Bachelor's degree in Computer Science, Information Systems, Software, Electrical more »
and Data Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages and tools including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement Machine Learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI … Azure Cloud environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with ML/OPS more »
Responsibilities: As a pivotal member of our rapidly expanding business, your role involves: Designing and Maintaining Data Architectures and Pipelines: Create and optimize robust data architectures and pipelines. Assemble intricate data sets that align with both functional and non-functional more »
Birmingham, England, United Kingdom Hybrid / WFH Options
Lorien
in SQL, NoSQL, Blob, Delta Lake, and other enterprise-scale data stores. Data Orchestration - Enterprise-scale usage of technology such as Azure Data Factory, Apache Airflow, Logic Apps, DBT, SnapLogic, Spark or similar tools. Software Tooling - Git/GitHub, CI/CD, deployment tools like Octopus, Terraform infrastructure more »
Luton, England, United Kingdom Hybrid / WFH Options
Ventula Consulting
science and analytics team in deploying pipelines. Coach and mentor the team to improve development standards. Key requirements: Strong hands-on experience with Databricks, Spark, SQL or Scala. Proven experience designing and building data solutions on a cloud-based, distributed big data system (AWS/Azure etc.) Hands-on … models and following best practices. The ability to develop pipelines using SageMaker, MLflow or similar frameworks. Strong experience with data programming frameworks such as Apache Spark. Understanding of common Data Science and Machine Learning models, libraries and frameworks. This role provides a competitive salary plus an excellent benefits package. In more »
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Senitor Associates Limited
within Software Engineering to explore new technologies. Contribute to a team culture that prioritizes diversity, equity, inclusion, and respect. Required Skills: Expertise in Java, Spark, SQL, relational DBs, NoSQL, focusing on performance optimization. A thorough understanding of the Software Development Life Cycle and agile methodologies, including CI more »
. Preferably experience in setting up data platforms, setting standards - not just pipelines. Preferably experience in a distributed data processing environment/framework (e.g. Spark or Flink). Technologies: Java, Kotlin, Python (candidate is not expected to be proficient in all, and should be open to learning the others), Kubernetes … Apache Pulsar, GCP, BigQuery, BigTable, Spark (Note: we are at an early stage; these might change if the team decides there is a better fit) #ICBengineering #ICBcareers ABOUT US J.P. Morgan is a global leader in financial services, providing strategic advice and products to the world's more »