experience with Python (2+ years). Experience working with REST microservices. Strong SQL. Experience working with very large data sets. Knowledge of big data tools (Spark, Kafka, etc.). Experience working in finance (preferred). Strong formal education, ideally in Computer Science. If this sounds of interest, then please do not hesitate …
analytical abilities. Preferred Skills: Experience with cloud databases (e.g., AWS, Azure, Google Cloud Platform). Knowledge of big data tools and frameworks (e.g., Hadoop, Spark). Certification in database management (e.g., Microsoft Certified: Azure Data Engineer Associate). What We Offer: Competitive salary and comprehensive benefits package. Opportunity to …
with JavaScript or Python. Experience deploying software into the cloud and on-premise. Developing software products. Experience with EKS, Kubernetes, OpenSearch/Elasticsearch, MongoDB, Spark or NiFi. Experience with microservices architectures. Experience with AI/ML systems. TO BE CONSIDERED… please either apply by clicking online or emailing …
need broad expertise across various areas of the technology/software domain. Proficiency in AWS or Big Data, Hadoop or other SQL databases, Lucene, Spark, web app development (JavaScript, Node.js), Docker, Jenkins, Git, Python, or Ruby would be highly beneficial. Key Responsibilities: Meet with clients throughout the sales and …
Pharmaceutical industry experience. Experience in distributed system design. Experience with Pure/Alloy. Working knowledge of open-source tools such as AWS Lambda and Prometheus. Spark, Hadoop or Snowflake knowledge would be a plus. Additional Information Location: This role can be delivered in a hybrid nature from one of these offices …
Hybrid - 3 days a week. JOB DETAILS Must Have - Platform Engineer, Azure DevOps and CI/CD tools, Azure Cloud, Microsoft Fabric, Azure Services, Apache Spark, experience of using IaC (Terraform, APIs), Data Engineer, Big Data, PySpark …
master and metadata management. Experience with Azure SQL Database, Azure Data Factory, Azure Storage, Azure IaaS/PaaS-related database implementations. Experience with Apache Spark and the new Fabric framework would be a plus.
for business improvements. Lead a small team of data scientists on Neural Networks, LLMs (CNN & RNN), ML, & NLP. NLP/AI/ML/Spark/Python/Data Scientist/Machine Learning Engineer/OCR/Deep Learning. Requirements: Bachelor's degree or equivalent experience in a quantitative field …
Service, tackling the MLOps loop as a service, an interesting challenge. They are going to enable edge AI, another pioneering aspect. Tech stack: Athena, Spark, ECS, Temporal, Python, Flask, Redis, Postgres, React, Plotly, Docker …
complex issues they are facing. Carry out data-driven analysis and craft solutions to resolve business problems. Artificial Intelligence and data science approaches (Python, R, MATLAB, Spark, etc.). Database technologies such as Hadoop. Tools that expand the company's toolkit, advancing their ability to serve clients. Experience needed: Strong consulting …
complex issues they are facing. Carry out data-driven analysis and craft solutions to resolve business problems. Artificial Intelligence and data science approaches (Python, R, MATLAB, Spark, etc.). Database technologies such as Hadoop. Tools that expand the company's toolkit, advancing their ability to serve clients. Experience needed in a …
of databases. Snowflake is widely used, as are Docker and Kubernetes for containerisation. ETL and ELT tech are also used every day, primarily Airflow, Spark, Hive and a lot more. You’ll need to come from a strong academic background with some commercial experience in a data-heavy software …
reports.
7. Knowledge of data integration techniques and tools (e.g., SSIS, Informatica) is desirable.
8. Experience in working with big data technologies (e.g., Hadoop, Spark) is a plus.
9. Excellent communication and collaboration skills, with the ability to effectively interact with technical and non-technical stakeholders.
10. Strong attention …
and manage data lake/data warehouse platforms (some of the following types of providers: AWS, Microsoft Azure, Google Cloud Platform, Databricks, Snowflake, Cloudera, Spark, MongoDB). Done this at companies using high volumes of data, ideally in retailing. Other sectors using high-volume data would also be relevant …
processes, and data warehousing.
- Significant exposure and hands-on experience with at least 2 of the programming languages: Python, Java, Scala, Golang.
- Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks.
- Experience working with open table/storage formats like Delta Lake, Apache Iceberg or Apache Hudi.
- Experience of developing and managing real-time data streaming pipelines using change data capture (CDC), Kafka and Apache Spark.
- Experience with SQL and database management systems such as Oracle, MySQL or PostgreSQL.
- Strong understanding of data governance, data quality, data contracts, and data security best practices.
- Exposure …
DevOps experience in CI/CD. Experience using TensorFlow, Kubernetes, MLflow, Kafka, and Airflow. Experience using Python is a must (tools like AWS and Spark are beneficial). Excellent communication skills and team and colleague engagement. A keen interest in problem-solving and using scalable machine learning to solve the …
London, England, United Kingdom Hybrid / WFH Options
Harnham
and implement pre-processing pipelines for large data, create visualisations and reports for model performance, whilst collaborating with various engineers to improve knowledge and spark innovation. As the Machine Learning Engineer you will ideally have a degree in a relevant discipline: Computer Science, Maths, AI, or similar; at least …
in Python, R, and SQL. Extensive experience (over 5 years) in building machine learning models. Understanding of underlying data systems like cloud architectures, K8s, Spark, and SQL. Fluency in English and German, with French being a plus. Desirable: experience in Consulting or customer-facing Data Science roles, Data Engineering …
you! Minimum Qualifications: Bachelor's or Master's degree in Engineering or Computer Applications. Hands-on experience with MS SQL Server and GCP. Familiarity with BQ, Spark, Hive, Pig, and other analytical tools. Understanding of the finance domain. Preferred Qualification: Experience in SAP data modelling. Genpact is an Equal Opportunity Employer and …
engineer will provide advice to analytical users on how they can access and utilise the new datasets. Qualities: Comfortable with Python, ideally with experience of Apache Spark and PySpark. Previous data analytics software experience. Able to scope new integrations and translate analytical user needs into technical requirements. UK based …
ideally in a start-up or scale-up.
- Machine learning libraries and frameworks (TensorFlow, PyTorch, scikit-learn).
- Python.
- Big data processing tools (e.g., Spark).
The role offers a salary range of between £70-100K depending on experience. The successful candidate must be able to work from …
product experimentation, Causal AI, and advanced statistical techniques. Deep knowledge of data science tools (e.g., scikit-learn, TensorFlow, PyTorch) and big data technologies (e.g., Spark). Proficiency in Python for data manipulation, model building, and scripting. Strong communication skills to present findings to both technical and non-technical audiences.
ingestion pipelines. Requirements: Proven experience working with Python, Java or C#. Experience working with ETL/ELT technologies such as Airflow, Argo, Dagster, Spark, Hive. Strong technical expertise, especially in data processing and exploration, with a willingness to learn new technologies. A passion for automation and driving continual …