explain and present the findings of technical work to non-expert audiences Fluency with Python machine learning and data science packages (pandas, scikit-learn, Apache Spark, Dask, TensorFlow, etc.) or experience with programming languages and a willingness to learn Python For engineering, experience in a DevOps role, ideally in more »
as a Lead Big Data Engineer with excellent knowledge of Big Data -Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc -Excellent experience of ETL, data warehousing and handling a variety of data types -Very strong knowledge of database technologies such as NoSQL, Relational more »
City of London, London, United Kingdom Hybrid / WFH Options
Oliver Bernard Ltd
Experience with a JVM language (Kotlin, Java, Scala, Clojure) Knowledge of TypeScript and React is beneficial Exposure to data pipelines using technologies such as Spark and Kafka Experience with cloud services (ideally AWS) Hybrid working 1-2 days per week in Central London. £110,000 depending on experience. Please more »
Greater London, England, United Kingdom Hybrid / WFH Options
Oliver Bernard
of Big Data -Great understanding of Cloud e.g. Azure and/or AWS -Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc -Excellent experience of ETL, data warehousing and handling a variety of data types -Very strong knowledge of database technologies such as NoSQL, Relational more »
experience with Python (2+ years) Experience working with REST Microservices Strong SQL Experience working with very large data sets Knowledge of big data tools (Spark, Kafka etc.) Experience working in finance (preferred) Strong formal education, ideally in Computer Science If this sounds of interest, then please do not hesitate more »
o Must have 8 years’ Experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source) o Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management) o Must years more »
ll also get exposure to Python, lots of SQL (of course) and depending on your level of experience, data stream processing tools like Kafka, Spark, etc. As this company continues to build new platforms and modernise, you’ll also be exposed to the cloud and various other modern tools more »
with JavaScript or Python Experience deploying software into the cloud and on-premise. Developing software products. Experience with EKS, Kubernetes, OpenSearch/ElasticSearch, MongoDB, Spark or NiFi. Experience with microservices architectures. Experience with AI/ML systems TO BE CONSIDERED…. Please either apply by clicking online or emailing more »
need broad expertise across various areas of the technology/software domain. Proficiency in AWS or Big Data, Hadoop or other SQL databases, Lucene, Spark, web app development (JavaScript, Node.js), Docker, Jenkins, Git, Python, or Ruby would be highly beneficial. Key Responsibilities: Meet with clients throughout the sales and more »
o Must have 8+ years’ Experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). o Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management) o Must more »
East London, London, United Kingdom Hybrid / WFH Options
Be Technology
systems. Deep knowledge of distributed and scalable systems, including proficiency with PostgreSQL, Ray, RabbitMQ, and Cassandra. Familiarity with big data technologies such as Hadoop, Spark, or Kafka. Experience with CI/CD Strong problem-solving skills and the ability to troubleshoot complex issues in distributed systems. Excellent communication more »
Greater London, England, United Kingdom Hybrid / WFH Options
Oliver Bernard
an Architect and excellent knowledge of Big Data -Excellent experience across Azure -Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc -Excellent experience of ETL, data warehousing and handling a variety of data types -Very strong knowledge of database technologies such as NoSQL, Relational more »
for business improvements Lead a small team of data scientists on Neural Networks (CNN & RNN), LLMs, ML & NLP NLP/AI/ML/Spark/Python/Data scientist/Machine Learning Engineer/OCR/Deep Learning Requirements Bachelor's degree or equivalent experience in a quantitative field more »
Platforms Must have 8+ years’ Experience with Relational Databases like Oracle, NoSQL Databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other Open Source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management) Must have 3+ more »
complex issues they are facing. Carry out data-driven analysis and craft solutions to resolve business problems. Artificial Intelligence and data science approaches (Python, R, MATLAB, Spark etc.). Database technologies such as Hadoop. Tools that expand the company's tool kit, advancing their ability to serve clients. Experience needed in a more »
of databases. Snowflake is widely used, as are Docker and Kubernetes for containerisation. ETL and ELT tech are also used every day, primarily Airflow, Spark, Hive and a lot more. You’ll need to come from a strong academic background with some commercial experience in a data heavy software more »
reports. 7. Knowledge of data integration techniques and tools (e.g., SSIS, Informatica) is desirable. 8. Experience in working with big data technologies (e.g., Hadoop, Spark) is a plus. 9. Excellent communication and collaboration skills, with the ability to effectively interact with technical and non-technical stakeholders. 10. Strong attention more »
working with AWS technologies such as Lambda, ECS Fargate, API Gateway, RDS, DynamoDB, EMR building customer-facing applications and APIs building data pipelines using Spark + Scala that process terabytes of data per day working with customers to understand the business context of new features participating in design reviews more »
essential: -Proven experience as an Architect and excellent knowledge of Big Data -Excellent knowledge of Hadoop and tools such as HBase/Hive and Spark etc -Excellent experience of ETL, data warehousing and handling a variety of data types -Very strong knowledge of database technologies such as NoSQL, Relational more »
in Python, R, and SQL. Extensive experience (over 5 years) in building Machine Learning models. Understanding of underlying data systems like Cloud architectures, K8S, Spark, and SQL. Fluency in English and German, with French being a plus. Desirable experience in Consulting or Customer-facing Data Science roles, Data Engineering more »
you! Minimum Qualifications Bachelor's or Master's Degree in Engineering or Computer Applications Hands-on experience with MS SQL Server and GCP Familiarity with BQ, Spark, Hive, Pig, and other analytical tools. Understanding of the finance domain. Preferred Qualifications Experience in SAP data modelling Genpact is an Equal Opportunity Employer and more »
product experimentation, Causal AI, and advanced statistical techniques. Deep knowledge of data science tools (e.g., scikit-learn, TensorFlow, PyTorch) and big data technologies (e.g., Spark). Proficiency in Python for data manipulation, model building, and scripting. Strong communication skills to present findings to both technical and non-technical audiences. more »
ingestion pipelines. Requirements: Proven experience working with Python or Java or C# Experience working with ETL/ELT technologies such as Airflow, Argo, Dagster, Spark, Hive Strong technical expertise, especially in data processing and exploration, with a willingness to learn new technologies. A passion for automation and driving continual more »
in architecture of the data workloads in EMR clusters. Designs system architecture to integrate easily with other AWS services. Strong background in technologies like Spark, Hive and PySpark. Key Experience: Experience of managing the full life cycle of a data platform solution in AWS. Experience of leading AWS cloud more »