Monitoring utilising products such as Prometheus, Grafana, the ELK stack, Filebeat, etc. Observability - SRE. Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop ecosystem. Edge technologies, e.g. NGINX, HAProxy. Excellent knowledge of YAML or similar languages. Desirable Requirements: JupyterHub awareness; MinIO or similar S3-compatible storage …
Technical Discipline. Technical Expertise: proficiency in SQL and experience with cloud-based data pipelines (Azure, AWS, GCP). Familiarity with big data tools such as Hadoop and Spark. Data Management Skills: hands-on experience working with large data sets, data pipelines, workflow management tools, and Azure cloud services. Exposure to …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Workday
algorithms and data structures. A proactive mindset with excellent problem-solving and communication skills. Experience with big data technologies such as Apache Kafka, Spark, Hadoop, or similar systems. Preferred Skills: demonstrated experience with scripting languages such as Python, Bash, etc. Testing and troubleshooting skills, with the ability to walk from …
limited to: backend technology, Python; databases such as MSSQL; front-end technology, Java; cloud platform, AWS; programming language, JavaScript (React.js); big data technologies such as Hadoop, Spark, or Kafka. What We Need from You: Essential Skills: a degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency …
Cloud ML Engine, Azure Data Lake, Azure Databricks, or GCP Cloud Dataproc. Familiarity with big data technologies and distributed computing frameworks such as Hadoop, Spark, or Apache Flink. Experience scaling an "API ecosystem" and designing and implementing "API-first" integration patterns. Experience working with authentication and authorisation protocols/…
with structured and unstructured data, and curate data to provide real-time contextualised insights. Manage the full data lifecycle; experience using Microsoft Azure, SQL Server, the Hadoop ecosystem, Spark, and Kafka, and building capabilities to host a wider set of technologies. When the team expands, mentoring of new data team members. Adopt … Skills & Experience you will have: experience working successfully within a start-up or scale-up business. Previous work with big data technology such as Hadoop, Spark, Kafka. Good working knowledge of containers, including Docker and Kubernetes, and experience working on the Microsoft Azure platform. Coding/…
Job Title: Senior Data Engineer (Machine Learning Start-up) Company: Our client is reshaping the e-commerce and retail industry with innovative data-driven solutions. Location: Flexible/Remote About Us: We're a passionate team dedicated to pushing the …
Company Description Petrolink is a global, independent, and neutral wellsite data solutions company providing services in major oil and gas regions worldwide. Our specialities include visualisation, data analytics, and data interoperability. Our technologies and services drive down the cost …
Experience with modeling tools such as PyTorch, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience with large-scale distributed systems such as Hadoop, Spark, etc. Experience in building speech recognition, machine translation, and natural language processing systems (e.g., commercial speech products or government speech projects). Amazon is …
the following platforms: MySQL or Cassandra. Experience of developing and deploying applications into AWS or a private cloud. Exposure to any of the following: Hadoop, JMS, ZooKeeper, Spring, JavaScript, UI development. Our Offer to You: an inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our …
Bedford, England, United Kingdom Hybrid / WFH Options
Understanding Recruitment
PyTorch, etc.) MLOps experience. Nice to have: familiarity with Git or other version control systems; computer vision library exposure; understanding of big data technologies (Hadoop, Spark, etc.); experience with cloud platforms (AWS, GCP, or Azure). This is a fully remote role, but it may require very occasional travel (once a …
best-of-breed Java toolsets, focused on microservices architectures, powerful front- and back-end frameworks, RESTful services, and everything from NoSQL databases such as MongoDB and Hadoop, and high-performance data grids such as Hazelcast, to multi-node relational systems. You will be working in a Scrum team of cross-functional skills in …
Job Description Software Engineering Lead, Data - SQL Server, Oracle, Python, Hadoop, ETL. Do you enjoy leading a team that develops high-quality code? Are you a highly visible champion with a 'can do' attitude and enthusiasm that inspires others? About the Business: LexisNexis Risk Solutions is the essential partner … point of escalation for software development issues within a specific area of responsibility. Requirements: have experience of working with databases, e.g. SQL Server, MySQL, Hadoop, Oracle, or any other RDBMS. Be able to partner with and lead internal and external technology resources in solving complex business needs. Be able to …
Romsey, Hampshire, United Kingdom Hybrid / WFH Options
CBSbutler Holdings Limited trading as CBSbutler
Experience with cloud technologies, e.g. AWS or Azure. Programming language experience, e.g. Java, Python, Node.js, or SQL. Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop. If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (url removed) …
Woking, Surrey, United Kingdom Hybrid / WFH Options
tools (EKS, EMR, Redshift, Postgres). Experience with, or interest in, orchestration and processing of large workloads using distributed data systems such as Airflow, Spark, Hadoop, and Trino. Experience with, or interest in, processing real-time streaming workloads with distributed event systems such as Kafka, Kinesis, Google Pub/Sub, Flink, or Spark …
Cheltenham, England, United Kingdom Hybrid / WFH Options
Ripjar
the nuances of dealing with structured and unstructured data, and be experienced in using databases (ideally MongoDB). Experience with Linux. Experience with Spark (PySpark), Hadoop, or other big data technologies would be beneficial, but is not required. Benefits - why we think you'll enjoy it here: base salary of up …
dealing with streaming and batch compute frameworks such as Spring Kafka, Kafka Streams, Flink, Spark Streaming, and Spark. Experience with large-scale computing platforms such as Hadoop, Hive, Spark, and NoSQL stores. Experience developing large-scale data pipelines is nice to have. Exposure to UI development is nice to have. …
capabilities such as GitHub, Jenkins, and UrbanCode methodologies, demonstrating engineering excellence and a passion for automation. Additionally, data domain technology experience including Teradata, Oracle, Hadoop, DB2, and familiarity with NoSQL databases such as Cassandra and HBase. Expertise in cloud-native technologies, including networking and security, is a plus. Understanding how …