Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouses, Data Lakes, Data Meshes). Ability to create data pipelines on a …
SQL and experience with relational databases like PostgreSQL or MySQL. Strong programming skills in Python, Java, or Scala. Experience with big data technologies such as Hadoop, Spark, or Kafka. Familiarity with cloud platforms and their data services. Knowledge of data warehousing solutions. Your soft skills: excellent problem-solving skills and …
CD tooling. Scripting experience (Python, Perl, Bash, etc.). ELK (Elastic Stack). JavaScript. Cypress. Linux experience. Search engine technology (e.g., Elasticsearch). Big data technology experience (Hadoop, Spark, Kafka, etc.). Microservice and cloud-native architecture. Desirable skills: able to demonstrate experience troubleshooting and diagnosing technical issues. Able to demonstrate …
through improved data handling and analysis. Responsibilities: build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other MapReduce tools); develop and productionize containerized algorithms for deployment in hybrid cloud environments (GCP, Azure); connect and blend data from various …
effective deployment on these platforms. Database management: skilled in both relational and NoSQL databases, including MongoDB, Neo4j, and Redis. Additional skills: experience with Spark, Hadoop, or similar systems. Practical knowledge of Docker, Kubernetes, or similar technologies. Familiarity with CI/CD tools like CircleCI, Jenkins, etc. Tech curiosity with …
Chicago, Illinois, United States Hybrid / WFH Options
Request Technology - Robyn Honquest
CLI and IAM, etc. (required). Experience with distributed message brokers such as Kafka (required). Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required). Experience working with various types of databases, such as relational, NoSQL, object …
modeling, data access, and data storage techniques. Excellent problem-solving skills and the ability to think algorithmically. Desirable skills: knowledge of big data technologies (Hadoop, Spark, Kafka) is highly desirable. Familiarity with data governance and compliance requirements. …
as dbt, Fivetran, etc. Understanding of Agile delivery best practice. Good knowledge of the relevant technologies, e.g. SQL, Oracle, PostgreSQL, Python, ETL pipelines, Airflow, Hadoop, Parquet. Strong problem-solving and analytical abilities. Ability to present solutions and limitations to non-IT business experts. About you: integrity, respect, intellectual curiosity …
experience applying data analytics. Hands-on experience with intelligent systems and machine learning. Experience with interpretability of deep learning models. Big data skills (Azure, Hadoop, Spark, recent deep learning platforms). Experience with text mining tools and techniques, including summarization, search (e.g. ELK Stack), entity extraction, training …
Azure SQL Data Warehouse, Azure Data Lake, AWS S3, AWS RDS, AWS Lambda or similar. Have experience with open-source big data products, e.g. Hadoop Hive, Pig, Impala or similar. Have experience with open-source non-relational or NoSQL data repositories such as MongoDB, Cassandra, Neo4j or similar. Be …
Newcastle upon Tyne, Northumberland, United Kingdom
Confidential
Azure SQL Data Warehouse, Azure Data Lake, AWS S3, AWS RDS, AWS Lambda or similar. Have experience with open-source big data products, e.g. Hadoop Hive, Pig, Impala or similar. Have experience with open-source non-relational or NoSQL data repositories such as MongoDB, Cassandra, Neo4j or similar. Be …
and managing Spark clusters. Demonstrated experience tuning and optimizing Apache Spark. Demonstrated experience with the Databricks platform and the Databricks API and CLI. Demonstrated understanding of the Hadoop Distributed File System (HDFS). Demonstrated understanding of the Databricks File System (DBFS). Demonstrated experience with infrastructure as code (IaC) technologies, including AWS Cloud …