impressive visualization (Power BI) · Experience in building large-scale DW/BI systems for B2B SaaS companies · Experience with open-source tools like Apache Flink and AWS tools like S3, Redshift, EMR and RDS · Experience with AI/Machine Learning and Predictive Analytics · Experience in developing global products …
and transitioning towards event-driven architecture · Utilizing Java 17+ with Spring/Spring Boot 3+, SQL/Oracle · Effective experience with Redis, Apache Kafka, Flink or similar · Striving for serverless solutions utilizing Linux, virtualization, containers, Docker, Kubernetes, potentially in production environments, but at least for testing purposes · Production monitoring …
We are cloud-native, born in AWS, and embrace their services across our platform. Our technology stack includes Python, Django, ECS, Postgres, Kinesis, CloudFront, Flink, Elasticsearch, Lambda, AmazonMQ and Terraform. Tooling includes Datadog, Linear, Slack, Notion and CircleCI. If you think you tick the boxes then apply …
At Bazaarvoice, we create smart shopping experiences. Through our expansive global network, product-passionate community & enterprise technology, we connect thousands of brands and retailers with billions of consumers. Our solutions enable brands to connect with consumers and collect valuable user …
development and release engineering practices (e.g. TDD, CI/CD) · Significant experience with Apache Spark or any other distributed data programming framework (e.g. Flink, Arrow, MapR) · Significant experience with SQL – comfortable writing efficient SQL · Experience using enterprise scheduling tools (e.g. Apache Airflow, Spring Data Flow, Control-M) · Experience …
certification. Very desirable to have hands-on experience with ETL tools, Hadoop-based technologies (e.g. Spark), and batch/streaming data pipelines (e.g. Beam, Flink, etc.). Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies such as BigQuery, Azure Synapse, Redshift, Oracle, Teradata, and …
Manchester, North West, United Kingdom Hybrid / WFH Options
Searchability NS&D Ltd
components including Data Ingest, Data Stores and REST APIs. THE DEVOPS ENGINEER SHOULD HAVE… You must have an active eDV Clearance. Apache NiFi, Flink, Java, Ansible, Docker, Kubernetes, ELK stack, Linux sys admin for deployed clusters (10s of servers), Jenkins pipeline development, integration/debugging, understanding complex … KEY SKILLS: SOFTWARE DEVELOPER/SOFTWARE ENGINEER/SENIOR SOFTWARE DEVELOPER/SENIOR SOFTWARE ENGINEER/DEVOPS ENGINEER/DEVOPS/APACHE NIFI/FLINK/JAVA/ANSIBLE/DOCKER/KUBERNETES/ELK STACK/TERRAFORM/LINUX/GIT …
a fully remote position; you must be currently based in the UK to be considered. Skills/Technology: Python, Big Data tools (Spark, Hadoop, Flink), Data Pipelines/ETL, Django/Flask …
container technologies · Strong communication and interpersonal skills · Experience managing projects and working with external third-party teams · Ideally experience with Apache Spark or Apache Flink (but not essential). Please note, this role is unable to provide sponsorship. If this role sounds of interest and you think your skills match …
As a leading global animal health company, Elanco delivers innovative products and services to improve the health of pets and farm animals around the world because we believe making animals' lives better, makes life better. Since 1954, we have provided more »
a similar role. > Proficiency in programming languages such as Python, Java, or Scala. > Strong experience with data processing frameworks such as Apache Spark, Apache Flink, or Hadoop. > Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure. > Experience with data warehousing technologies such as Redshift, BigQuery …
AWS Certified Data Analytics Specialty, or AWS Certified Big Data Specialty. Experience with other big data and streaming technologies such as Apache Spark, Apache Flink, or Apache Beam. Knowledge of containerization and orchestration technologies such as Docker and Kubernetes. Experience with data lakes, NoSQL databases, and other data management …
Kubernetes and cloud services. Experience with the Azure stack will be an asset. Experience designing and implementing event-driven/microservices applications using Apache Kafka, Flink, etc. Exposure to model deployment and serving tools like Seldon Core, KServe, etc. Experience with drift detection and adaptation techniques as well as evaluating …
strong experience in Java. Strong experience with data quality standards and contributing to defining and monitoring data quality metrics and KPIs. Key Skills Needed: Flink, Beam, Kafka Connect, Java …
real-time analytics solutions, preferably with experience in time-series databases. • Proficiency in technologies relevant to real-time analytics, such as KX, Kafka, Spark, Flink, and real-time visualization tools. • Demonstrated ability to lead and mentor software engineering teams. • Excellent problem-solving skills and the ability to work collaboratively …
production, providing subject matter expertise on the .NET stack and contributing to technical design discussions. You'll use a range of technology including Apache Flink with Java for large-scale data processing and will be able to assess and recommend new and emerging technologies, using the best tool for …
experience in setting up data platforms and setting standards – not just pipelines. Preferably experience in a distributed data processing environment/framework (e.g. Spark or Flink). Technologies: Java, Kotlin, Python (candidates are not expected to be proficient in all of these, but should be open to learning the others), Kubernetes, Apache Pulsar …