impressive visualization (Power BI) · Experience in building large-scale DW/BI systems for B2B SaaS companies · Experience with open-source tools like Apache Flink and AWS tools like S3, Redshift, EMR and RDS · Experience with AI/Machine Learning and Predictive Analytics · Experience in developing global products will …
to take Machine Learning models and implement them as part of a data pipeline · IT platform implementation experience · Experience with one or more relevant tools (Flink, Spark, Sqoop, Flume, Kafka, Amazon Kinesis) · Experience developing software in one or more programming languages (Java, JavaScript, Python, etc.) · Current hands-on implementation …
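The listing above asks for experience implementing machine-learning models as a stage of a data pipeline. As a hedged, plain-Python sketch of that pattern (no real streaming framework or ML library; `toy_model`, `Event`, and both stages are hypothetical stand-ins for a trained model and pipeline operators):

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass
class Event:
    user_id: str
    amount: float
    score: float = 0.0

def score_stage(events: Iterable[Event], model: Callable[[Event], float]) -> Iterator[Event]:
    """Pipeline stage: attach a model score to each event as it flows through."""
    for e in events:
        e.score = model(e)
        yield e

def filter_stage(events: Iterable[Event], threshold: float) -> Iterator[Event]:
    """Downstream stage: keep only events the model flags."""
    return (e for e in events if e.score >= threshold)

# Hypothetical "model": in practice this would be a trained classifier
# loaded from an artifact store; here it is a simple heuristic.
def toy_model(e: Event) -> float:
    return min(e.amount / 100.0, 1.0)

raw = [Event("a", 250.0), Event("b", 30.0), Event("c", 90.0)]
flagged = list(filter_stage(score_stage(raw, toy_model), threshold=0.9))
print([e.user_id for e in flagged])  # ['a', 'c']
```

In a production pipeline the same two stages would typically be operators in a framework such as Flink or Spark; the structure (score, then route on the score) is the same.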
Linux, PowerShell, Shell Scripting, HTML and CSS. • Different types of databases, e.g. Google Cloud Spanner, Firestore, Bigtable and BigQuery, as well as MongoDB, MySQL, Flink, Cassandra, SQL Server, Postgres. Developing schemas to support business logic and data. • Microservices architecture, including containers and serverless implementation, e.g. Kubernetes, Docker, OpenShift, AWS …
certification Highly desirable to have hands-on experience with ETL tools, Hadoop-based technologies (e.g., Spark), and batch/streaming data pipelines (e.g., Beam, Flink, etc.) Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies such as BigQuery, Azure Synapse, Redshift, Oracle, Teradata, and …
Chicago, Illinois, United States Hybrid / WFH Options
Request Technology - Robyn Honquest
Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Experience working with various types of databases: relational, NoSQL, object-based, graph (required) Working knowledge of DevOps tools, e.g. Terraform, Ansible …
Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines, etc. (required …
London, England, United Kingdom Hybrid / WFH Options
Client Server
production, providing subject matter expertise on the .NET stack and contributing to technical design discussions. You'll use a range of technologies including Apache Flink with Java for large-scale data processing and will be able to assess and recommend new and emerging technologies, using the best tool for …
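Flink's core abstraction for large-scale data processing is windowed computation over keyed streams. The following is a plain-Python sketch of a keyed tumbling-window count, the concept only, not the Flink API (which would express this declaratively via `keyBy`/`window`/`aggregate` in Java):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences of each key per window -- the pattern a keyed
    tumbling-window aggregation expresses in a stream processor."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each timestamp maps to exactly one window start; windows never overlap.
        window_start = (ts // window_size) * window_size
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(1, "click"), (3, "view"), (7, "click"), (12, "click"), (14, "view")]
result = tumbling_window_counts(events, window_size=10)
print(result)  # {0: {'click': 2, 'view': 1}, 10: {'click': 1, 'view': 1}}
```

A real Flink job additionally handles out-of-order events via watermarks and keeps this state fault-tolerant across a cluster, which is what makes the framework worth its operational cost at scale.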
environments. Knowledge of data modeling, database design, and query optimization techniques. Experience with real-time data processing, streaming, and analytics technologies (e.g., Kafka, Apache Flink). Familiarity with financial markets, trading systems, and quantitative analysis is a plus. Excellent problem-solving, analytical, and communication skills, with the ability to …
time data streaming platforms such as Kafka/Confluent/Google Cloud Pub/Sub Experience with stream processing frameworks such as Faust/Flink/Kafka Streams or similar Great Python skills Experience mentoring more junior team members Comfortable with ELT pipelines and the full data lifecycle Comfortable …
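The ELT pattern mentioned above loads raw data first and transforms it inside the warehouse. A minimal stdlib sketch, using an in-memory SQLite database as a hypothetical stand-in for the warehouse (table and column names are illustrative):

```python
import sqlite3

# Extract: raw records exactly as they arrive from the source system.
raw_rows = [("2024-01-01", "GBP", "12.50"), ("2024-01-01", "USD", "9.99"),
            ("2024-01-02", "GBP", "3.00")]

con = sqlite3.connect(":memory:")
# Load: land the data untransformed in a staging table (the "L" before the "T").
con.execute("CREATE TABLE staging_sales (day TEXT, currency TEXT, amount TEXT)")
con.executemany("INSERT INTO staging_sales VALUES (?, ?, ?)", raw_rows)

# Transform: derive a typed, aggregated model inside the warehouse itself,
# using the engine's own SQL rather than pre-processing in the pipeline.
con.execute("""
    CREATE TABLE daily_gbp AS
    SELECT day, SUM(CAST(amount AS REAL)) AS total
    FROM staging_sales
    WHERE currency = 'GBP'
    GROUP BY day
""")
totals = con.execute("SELECT day, total FROM daily_gbp ORDER BY day").fetchall()
print(totals)  # [('2024-01-01', 12.5), ('2024-01-02', 3.0)]
```

The design choice ELT makes is to keep the raw staging layer queryable, so transformations can be re-run or revised without re-extracting from the source.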
and ideally hands-on experience with data streaming, event-based architectures and Kafka Strong communication and interpersonal skills Experience with Apache Spark or Apache Flink would be ideal, but not essential Please note, this role is unable to provide sponsorship. If this role sounds of interest and you think …
We are cloud-native, born in AWS, and embrace their services across our platform. Our technology stack includes Python, Django, ECS, Postgres, Kinesis, CloudFront, Flink, Elasticsearch, Lambda, AmazonMQ, and Terraform. Tooling includes Datadog, Linear, Slack, Notion and CircleCI. If you think you tick the boxes then apply …
At Bazaarvoice, we create smart shopping experiences. Through our expansive global network, product-passionate community & enterprise technology, we connect thousands of brands and retailers with billions of consumers. Our solutions enable brands to connect with consumers and collect valuable user …
Greater Bristol Area, United Kingdom Hybrid / WFH Options
Anson McCade
tools and languages, e.g. SQL, Unix, CLI tools, PowerShell, Shell Scripting, HTML and CSS • External and embedded databases, relational and NoSQL, e.g. MongoDB, MySQL, Flink, Cassandra, SQL Server, Postgres. Developing schemas to support business logic and data • Experience in working across production and non-production instances of services Key …
Basic networking skills Solid experience triaging and monitoring complex issues, outages, and incidents Experience with integrating/maintaining various third-party tools like ZooKeeper, Flink, Pinot, Prometheus, and Grafana. Experience with Platform-as-a-Service (PaaS) using Red Hat OpenShift/Kubernetes and Docker containers Experience working on Agile … Certified Developer for Apache Kafka (CCDAK) Practical experience with event-driven applications and at least one event processing framework, such as Kafka Streams, Apache Flink, or ksqlDB. Understanding of Domain-Driven Design (DDD) and experience applying DDD patterns in software development. Experience working with Kafka connectors and/or …
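The event-driven applications and DDD patterns named above share one building block: domain events published to whatever handlers have subscribed. A hedged, in-process Python sketch (a real system would publish to a broker such as Kafka; `EventBus`, the event names, and the handlers are all hypothetical):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process event bus: handlers subscribe to an event type,
    and every published event of that type is dispatched to each of them.
    In an event-driven architecture the bus would be an external broker."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._handlers[event_type]:
            handler(payload)

audit_log = []
bus = EventBus()
# Two independent consumers react to the same domain event without the
# publisher knowing about either -- the decoupling the pattern buys you.
bus.subscribe("order_placed", lambda e: audit_log.append(("audit", e["order_id"])))
bus.subscribe("order_placed", lambda e: audit_log.append(("email", e["order_id"])))
bus.publish("order_placed", {"order_id": 42})
print(audit_log)  # [('audit', 42), ('email', 42)]
```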
expertise in migrating legacy Oracle databases to AWS managed databases and designing data engineering solutions utilising AWS services such as AWS Glue, AWS Managed Flink, S3, AWS RDS, and Lambda functions. Job Description: As an AWS Architect specialising in migration and data engineering, you will play a key role … and other cloud-native data storage solutions. Additionally, you will design and implement ETL processes and data engineering solutions using AWS Glue, AWS Managed Flink, S3, AWS RDS, and Lambda functions to ensure efficient data processing and analytics capabilities in the cloud environment. Key Responsibilities: Lead the design and … ETL processes using AWS Glue to extract, transform, and load data from various sources into AWS Managed RDS, Document DBs, APIs. Utilise AWS Managed Flink for real-time data processing and analytics, ensuring high performance and scalability. Design and implement data engineering solutions using AWS S3, AWS RDS, and …
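The ETL responsibilities above (extract from a source, transform, then load into RDS) can be sketched in plain Python. This is not the AWS Glue API; the CSV string stands in for an S3 object and the in-memory SQLite database stands in for RDS, and all names here are illustrative:

```python
import csv
import io
import sqlite3

# Extract: read source records (a CSV string standing in for an S3 object).
source_csv = "id,amount\n1,10.0\n2,not_a_number\n3,5.5\n"
rows = list(csv.DictReader(io.StringIO(source_csv)))

# Transform: validate and convert types BEFORE loading -- the defining
# difference from ELT, where raw data lands in the warehouse first.
clean = []
for r in rows:
    try:
        clean.append((int(r["id"]), float(r["amount"])))
    except ValueError:
        continue  # a real Glue job would route bad records to a dead-letter path

# Load: write only conformed rows to the target database (stand-in for RDS).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)", clean)
summary = con.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(summary)  # (2, 15.5)
```

In Glue itself the same three steps would typically be a DynamicFrame read, a mapping/filter transform, and a JDBC write, with the malformed second record diverted rather than silently dropped.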