in Go Experience keeping documentation up to date with CI/CD pipelines We mostly use GitHub Actions, but any experience with comparable platforms is great Experience with Kafka and its various extensions is advantageous Perks and Benefits Stock options. 25 days PTO + public holidays. Top-tier private health insurance package. Employee referral scheme. Company-wide events More ❯
cloud platforms (e.g., AWS, Azure, Google Cloud) Working knowledge of Web application APIs, Database (RDBMS, NoSQL), Webservices (REST, SOAP), Microservices, Middleware components like IBM MQ, RabbitMQ, Streaming services like Kafka, Containers (Kubernetes, OpenShift) Positive attitude and outlook, determined, enthusiastic, and resilient. Quick learner within a fast-paced environment. Ability to work independently with agreed remit and autonomy. A strong More ❯
modeling, SQL, NoSQL databases, and data warehousing. Hands-on experience with data pipeline development, ETL processes, and big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google Bi More ❯
and non-technical stakeholders A background in software engineering, MLOps, or data engineering with production ML experience Nice to have: Familiarity with streaming or event-driven ML architectures (e.g. Kafka, Flink, Spark Structured Streaming) Experience working in regulated domains such as insurance, finance, or healthcare Exposure to large language models (LLMs), vector databases, or RAG pipelines Experience building or More ❯
side code adheres to the latest best practices; utilizing Python >3.10, extensive test coverage, local development with Docker and automated packaging and deployment alongside leveraging open-source technologies like Kafka, RabbitMQ, Redis, Cassandra and Zookeeper. By joining our team, you'll have the opportunity to work on a modern tech stack that blends infrastructure (~80%) and application development (~20%) whilst More ❯
functional central projects. Strong background in cloud computing and microservices architecture, preferably with Google Cloud (GCP). Solid understanding of message brokers, event-driven architectures, and asynchronous communication (e.g., Kafka, Pub/Sub, RabbitMQ). Experience designing and documenting APIs, data models, and system integrations using OpenAPI 3.0. Ability to analyze business requirements and translate them into scalable, AI More ❯
end-to-end, deploy to production frequently, and see the real-world impact of what they build. You'll also get to work with a modern tech stack: Kotlin, Kafka, Kubernetes, Docker, AWS, Aurora Postgres and more. We work hybrid, with at least 2 days a week together in our Manchester office. A day in the life: Leading a More ❯
specialist, design and architecture experience - 7+ years of external or internal customer facing, complex and large scale project management experience - 5+ years of database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) experience - 3+ years of cloud-based solution (AWS or equivalent), system, network and operating system experience PREFERRED QUALIFICATIONS - AWS experience preferred, with proficiency in a wide range of More ❯
While not mandatory, experience with these technologies is a significant advantage. Event-Driven Architectures , FinOps and Cost Optimization (Optional): Contribute to the development of event-driven data pipelines using Kafka and schema registries, enabling real-time data insights and responsiveness. Apply FinOps principles and multi-cloud cost optimization techniques to ensure efficient resource utilization and cost control. What You More ❯
3+ years of experience in cloud architecture and implementation - Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience - Experience in database (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) - Experience in consulting, design and implementation of serverless distributed solutions - Experience in software development with object-oriented language PREFERRED QUALIFICATIONS - AWS experience preferred, with proficiency in a wide More ❯
and technologies: React and TypeScript for our frontend Jest for tests SwiftUI for our Driver iOS App Python for our backend code Postgres for data storage Redis for caching Kafka for stream processing AWS, Terraform, GitLab CI/CD, Docker and ECS to deploy and run our services Flutter for our on-board server running Android, which handles concession More ❯
including schema design, indexing, and caching strategies for low-latency services. Experience with market data infrastructure and backtesting frameworks: hands-on building and operating real-time data pipelines (Kafka/Redpanda, ClickHouse/InfluxDB) and authoring production-grade backtesting and simulation systems in Python or Go. Ideal Candidate Profile A creative problem-solver who is eager to More ❯
chargers, calculating ETAs, monitoring traffic and keeping passengers informed. We rely on the following tools and technologies: Python for our backend code Postgres for data storage Redis for caching Kafka for stream processing React for our frontend Clickhouse for analytics SwiftUI for our Driver iOS App AWS, Terraform, GitLab CI/CD, Docker and ECS to deploy and run More ❯
experience with deployment, configuration, and troubleshooting in live production systems. Experience with Messaging Systems: You have experience with distributed systems that use some form of messaging system (e.g. RabbitMQ, Kafka, Pulsar, etc). The role focuses on RabbitMQ and you will have time to acquire deep knowledge in it. Programming Proficiency: You have some proficiency in at least More ❯
hyperparameter tuning, and model versioning. Strong social media data extraction and scraping skills at scale (Twitter v2, Reddit, Discord, Telegram, Scrapy, Playwright). Experience with real-time streaming systems (Kafka, RabbitMQ) and ingesting high-velocity data. Deep data-engineering expertise across Postgres, Redis, InfluxDB, and ClickHouse: schema design, indexing, and caching for sub-second reads. Experience deploying microservices in More ❯
understanding of relational databases (e.g., PostgreSQL). Bonus: Advanced LookML knowledge and experience building data visualisation tools. Skilled in building and managing real-time and batch data pipelines using Kafka and DBT. Familiarity with Docker, Terraform, and Kubernetes for application orchestration and deployment. A strong numerical or technical background, ideally with a degree in mathematics, physics, computer science, engineering More ❯
strengthen an application: Passion for transportation or sustainable technologies Deeper experience with parts of our stack, e.g. Go, TypeScript, React Terraform or other Infrastructure as Code tooling Exposure to Kafka, event-driven architectures, or message queues Familiarity with HashiCorp Vault or other secrets management tooling Deeper knowledge of CI/CD pipelines Experience in a start-up or scale More ❯
Whetstone, Greater London, UK Hybrid / WFH Options
Viasat
cloud native architecture and cloud integration with telco services. Comfortable with virtualisation and container orchestration technology. Experience designing RESTful APIs. Experience with streaming and messaging systems such as gRPC, Kafka and RabbitMQ. Experience designing and interfacing with user portals. Experience with monitoring, telemetry and observability technology and patterns. Understanding of BSS/OSS systems and their integration with network More ❯
Spark and Databricks AWS services (e.g. IAM, S3, Redis, ECS) Shell scripting and related developer tooling CI/CD tools and best practices Streaming and batch data systems (e.g. Kafka, Airflow, RabbitMQ) Additional Information Health + Mental Wellbeing PMI and cash plan healthcare access with Bupa Subsidised counselling and coaching with Self Space Cycle to Work scheme with options More ❯
cloud computing. You have additional nice-to-have experience in the following areas: database engines (Microsoft SQL Server, Aerospike, Vertica, Redis), building microservices, operating systems and cloud, Kubernetes, Kafka, EMR, Spark. A variety of technical opportunities is one of the best things about working at The Trade Desk as a software engineer, which is why we do not More ❯
Testing performance with JMeter or similar tools Web services technology such as REST, JSON or Thrift Testing web applications with Selenium WebDriver Big data technology such as Hadoop, MongoDB, Kafka or SQL Network principles and protocols such as HTTP, TLS and TCP Continuous integration systems such as Jenkins or Bamboo What you can expect: At Global Relay, there's More ❯
to ensure our systems are trusted, reliable and available. The technology underpinning these capabilities includes industry leading data and analytics products such as Snowflake, Tableau, DBT, Talend, Collibra, Kafka/Confluent, Astronomer/Airflow, and Kubernetes. This forms part of a longer-term strategic direction to implement Data Mesh, and with it establish shared platforms that More ❯