InsurTech with cutting-edge machine learning and AI? At Simply Business, we're not just using data; we're continuously evolving our platform with technologies like AWS, Snowflake, and Kafka to drive real value and inform company strategy. As a leading player in the market, our mission is to remain at the forefront of data engineering, ML, and AI … continuously evolving our class-leading data and ML platform infrastructure, balancing maintenance with exciting greenfield projects. Develop and maintain our real-time model serving infrastructure, utilising technologies such as Kafka, Python, Docker, Apache Flink, Airflow, and Databricks. Actively assist in model development and debugging using tools like PyTorch, scikit-learn, MLflow, and Pandas, working with models from gradient boosting …
skills in Python or another major language; writing clean, testable, production-grade ETL code at scale. Modern Data Pipelines: Experience with batch and streaming frameworks (e.g., Apache Spark, Flink, Kafka Streams, Beam), including orchestration via Airflow, Prefect or Dagster. Data Modeling & Schema Management: Demonstrated expertise in designing, evolving, and documenting schemas (OLAP/OLTP, dimensional, star/snowflake, CDC … data contracts, and data cataloguing. API & Integration Fluency: Building data ingestion from REST/gRPC APIs, file drops, message queues (SQS, Kafka), and third-party SaaS integrations, with idempotency and error handling. Storage & Query Engines: Strong with RDBMS (PostgreSQL, MySQL), NoSQL (DynamoDB, Cassandra), data lakes (Parquet, ORC), and warehouse paradigms. Observability & Quality: Deep familiarity with metrics, logging, tracing, and …
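The idempotency and error handling called for above can be illustrated with a minimal sketch. The function names, the JSON payload shape, and the `id` business key are assumptions for illustration: duplicate deliveries from a queue such as SQS or Kafka become safe no-ops once a unique key has been recorded, and malformed records are collected for a dead-letter path instead of failing the whole batch.

```python
import json

def ingest(records, processed_ids, sink):
    """Idempotent ingestion: skip records whose id was already processed,
    append the rest to the sink, and collect failures for a dead-letter path."""
    failures = []
    for raw in records:
        try:
            rec = json.loads(raw)
            rid = rec["id"]          # assumed unique business key
        except (ValueError, KeyError) as exc:
            failures.append((raw, str(exc)))   # dead-letter candidates
            continue
        if rid in processed_ids:     # duplicate delivery -> no-op
            continue
        sink.append(rec)
        processed_ids.add(rid)
    return failures
```

Replaying the same batch leaves the sink unchanged, which is what makes at-least-once delivery from a queue tolerable downstream.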
Database Management System (RDBMS) Web app server Familiar (experience preferred) with open-source web application servers such as Apache HTTP Server or Tomcat Middle tier - messaging systems Experience using the following: Kafka - or any Java-based message broker ActiveMQ - product we are using that is a Java-based message broker, or similar Experience in development and usage with RESTful web services … in Relational Database Management. 10 Years' Experience with Angular 3+ front-end web application platform and PrimeNG UI components or similar web framework. Experience with middle-tier messaging systems, Kafka, ActiveMQ or similar, and development and usage with RESTful web services Active Top Secret or TS/SCI security clearance. Need to have Current Poly or consent to …
Build scalable services with Kotlin, and deal with problems like synchronisation, asynchronous operations, database optimisations, scalability and reliability of systems Gain exposure to an array of technologies such as Kafka, PostgreSQL, Redis, Docker, etc. Optimise existing systems for scalability, extensibility and performance whilst building out reusable, modular code for use across Blockchain's products Ensure security is at the … strong technical documentation and effective monitoring You inspire other engineers to do better Understanding of data structures, databases and large-scale distributed systems Preferably exposure to technologies such as Kafka, PostgreSQL, Redis, RabbitMQ You are customer focused and continuously suggest how the backend can provide the best Customer Experience A passion for crypto and the transformations it enables We … use Kotlin, PostgreSQL, Kafka, Redis, Datadog, Amplitude, Grafana, BigQuery, Apache Spark and more COMPENSATION & PERKS Unlimited vacation policy; work hard and take time when you need it Unlimited learning policy; order the technical resources you need or simply pick something up from our company library Apple equipment Full-time salary based on experience and meaningful equity in an industry-leading …
Inside IR35 Support standing up multiple environments on AWS Support the management of the AWS stack/Git pipelines across a mix of React front-end, microservices, Lambda functions, Kafka integration, possible mix of Transit Gateway/PrivateLink, use of Kong EE Skills required: REQUIRED/NON-NEGOTIABLE: - Full AWS stack (inc. Lambda, SQS, SNS) - IAM management for … pipelines and users - Terraform or CloudFormation - CloudWatch - Kubernetes NICE TO HAVE: - Kafka - Kong EE - Micro-UI patterns (JWT tokenisation and passthrough …
database systems to the next level. We're looking for a highly technical, hands-on engineer, who loves to work with data plane services like Cassandra, Elasticsearch/OpenSearch, Kafka, Redis, Valkey, MySQL, PostgreSQL and is comfortable building automation around large-scale cloud-based critical systems. We'll be looking at candidate CVs with an eye on achievement what … can do for us in the future. Focus area: OpenSearch and Elasticsearch What You'll Do: Maintain a deep understanding of the data components - including Cassandra, Elasticsearch/OpenSearch, Kafka, ZooKeeper, MySQL and PostgreSQL, Redis, Valkey, Memcache, Pulsar, SQS and use that understanding to operate and automate properly configured clusters. Understanding of operating databases in Kubernetes and managing container … data safe, secure, and available. What You'll Need: Configuration management (Chef) Scripting in Python and bash Experience with large-scale datastores using technologies like Cassandra, Elasticsearch/OpenSearch, Kafka, ZooKeeper, MySQL, PostgreSQL, Redis, Valkey, Memcache, Pulsar, SQS Experience with large-scale, business-critical Linux environments Experience operating within the cloud, preferably Amazon Web Services, GCP and OCI Proven …
Currently, our microservices communicate via REST API calls, fostering seamless integration between different services. You will actively participate in our ongoing transition towards an event-driven architecture, utilising Kafka as a core component. We are proud to say that our engineers at ClearScore are world-class and at the heart of making this mission a reality for our … millions of users. The Typelevel stack (Cats, Cats Effect, http4s, Circe), Kafka, SBT and occasionally Akka HTTP A world-class SRE team who champion our "you build it you run it" principle, empowering our developers to work with AWS, Kubernetes and Spinnaker A Quality Assistance programme to build, test, release and monitor your own work TDD and peer-reviewing …
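The REST-to-event-driven transition described above can be sketched in miniature. This is an illustrative stand-in, assuming an in-memory topic in place of a real Kafka cluster; a production service would use an actual Kafka client, and handlers would run in separate consumer processes rather than synchronously.

```python
import json
from collections import defaultdict

# In-memory stand-ins for Kafka topics and consumer groups; a real
# service would use a Kafka producer/consumer client instead.
topics = defaultdict(list)
subscribers = defaultdict(list)

def subscribe(topic, handler):
    """Register a downstream handler for a topic."""
    subscribers[topic].append(handler)

def publish(topic, event):
    """Serialise the event (JSON here; Avro/Protobuf are common) and
    deliver it to every subscriber of the topic."""
    payload = json.dumps(event)
    topics[topic].append(payload)
    for handler in subscribers[topic]:
        handler(json.loads(payload))
```

The design point is decoupling: unlike a REST call, the publisher does not know which services consume the event, so new consumers can be added without touching the producer.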
strong commitment to user-centred design and agile delivery, and more to deliver innovative digital services that matter Preferred Tech Stack Expertise Cloud Infrastructure: AWS (EKS, RDS, Aurora, ElastiCache, Kafka, IAM) Secure Hosting: Experience working with air-gapped or government-secure environments Secrets & Identity Management: HashiCorp Vault, Keycloak Automation: IaC, pipeline build automation, event relay tooling Scripting: Bash, Python … on-call support. Ensure all services are compliant with security standards and support the change and release governance model. Build and maintain infrastructure components like event streaming (Kafka), databases (Aurora, RDS, Redis), identity management (Keycloak), and caching layers. Enhance and maintain CI/CD tooling and self-service developer pipelines for tenant teams. Proactively manage and resolve …
State/Region/Province London Country United Kingdom Domain Delivery Interest Group Company ITL UK Requisition ID 137810BR Role - Senior Technology Architect Technology - Azure Data Factory, ADLS, Snowflake, Kafka, Power BI, Confluent Cloud, MS Fabric, Python/PySpark, MSBI (SSIS) Location - London Business Unit - DNA Compensation - Competitive (including bonus) Job Description We are looking for a Lead Data … client satisfaction. Conduct product demos, proof-of-concept workshops, and prepare effort estimates aligned with client budgets. … expertise in Azure Data Factory, ADLS, Snowflake Mandatory: Strong hands-on experience in at least one of these technologies. Preferred: Experience in more than one is a strong advantage. Kafka, Power BI, Confluent Cloud, Microsoft Fabric Mandatory: Strong hands-on experience in at least one of these technologies. Preferred: Experience in multiple is a plus. Python/PySpark, MSBI …
written in Dart with Flutter and available on both Android and iOS Our NALA for Business product is web only and written in React and TypeScript. We use Postgres, Kafka, Redis and Vault We use and leverage AWS as much as possible and we manage it with Terraform We write unit and integration tests, do code reviews and deploy … least 5+ years of experience building highly reliable and scalable backend services in Go Experience with RDBMSs such as Postgres, MySQL etc. Experience with message-broker technologies such as Kafka, RabbitMQ etc., working within event-driven architectures. You have excellent knowledge of the best practices in designing, developing and deploying those services in a cloud environment You have experience …
functions Work with sophisticated analytics tools that are custom built by our team of developers Use the latest technology stacks such as AWS, Java 17, Python 3, HDF5, Kubernetes, Kafka and Argo Is globally distributed across Europe and North America supporting global systems. Our team cares deeply about preserving a respectful and diverse team culture. We believe in choosing … software (Git or SVN) Experience with cloud computing, AWS preferred SQL experience including queries/updates/table creation/basic database maintenance Exposure to data technologies such as Kafka, Spark or Delta Lake is useful but not mandatory. For more information about DRW's processing activities and our use of job applicants' data, please view our Privacy Notice …
/Elixir code Polyglot - All our services are built in Ruby, Elixir, GraphQL federation or TypeScript, depending on which language best suits the solution Messaging - For communication, we use Kafka for events and gRPC or JSON for synchronous calls. Kubernetes - All our services run in Kubernetes. Migration - We are in the process of switching away from our Ruby monolith … or TypeScript Distributed Systems - You understand how to build, deploy and maintain a globally distributed system. Event-driven architecture - Knowledge of event-driven systems and tools/protocols like Kafka and gRPC will be a plus. Experience - Have experience (3+ years) working on internal product engineering teams, developer tools, developer productivity or infrastructure products at scale. Adaptable - Are a …
data processing and reporting. In this role, you will own the reliability, performance, and operational excellence of our real-time and batch data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll act as the first line of defense for data-related incidents, rapidly diagnose root causes, and implement resilient solutions that keep critical reporting systems up … call escalation for data pipeline incidents, including real-time stream failures and batch job errors. Rapidly analyze logs, metrics, and trace data to pinpoint failure points across AWS, Flink, Kafka, and Python layers. Lead post-incident reviews: identify root causes, document findings, and drive corrective actions to closure. Reliability & Monitoring Design, implement, and maintain robust observability for data pipelines … Architecture & Automation Collaborate with data engineering and product teams to architect scalable, fault-tolerant pipelines using AWS services (e.g., Step Functions, EMR, Lambda, Redshift) integrated with Apache Flink and Kafka. Troubleshoot and maintain Python-based applications. Harden CI/CD for data jobs: implement automated testing of data schemas, versioned Flink jobs, and migration scripts. Performance Optimization Profile and …
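The automated schema testing mentioned above can be sketched as a simple contract check. The column-to-type mapping is a hypothetical stand-in for a real schema registry or Avro/Glue validation:

```python
def validate_schema(record, schema):
    """Return a list of violations for one record against a simple
    column -> Python type contract."""
    errors = []
    for col, typ in schema.items():
        if col not in record:
            errors.append(f"missing column: {col}")
        elif not isinstance(record[col], typ):
            errors.append(
                f"{col}: expected {typ.__name__}, got {type(record[col]).__name__}"
            )
    return errors
```

Run in CI against sample payloads, a check like this catches breaking schema changes before a Flink job meets them in production.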
person insight into their missions, workflows, and perspectives, then utilize that knowledge to inform the platform's design. Core technical tasks include: REST API development in Java, working within Kafka Streams to process and transform data, and general Java development to build and maintain the product. Responsibilities: Contribute to the development of enterprise-grade software solutions. Build and maintain … proficient with the project's graph database and develop complex database queries. Required Skills: Experience using Java to build enterprise products and applications. Knowledge of streaming analytic platforms like Kafka, RabbitMQ, Spark, etc. Familiarity with Extract, Transform, Load (ETL) software patterns to ingest large and complex datasets. Familiarity with Git and GitLab CI/CD. Understanding of common Enterprise … Integration Patterns (EIP) and how to apply them. Desired Skills: Experience with graph databases such as Neo4j. Experience building real-time data processing applications using streaming libraries like Kafka Streams. Experience modeling data and relationships in graph databases. Experience with networking concepts, protocols, and analysis (routers, switches, etc.). Knowledge of SIGINT collection and analysis systems. Experience with production More ❯
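The Extract, Transform, Load pattern named above, in minimal form. The record fields and sources are invented for illustration; a real pipeline would extract from Kafka topics or file drops and load into a database rather than a list:

```python
def extract(raw_lines):
    """Extract: parse raw comma-separated lines into dicts."""
    for line in raw_lines:
        name, value = line.strip().split(",")
        yield {"name": name, "value": value}

def transform(records):
    """Transform: normalise names and cast values to the target types."""
    for rec in records:
        yield {"name": rec["name"].lower(), "value": int(rec["value"])}

def load(records, sink):
    """Load: write records to the target store (a list stands in for a DB)."""
    for rec in records:
        sink.append(rec)

def run_etl(raw_lines, sink):
    """Chain the three stages lazily so large inputs stream through."""
    load(transform(extract(raw_lines)), sink)
```

Because the stages are generators, records flow through one at a time, which is the property that lets the same pattern scale to large and complex datasets.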
time advertising bidding system handles over 3,000,000 requests per second and stores several terabytes of data every day. Our technologies include Go, Ruby on Rails, Aerospike, Redis, Elasticsearch, Kafka, RocksDB, Redshift, ScyllaDB, GraphQL and others. We're not afraid to test and try new technologies. Watch our talk at Amazon Tech Talks: StackAdapt is a Remote First company … services primarily written in Go Working with large data sets and various databases including Aerospike, Elasticsearch, Redis, ScyllaDB, Redshift, TiDB, MariaDB Build software that utilizes message queues such as Kafka, SQS, and Kinesis Write performance-efficient and memory-optimized code We'll be reaching out to candidates that have: 5+ years of experience as a Backend Software Engineer. Very …
Role: Data Scientist Clearance: TS/SCI Clearance Location: (Washington DC/Northern Virginia) Salary: $160k-$200k + Shares and bonus My client is an innovative company delivering cutting-edge AI and data analytics solutions through an advanced AI platform. More ❯
The successful candidate will have extensive experience working on complex systems within highly regulated environments. Needs to display hands-on skills in multi-threaded Java, sharded MongoDB, Kafka and the ability to do the heavy lifting for the system to provide the scale and performance. By applying to this job you are sending us your CV, which may …
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
trusted partner across a wide range of businesses. In this role you'll take ownership of the reliability and performance of large-scale data pipelines built on AWS, Apache Flink, Kafka, and Python. You'll play a key role in diagnosing incidents, optimising system behaviour, and ensuring reporting data is delivered on time and without failure. The ideal candidate will have strong experience working with streaming and batch data systems, a solid understanding of monitoring and observability, and hands-on experience working with AWS, Apache Flink, Kafka, and Python. This is a fantastic opportunity to step into an SRE role focused on data reliability in a modern cloud-native environment, with full ownership of incident management, architecture, and performance. The Role: *Maintaining and monitoring real-time and batch data pipelines using Flink, Kafka, Python, and AWS *Act as an escalation point for critical data incidents and lead root cause analysis *Optimising system performance, define SLIs/SLOs, and drive reliability *Working closely with various other departments and teams to architect scalable, fault-tolerant data solutions The Person: *Experience in …
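Defining SLIs/SLOs, as the role above calls for, comes down to simple arithmetic over success and total counts for requests or pipeline runs; a minimal sketch, where the 99.9% target is an assumed example rather than anything from the listing:

```python
def availability_sli(successes, total):
    """Fraction of successful pipeline runs (or requests); 1.0 when idle."""
    return 1.0 if total == 0 else successes / total

def slo_breached(successes, total, slo=0.999):
    """True when the measured SLI falls below the SLO target."""
    return availability_sli(successes, total) < slo
```

In practice the same comparison runs over a rolling window in a monitoring system, and the gap between the SLI and the SLO target is the error budget that decides when reliability work takes priority over features.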
mission and who want to share this journey with us. About the Role We are looking for a Customer Success Engineer who is eager to deep dive into Apache Kafka and apply their expertise to support our customers in solving their most challenging problems. Our control panel is packed with intriguing features, as well as advanced technology that reimagines … Apache Kafka systems. Through cultivating a deep understanding of our products, you'll strategise on how to best support our customers to provide them with the most value. What will you be doing? Onboarding new enterprise customers with Conduktor's range of products Working closely with customers to train and enable them to use Conduktor products Providing premier support … articles Advocating on the customer's behalf and providing feedback to our Product/Engineering team to improve our solutions and product roadmap Continuously learning about our products, Apache Kafka, and the data streaming ecosystem to update our teams and help our customers What experience are we looking for? You have always been deeply interested in technology You're …
South West London, London, United Kingdom Hybrid / WFH Options
Anson Mccade
drive architectural best practices and deliver high-quality data solutions. This is an opportunity to own technical delivery, influence client architecture, and work with cutting-edge technologies such as Kafka, Databricks, Unity Catalog, and cloud platforms like AWS, Azure, and GCP. Key Skills of the Lead Data Solution Architect: Proven experience as a Lead Data Solutions Architect, leading … end-to-end delivery of complex data solutions. Extensive expertise in data architecture design, including event-driven architectures (Kafka) and Data Lake/Lakehouse platforms. Hands-on experience with cloud platforms like AWS, Azure, GCP, or Snowflake. Strong knowledge of data governance, compliance, and security standards (GDPR, CCPA). Proficiency in big data technologies like Apache Spark and …
SR2 | Socially Responsible Recruitment | Certified B Corporation™
Software Engineer (Python) 3-year programme | Inside IR35 | Hybrid Python | C++ | Urban Digital Twins | Model Optimisation | Simulation Engineering | Kafka | Production ML SR2 is working with a global consultancy on a ground-breaking urban digital twins project for a major city modernising its infrastructure. With significant investment backing, this programme is looking at how to optimise everything from foot traffic … twin platform to simulate and forecast city infrastructure outcomes Collaborate across simulation, data, and software teams to turn prototypes into production-ready solutions (Bonus) Integrate streaming data pipelines using Kafka to support real-time modelling Experience: Strong commercial experience in Python engineering Exposure to C++, especially in simulation, modelling, or high-performance systems Proven track record working closely with … data scientists to bring models into production Background in simulation-heavy domains (e.g. finance, oil & gas, energy, transport) Experience with Kafka or distributed messaging systems is highly desirable Systems thinker, interested in how predictive models drive real-world infrastructure impact The Details: Inside IR35 £600-650 p/d 2 days per week in central London Start: ASAP …
development is supported, and technical impact is real. The Role: Manage and optimise AWS and Kubernetes (EKS) infrastructure Implement resilience strategies and conduct chaos engineering experiments Monitor and maintain Kafka clusters for performance and reliability Respond to and resolve application-level production incidents The Person: 5+ years in SRE, DevOps, or infrastructure engineering Strong experience with AWS, EKS/… Kubernetes, and Terraform Familiar with Kafka and observability tools like Datadog or Grafana Able to troubleshoot issues across infrastructure and application layers Reference number: BBBH259300 To apply for this role or to be considered for further roles, please click "Apply Now" or contact Tommy Williams at Rise Technical Recruitment. Rise Technical Recruitment Ltd acts as an employment agency for …