Relevant degree in a numeric discipline, or equivalent work experience
• Excellent written and spoken English
Added bonus if you have:
• Oracle experience
• Experience of event-driven distributed messaging (e.g. Kafka)
• Experience of financial markets and the trade lifecycle
• C# and any GUI development experience
If you believe you have the experience required, please apply with your CV now
Experience with the Git source control system
Position Desired Skills:
• Experience with the Atlassian Tool Suite (JIRA, Confluence)
• Experience with log management tools, including syslog-ng and rsyslog
• Experience with the Kafka messaging framework
Active Clearance Required: TS/SCI with Full Scope Polygraph
Pay: $97.00 to $111.00 per hour, depending on PTO selection, plus a generous sign-on bonus and
in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
• Experience with big data tools: Hadoop, Spark, Kafka, etc.
• Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
• Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
• Experience with AWS cloud services
in ML, data science and MLOps.
Nice-to-Have:
• Built agentic workflows/LLM tool use
• Experience with MLflow, WandB, Langfuse, or other MLOps tools
• Experience with Airflow, Spark, Kafka, or similar
Why Plexe? Hard problems: we're automating the entire ML/AI lifecycle, from data engineering to insights. High ownership: the first 5 engineers write the culture as
Newcastle Upon Tyne, United Kingdom Hybrid / WFH Options
Accenture
coding standards, Test Driven Development, Continuous Integration, and Continuous Testing. Help integrate application development across various full-stack technologies, including Power Platform, messaging services, RabbitMQ, APIs, Kafka live streaming, or similar. Manage the integration of applications with high-availability analytics and business intelligence products, ALM tooling, and application monitoring systems. Implement standards and best practices, mentor
and services
5+ years of overall software engineering experience
Experience with a tech stack including:
• Languages: Python, Golang
• Platform: AWS
• Frameworks: Django, Spark
• Storage/Data Pipelines: Postgres, Redis, Elasticsearch, Kafka, Parquet
Nice to Have:
• Prior exposure to production machine learning systems.
complete for each implementation, ensuring its traceability and maintainability.
We value it if you have:
• Experience with container and orchestration technologies such as Docker and Kubernetes
• Knowledge of messaging systems such as Kafka, RabbitMQ, or similar
• Experience in banking or highly regulated environments
• Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack)
• Knowledge of Agile methodologies
backend development and React for frontend. Solid understanding of PostgreSQL and database optimization. Proficiency in building and consuming APIs (REST). Familiarity with asynchronous processing and message queues (RabbitMQ, Kafka). Bonus: experience with third-party API integrations, marketplace platforms, or travel technology. You value clean code, reusable components, documentation, and cross-team collaboration. WHY JOIN TICKETDOOR? A collaborative
City of London, London, United Kingdom Hybrid / WFH Options
Adecco
a numeric discipline or equivalent work experience. Excellent written and spoken English skills.
Added Bonus if You Have:
• Experience with Oracle
• Familiarity with event-driven distributed messaging systems (e.g., Kafka)
• Knowledge of financial markets and the trade lifecycle
• Experience with C# and any GUI development
Why Join Us? In addition to a competitive daily rate, our client offers
Work With
This business doesn’t do “just one stack”. You’ll be expected to work across a broad tech landscape:
• Big Data & Distributed Systems: HDFS, Hadoop, Spark, Kafka
• Cloud: Azure or AWS
• Programming: Python, Java, Scala, PySpark – you’ll need two or more, Python preferred
• Data Engineering Tools: Azure Data Factory, Databricks, Delta Lake, Azure Data Lake
• Create and maintain Forms, Reports, Views, Workflows, Groups and Roles
• Create, maintain and enhance Dashboards and reporting, including scheduled reports
• Create and configure tool integrations with ServiceNow (Elastic, Netcool, Kafka, etc.)
• Coordinate and support application and platform upgrades
• Assist with design, creation and cataloging of business process flows
• Work with other members of the ServiceNow development team to propose
staff by paying very strong salaries, and working hard to ensure each Engineer is doing work that aligns with their career interests.
The Role: Software development with Python, ETL, NiFi, Kafka, Logstash, and SQL; build modular systems and new tech-stack integrations.
Required Skills:
• Demonstrated experience identifying and validating requirements for Extract, Transform, and Load (ETL) systems
• Demonstrated experience developing software
• Demonstrated experience
CI/CD methodologies and tools, including GitLab CI. Proficiency in the Git source control system.
Nice to Have:
• Experience with the Atlassian Tool Suite (JIRA, Confluence)
• Familiarity with the Kafka messaging framework
About us: We are an experienced advanced analytic development company providing Cyber solutions to current and emerging missions. Our Core Values of Honesty, Integrity, Loyalty, Fairness, Respect
data sources, processing, or technologies. Enhancements shall improve functionality, efficiency, speed, and automation, and facilitate conversion to the Customer cloud environment.
Qualifications:
• Experience with React, Node.js/Express.js, PostgreSQL, Java, Kafka, and Kubernetes
• Experience using RTI DDS a plus
• Experience working in Agile
• TS/SCI preferred; Secret required. Will require an upgrade to TS/SCI. May have the option to
practices, and tools
• Experience with software frameworks used for searching, monitoring, and analyzing big data, such as Elastic Stack, Splunk, and Prometheus
• Experience developing with messaging frameworks such as Kafka, JMS, RabbitMQ, and ActiveMQ
Desired Skills & Abilities:
• Experience with SQL technologies such as MySQL, MariaDB, and PostgreSQL
• Experience with key/value and time-series databases such as OpenTSDB
• Experience
delivery, authentication/authorisation, telemetry/observability/monitoring. A working understanding of messaging in event-driven systems, which implies some experience using tools such as NATS, RabbitMQ, or Kafka. Some experience, professional or otherwise, developing applications for Kubernetes. CKAD is a plus, but by no means a requirement. What we offer: Be part of something larger
Looking For
Core Skills:
• Containers & Orchestration: Strong expertise in container security and Kubernetes (multi-cluster/global deployment is a plus)
• Distributed Systems & Messaging: Knowledge of clusters, storage, Kafka, and Aeron, with experience in multicast or HPC
• Automation & IaC: Proficiency in Python, Golang, or Rust, with experience in IaC tools and immutable infrastructure
• Continuous Delivery & Config Management: Familiarity with
including Git, BitBucket, Confluence, JIRA, etc. Some other highly valued skills may include:
• Ability to comprehend, implement, and influence complex problems and solutions with an analytical approach
• Familiarity with Kafka and working knowledge of UNIX platforms
• Familiarity with utilising Agile Development methodologies, Test Driven Development and Continuous Delivery
This role will be based in our London office. Purpose of
or OLAP stores
• Deep Ruby/Rails & ActiveRecord expertise
• Exposure to ClickHouse/Redshift/BigQuery
• Hands-on MySQL tuning (indexes, partitioning, query plans)
• Event-driven or stream processing (Kafka, Kinesis)
• Proven record scaling background-job frameworks (Sidekiq, Resque, Celery, etc.)
• Familiarity with data-viz pipelines (we use Highcharts.js)
• AWS production experience (EC2, RDS, IAM, VPC)
• Contributions to OSS