London (City of London), South East England, United Kingdom
Humand Talent
bring: Solid understanding of event-driven architecture and CQRS. Experience working on high-availability, high-throughput platforms (ideally OLTP). Strong working knowledge of AWS services such as Aurora, MSK (Kafka), ECS, and EMR. A track record of building efficient, scalable database queries in MySQL and working with multiple data paradigms (RDBMS, document, KV stores). Proficiency with DevOps tools, Docker More ❯
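The event-driven/CQRS pattern this listing asks for can be sketched in a few lines: commands append events to a store, and a separately maintained read model serves queries. This is a hypothetical, in-memory illustration of the pattern only; all names (`EventStore`, `AccountCredited`, etc.) are invented for the example, and a real system would use Kafka/MSK and a database.

```python
from dataclasses import dataclass

# Minimal CQRS sketch: the command side appends events,
# the query side maintains a denormalised read model.

@dataclass
class AccountCredited:
    account_id: str
    amount: int

class EventStore:
    def __init__(self):
        self.events = []
        self.subscribers = []

    def append(self, event):
        self.events.append(event)
        for handler in self.subscribers:
            handler(event)  # fan out to read-model projections

# Command side: validates input and emits events; never serves reads.
class CreditAccountHandler:
    def __init__(self, store: EventStore):
        self.store = store

    def handle(self, account_id: str, amount: int):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.store.append(AccountCredited(account_id, amount))

# Query side: a read model rebuilt incrementally from the event stream.
class BalanceReadModel:
    def __init__(self, store: EventStore):
        self.balances = {}
        store.subscribers.append(self.apply)

    def apply(self, event):
        if isinstance(event, AccountCredited):
            self.balances[event.account_id] = (
                self.balances.get(event.account_id, 0) + event.amount
            )

store = EventStore()
read_model = BalanceReadModel(store)
CreditAccountHandler(store).handle("acc-1", 100)
CreditAccountHandler(store).handle("acc-1", 50)
print(read_model.balances["acc-1"])  # → 150
```

The key property is the separation: writes never query the read model, and reads never touch the command path, which is what lets each side scale independently on a high-throughput platform.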
City of London, London, United Kingdom Hybrid / WFH Options
Fintellect Recruitment
environment where adaptability and teamwork are key. Qualifications & Experience: Proficiency in programming languages such as Python and SQL. Experience with data integration tools and technologies (e.g. DLT, Meltano, APIs, Kafka). Knowledge of cloud platforms such as AWS. Familiarity with modern data platforms (e.g. Databricks, Snowflake, or BigQuery). Strong communication and collaboration skills. Proven experience in data engineering More ❯
in one of Python, Scala or Java, with strong experience in Big Data technologies such as Spark and Hadoop. Practical knowledge of building real-time event streaming pipelines (e.g., Kafka, Spark Streaming, Kinesis). Proven experience developing modern data architectures including Data Lakehouse and Data Warehousing. A solid understanding of CI/CD practices, DevOps tooling, and data governance More ❯
practices Skilled in Linux systems, networking, and containerised deployments (Docker) Experience with scripting (Python, Bash) and automation workflows Familiarity with data pipeline tools (e.g. Airflow, dbt, Glue, Kinesis, or Kafka) Understanding of observability: logging, alerting, and metrics (CloudWatch, Grafana, etc.) Nice to Have: Background in IoT, utilities, or environmental data platforms Experience with LPWAN technologies (e.g. LoRaWAN, NB-IoT More ❯
skills Ability to lead, influence, and thrive in a hybrid work culture Self-motivated, proactive, and reliable Values empathy, authenticity, curiosity, and courage Our Technologies & Tools Java, Groovy, Grails, Kafka Postgres, Redis, MS SQL AWS, Kubernetes Jira, Dynatrace GitLab, Cursor Our Recruiting Process We try to make our process as simple as possible while still giving us opportunities to More ❯
database design, optimization and query building. Familiarity with Python. Basic understanding of service-oriented architecture (SOA) and event-driven architecture, including some exposure to message brokers (e.g., NATS, RabbitMQ, Kafka). Designing and developing RESTful APIs, with an understanding of basic API security and authentication mechanisms. Version control systems (e.g., Git) and some understanding of continuous integration/continuous deployment More ❯
understanding of relational databases (e.g., PostgreSQL). Bonus: Advanced LookML knowledge and experience building data visualisation tools. Skilled in building and managing real-time and batch data pipelines using Kafka and dbt. Familiarity with Docker, Terraform, and Kubernetes for application orchestration and deployment. A strong numerical or technical background, ideally with a degree in mathematics, physics, computer science, engineering More ❯
technical stakeholders Technical skills (a big plus): Knowledge of deep learning frameworks (PyTorch, TensorFlow), transformers, or LLMs Familiarity with MLOps tools (MLflow, SageMaker, Airflow, etc.) Experience with streaming data (Kafka, Kinesis) and distributed computing (Spark, Dask) Skills in data visualization apps (Streamlit, Dash) and dashboarding (Tableau, Looker) Domain experience in forecasting, optimisation, or geospatial analytics We would like to More ❯
monitoring, curation, release management, data quality monitoring, and user access control. Build and support components to manage ingest from and monitor distributed next-generation sequencing devices, including device telemetry (Kafka, MQTT streaming from Oxford Nanopore NGS devices). Design, develop, and maintain platform tools to help bioinformaticians and science teams discover, understand, and access data (e.g., pathogen data catalogue … and associated APIs meet the needs of platform tools. Work with product managers to capture requirements, wireframe solutions, and design user experiences. Work with big data technologies such as Kafka, Iceberg, and Parquet, and managed database technologies including PostgreSQL and Oracle vector databases. Ensure applications are secure. Operate, monitor, and maintain associated Oracle Cloud infrastructure to ensure platform tools … services. Knowledge of both relational SQL and NoSQL database systems. Skills in UX design (e.g., Figma). Knowledge of containerization using Kubernetes and Docker. Experience working with streaming technologies (Kafka). Proven experience developing scalable architectures using both containerized and serverless approaches (e.g., Oracle Functions or AWS Lambda). Familiarity with version management and best practices (Git). Experience More ❯
London (City of London), South East England, United Kingdom
Vallum Associates
in Python programming, particularly using data manipulation and processing libraries such as Pandas, NumPy, and Apache Spark. Hands-on experience with open-source data frameworks like Apache Spark, Apache Kafka, and Apache Airflow. Strong proficiency in SQL, including advanced query development and performance tuning. Good understanding of distributed computing principles and big data ecosystems. Familiar with version control tools More ❯
London (City of London), South East England, United Kingdom
Vallum Associates
Hands-on experience building and maintaining data ingestion pipelines. Proven track record of optimising queries, code, and system performance. Experience with open-source data processing frameworks (Apache Spark, Apache Kafka, Apache Airflow). Knowledge of distributed computing concepts and big data technologies. Experience with version control systems (Git) and CI/CD practices. Experience with relational databases (PostgreSQL, MySQL or More ❯
is mandatory Responsibilities Develop backend applications built on the principles of event-driven microservices architecture. Required skills Python AWS - SNS/SQS Lambda Step Functions ECS Spinnaker Kubernetes Kafka Terraform ORM frameworks Nice to have PySpark and Databricks experience is a plus. Knowledge and experience in the JPMorgan ecosystem/tools will carry higher value. More ❯
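The SNS → SQS → Lambda combination in this stack typically looks like a handler that unwraps the SNS envelope from each SQS record and processes it idempotently, since SQS delivery is at-least-once. This is a hypothetical local sketch of that shape; the payload structure mirrors the standard AWS event format, but the message fields (`id`, `payload`) and the in-memory dedup set are invented for illustration.

```python
import json

# Naive idempotency guard; a real service would use a database
# or DynamoDB conditional write instead of process memory.
processed_ids = set()

def handler(event, context=None):
    """Lambda-style handler for SQS messages wrapping SNS notifications."""
    results = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])        # SQS message body
        message = json.loads(body["Message"])    # SNS envelope inside it
        if message["id"] in processed_ids:
            continue  # at-least-once delivery: drop duplicates
        processed_ids.add(message["id"])
        results.append(message["payload"])
    return results

# Simulated SQS event carrying one SNS-published message
sqs_event = {"Records": [{"body": json.dumps(
    {"Message": json.dumps({"id": "evt-1", "payload": "order-created"})}
)}]}
print(handler(sqs_event))  # → ['order-created']
```

Redelivering the same event id is then a no-op, which is the property that makes retries and dead-letter redrives safe in an event-driven microservice.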
. Expertise in building RESTful APIs following company standards. Understanding of Domain-Driven Design and modularization concepts. Asynchronous processing with approaches like co-routines, message queuing and event streaming (Kafka). Experience working with relational databases (PostgreSQL) such as evolving schemas, transaction isolation levels and writing optimal SQL queries. Understanding caching patterns (Redis). Experience with Docker and similar More ❯
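The asynchronous-processing pattern named above (co-routines plus message queuing) can be sketched with an in-process queue; this is a hypothetical illustration only, where an `asyncio.Queue` stands in for the Kafka topic a production service would consume.

```python
import asyncio

async def producer(queue: asyncio.Queue):
    """Publish a few messages, then a sentinel meaning 'stream ended'."""
    for i in range(3):
        await queue.put(f"event-{i}")
    await queue.put(None)

async def consumer(queue: asyncio.Queue, out: list):
    """Co-routine consumer: awaits each message without blocking a thread."""
    while True:
        msg = await queue.get()
        if msg is None:
            break
        out.append(msg.upper())  # stand-in for real processing work

async def main():
    queue, out = asyncio.Queue(), []
    await asyncio.gather(producer(queue), consumer(queue, out))
    return out

print(asyncio.run(main()))  # → ['EVENT-0', 'EVENT-1', 'EVENT-2']
```

The same shape scales to Kafka by swapping the queue for a consumer client and the sentinel for offset commits; the co-routines keep many in-flight messages progressing on one event loop.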