London (City of London), South East England, United Kingdom
Vallum Associates
Hands-on experience building and maintaining data ingestion pipelines. Proven track record of optimising queries, code, and system performance. Experience with open-source data processing frameworks (Apache Spark, Apache Kafka, Apache Airflow). Knowledge of distributed computing concepts and big data technologies. Experience with version control systems (Git) and CI/CD practices. Experience with relational databases (PostgreSQL, MySQL or …
… is mandatory. Responsibilities: Develop backend applications built on the principles of Event-Driven Microservices Architecture. Required skills: Python; AWS (SNS/SQS, Lambda, Step Functions, ECS); Spinnaker; Kubernetes; Kafka; Terraform; ORM frameworks. Nice to have: PySpark and Databricks experience is a plus. Knowledge and experience of the JPMorgan ecosystem/tools will carry higher value.
… Expertise in building RESTful APIs following company standards. Understanding of Domain-Driven Design and modularisation concepts. Asynchronous processing with approaches like co-routines, message queuing and event streaming (Kafka). Experience working with relational databases (PostgreSQL), such as evolving schemas, transaction isolation levels and writing optimal SQL queries. Understanding of caching patterns (Redis). Experience with Docker and similar …
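The listing above asks for asynchronous processing with co-routines, message queuing and event streaming, without showing what that pattern looks like. Purely as a rough illustration (not taken from the posting), the Python sketch below uses asyncio.Queue as an in-process stand-in for a broker such as Kafka; the event shape and names are invented.

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    """Emit a handful of events onto the queue, simulating an upstream service."""
    for i in range(5):
        event = {"id": i, "payload": f"order-{i}"}
        await queue.put(event)          # back-pressure: awaits if the queue is full
        await asyncio.sleep(0.1)
    await queue.put(None)               # sentinel to signal end of stream

async def consumer(queue: asyncio.Queue) -> None:
    """Drain events and process them, e.g. persist or forward them elsewhere."""
    while True:
        event = await queue.get()
        if event is None:
            break
        print(f"processed {event['payload']}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=10)
    await asyncio.gather(producer(queue), consumer(queue))

if __name__ == "__main__":
    asyncio.run(main())
```

In a real deployment the in-process queue would be replaced by a Kafka producer and consumer client, but the awaiting, back-pressure-aware structure stays the same.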
City of London, London, United Kingdom Hybrid / WFH Options
Syntax Consultancy Limited
Experience with data quality, data governance processes, Git version control and Agile development environments. Azure Data Engineer certification preferred, e.g. Azure Data Engineer Associate. Advantageous skills: Azure Event Hubs, Kafka, data visualisation tools, Power BI, Tableau, Azure DevOps, Docker, Kubernetes, Jenkins.
London (City of London), South East England, United Kingdom
HCLTech
… ETL (extract, transform, load) processes and data warehousing. 3. Strong understanding of SQL for data querying and validation. 4. Knowledge of big data technologies such as Hadoop, Spark, or Kafka is a plus. 5. Familiarity with scripting languages like Python, Java, or shell scripting. 6. Excellent analytical and problem-solving skills with a keen attention to detail. 7. Ability …
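The requirement above covers ETL processes and SQL-based validation. As an illustrative sketch only (none of this comes from the listing; the table, columns and sample data are invented), the standard-library Python example below runs a tiny extract-transform-load cycle into SQLite and finishes with a validation query.

```python
import csv
import io
import sqlite3

# Hypothetical sample input; a real pipeline would read from files, a queue or an API.
RAW_CSV = """id,amount,currency
1,10.50,GBP
2,7.25,gbp
3,,GBP
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop rows with missing amounts and normalise the currency code."""
    cleaned = []
    for row in rows:
        if not row["amount"]:
            continue
        cleaned.append((int(row["id"]), float(row["amount"]), row["currency"].upper()))
    return cleaned

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write the cleaned rows into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS payments (id INTEGER, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract(RAW_CSV)), conn)
    # Validation query: row count and total amount as a basic data-quality check.
    print(conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone())
```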
London (City of London), South East England, United Kingdom
HCLTech
… collaborating with cross-functional teams. Strong background and experience in data ingestion, transformation, modelling and performance tuning. Should have experience in designing and developing dashboards. Strong knowledge of Hadoop, Kafka and SQL/NoSQL. Should have experience in creating a roadmap to improve platform observability. Experience in leading mid-scale teams, with strong communication skills. Experience in Machine Learning and GCP …
Greater London, England, United Kingdom Hybrid / WFH Options
InterEx Group
… Data implementation projects. Experience in the definition of Big Data architecture with different tools and environments: cloud (AWS, Azure and GCP), Cloudera, NoSQL databases (Cassandra, MongoDB), ELK, Kafka, Snowflake, etc. Past experience with Data Engineering and data quality tools (Informatica, Talend, etc.). Previous involvement in working in a multilingual and multicultural environment. Proactive, tech-passionate and highly …
… valuation engines. Strong knowledge of object-oriented programming, data structures, and design patterns. Familiarity with market risk, credit risk, or counterparty risk concepts. Experience with messaging systems (e.g. Solace, Kafka, or RabbitMQ) and distributed architecture. Solid understanding of multi-threaded and low-latency system design. Exposure to quant libraries, risk factor decomposition, or sensitivities is a strong plus.
… best practices (Git, Azure DevOps). Enforcing data governance using Azure Purview and Unity Catalog. Optimising Spark jobs and SQL queries for performance and cost efficiency. Exploring emerging tech like Kafka/Event Hubs and Knowledge Graphs. What they're looking for: a strong communicator, someone who can build relationships across technical and business teams. Hands-on experience building pipelines …
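The role above calls out optimising Spark jobs and SQL queries for performance and cost. As a hedged sketch of two common techniques, rather than anything specific to this role, the PySpark example below broadcasts a small dimension table to avoid a shuffle join and caches a DataFrame that feeds more than one aggregation; the file paths and column names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("optimisation-sketch").getOrCreate()

orders = spark.read.parquet("/data/orders")        # large fact table (placeholder path)
countries = spark.read.parquet("/data/countries")  # small dimension table (placeholder path)

# Broadcast the small table so the join runs map-side, avoiding a shuffle of the
# large table across the cluster.
enriched = orders.join(broadcast(countries), on="country_code", how="left")

# Cache the enriched DataFrame because it is reused by two separate aggregations.
enriched.cache()

daily_totals = enriched.groupBy("order_date").sum("amount")
by_country = enriched.groupBy("country_name").count()

daily_totals.show()
by_country.show()
```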
London (City of London), South East England, United Kingdom
Computappoint
… areas. Qualifications and Requirements: 5+ years in senior data architecture roles, with expertise in distributed systems, high availability, and enterprise leadership experience. Proficiency in modern data platforms (Databricks, Snowflake, Kafka), container orchestration (Kubernetes/OpenShift), and multi-cloud deployments across AWS, Azure and GCP. Advanced knowledge of Big Data ecosystems (Hadoop/Hive/Spark), data lakehouse architectures, mesh topologies …
… Modular provisioning, testing, and deployment patterns. Kubernetes: workload orchestration and container management. CI/CD: GitHub Actions or Azure DevOps pipelines with end-to-end automation. Event-Driven Architecture: Kafka or similar messaging systems. Monitoring & Observability: Azure Monitor, OpenTelemetry, Prometheus, etc. Secure-by-Design Practices: Policy as Code, automated validation, compliance controls. Nice to haves: experience in regulated …