London (City of London), South East England, United Kingdom
Hadte Group
quickly, and delays of even milliseconds can have big consequences. Essential skills: 3+ years of experience in Python development. 3+ years of experience with open-source real-time data feeds (Amazon Kinesis, Apache Kafka, Apache Pulsar or Redpanda). Exposure to building and managing data pipelines in production. Experience integrating serverless functions (AWS, Azure or GCP). Passion for fintech and building products …
Responsibilities: Design and implement data lakehouse solutions on AWS using Medallion Architecture (Bronze/Silver/Gold layers). Build and optimize real-time and batch data pipelines leveraging Apache Spark, Kafka, and AWS Glue/EMR. Architect storage and processing layers using Parquet and Iceberg for schema evolution, partitioning, and performance optimization. Integrate AWS data services (S3, Redshift … guidance to engineering teams. Required Skills & Experience: Core Technical Expertise: Strong hands-on skills in AWS Data Services (S3, Redshift, Glue, EMR, Kinesis, Lake Formation, DynamoDB). Expertise in Apache Kafka (event streaming) and Apache Spark (batch and streaming). Proficiency in Python for data engineering and automation. Strong knowledge of Parquet, Iceberg, and Medallion Architecture. Finance & Capital …
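For candidates unfamiliar with the Medallion pattern this posting names, here is a minimal PySpark sketch of a Bronze-to-Silver promotion. Bucket paths, table names, and column names are hypothetical, and the Iceberg write assumes a Spark catalog has already been configured for Iceberg:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw events as-is (schema-on-read), partitioned by ingest date.
raw = spark.read.json("s3://example-bucket/landing/trades/")  # hypothetical path
(raw.withColumn("ingest_date", F.current_date())
    .write.mode("append")
    .partitionBy("ingest_date")
    .parquet("s3://example-bucket/bronze/trades/"))

# Silver: deduplicate and enforce types before analytics consumers read the data.
bronze = spark.read.parquet("s3://example-bucket/bronze/trades/")
silver = (bronze.dropDuplicates(["trade_id"])
                .withColumn("trade_ts", F.to_timestamp("trade_ts"))
                .filter(F.col("trade_id").isNotNull()))

# Iceberg handles schema evolution and hidden partitioning at this layer.
silver.writeTo("demo.silver_trades").using("iceberg").createOrReplace()
```

The Gold layer would follow the same shape: aggregate the Silver table into business-level views for reporting.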
variety of tool sets and data sources. Data Architecture experience with and understanding of data lakes, warehouses, and/or streaming platforms. Data Engineering experience with tooling such as Apache Spark and Kafka, and orchestration tools like Apache Airflow or equivalent. Continuous Integration/Continuous Deployment experience with CI/CD tools like Jenkins or GitLab tailored for …
London (City of London), South East England, United Kingdom
Infosys
Role – Technology Lead/Confluent Consulting Engineer Technology – Apache Kafka, Confluent Platform, Stream Processing Location – UK, Germany, Netherlands, France & Spain Job Description Today, the corporate landscape is dynamic and the world ahead is full of possibilities! None of the amazing things we do at Infosys would be possible without an equally amazing culture, the environment where ideas can flourish … data pipelines and integrations using Kafka and Confluent components. You will collaborate with data engineers, architects, and DevOps teams to deliver robust streaming solutions. Required: • Hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.) • Strong proficiency in Java, Python, or Scala • Solid understanding of event-driven architecture and data streaming patterns • Experience deploying … ecosystem will be given preference: • Experience with Kafka Connect, Kafka Streams, KSQL, Schema Registry, REST Proxy, Confluent Control Center • Hands-on with Confluent Cloud services, including ksqlDB Cloud and Apache Flink • Familiarity with Stream Governance, Data Lineage, Stream Catalog, Audit Logs, RBAC • Confluent certifications (Developer, Administrator, or Flink Developer) • Experience with Confluent Platform, Confluent Cloud managed services, multi-cloud …
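As a rough illustration of the hands-on Kafka experience this role calls for, below is a minimal produce/consume sketch using the confluent-kafka Python client. The broker address, topic name, and consumer group are placeholders:

```python
# pip install confluent-kafka
from confluent_kafka import Producer, Consumer

conf = {"bootstrap.servers": "localhost:9092"}  # assumed local broker

def delivery_report(err, msg):
    # Fires asynchronously once the broker acknowledges (or rejects) the message.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}] @ {msg.offset()}")

producer = Producer(conf)
producer.produce("orders", key="order-1", value=b'{"amount": 42}',
                 callback=delivery_report)
producer.flush()  # block until all queued messages are delivered

# Consume from the same topic as part of a consumer group.
consumer = Consumer({**conf, "group.id": "orders-readers",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["orders"])
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(f"key={msg.key()} value={msg.value()}")
finally:
    consumer.close()
```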
data ingestion, Databricks for ETL, modelling data for Power BI, and working closely with stakeholders to create products that drive smarter decisions. Building and optimising data ingestion pipelines using Apache Spark (ideally in Azure Databricks). Collaborating across teams to understand requirements and deliver fit-for-purpose data products. Supporting the productionisation of ML pipelines. Working in an Agile/… services and DevOps (CI/CD). Knowledge of data modelling (Star Schema) and Power BI. Bonus points for: Real-time data pipeline experience, Azure Data Engineer certifications, familiarity with Apache Kafka or unstructured data (e.g. voice). Ready to shape the future of data at Reassured? If you're excited by the idea of building smart, scalable solutions that make …
optimizing scalable data solutions using the Databricks platform. Key Responsibilities: • Lead the migration of existing AWS-based data pipelines to Databricks. • Design and implement scalable data engineering solutions using Apache Spark on Databricks. • Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines. • Optimize performance and cost-efficiency of Databricks workloads. • Develop and maintain … best practices for data governance, security, and access control within Databricks. • Provide technical mentorship and guidance to junior engineers. Must-Have Skills: • Strong hands-on experience with Databricks and Apache Spark (preferably PySpark). • Proven track record of building and optimizing data pipelines in cloud environments. • Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM …
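A simplified sketch of what such a migration typically produces: a raw S3 extract rewritten as PySpark on Databricks with a partitioned Delta table as the target. Paths, column names, and the table name are illustrative only:

```python
from pyspark.sql import functions as F

# On Databricks a SparkSession named `spark` is provided; reading directly
# from S3 assumes an instance profile or credentials are already configured.
events = (spark.read
          .option("header", "true")
          .csv("s3://example-bucket/raw/events/"))  # hypothetical source

cleaned = (events
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("event_date", F.to_date("event_ts"))
           .filter(F.col("user_id").isNotNull()))

# Delta gives ACID writes and time travel; partitioning by date keeps
# downstream queries from scanning the full table.
(cleaned.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("analytics.events_clean"))
```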
Edinburgh, Midlothian, United Kingdom Hybrid / WFH Options
Aberdeen Group
API-driven architectures. Oversee data governance initiatives including metadata management, data quality, and master data management (MDM). Evaluate and integrate big data technologies and streaming platforms such as Apache Kafka and Apache Spark. Collaborate with cross-functional teams to align data architecture with business goals and technical requirements. About the candidate: Exceptional stakeholder engagement, communication, and organisational …
data modeling (star schema, snowflake schema). Version Control: Practical experience with Git (branching, merging, pull requests). Preferred Qualifications (A Plus): Experience with a distributed computing framework like Apache Spark (using PySpark). Familiarity with cloud data services (AWS S3/Redshift, Azure Data Lake/Synapse, or Google BigQuery/Cloud Storage). Exposure to workflow orchestration tools (Apache Airflow, Prefect, or Dagster). Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
streaming architectures, to support advanced analytics, AI, and business intelligence use cases. Proven experience in designing architectures for structured, semi-structured, and unstructured data, leveraging technologies like Databricks, Snowflake, Apache Kafka, and Delta Lake to enable seamless data processing and analytics. Hands-on experience in data integration, including designing and optimising data pipelines (batch and streaming) and integrating cloud-based platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery) with legacy systems, ensuring performance and scalability. Deep knowledge of ETL/ELT processes, leveraging tools like Apache Airflow, dbt, or Informatica, with a focus on ensuring data quality, lineage, and integrity across the data lifecycle. Practical expertise in data and AI governance, including implementing frameworks for data privacy, ethical …
to deliver secure, efficient, and maintainable software solutions. • Implement and manage cloud infrastructure using AWS services. • Automate deployment and infrastructure provisioning using Terraform or Ansible. • Optimize application performance using Apache Spark for data processing where required. • Write clean, efficient, and maintainable code following best coding practices. • Troubleshoot, debug, and resolve complex technical issues in production and development environments. • Work … RDS, etc.). • Proficiency in Terraform or Ansible for infrastructure automation. • Working knowledge of Angular or similar UI frameworks. • Solid understanding of SQL and relational database design. • Experience with Apache Spark for distributed data processing (preferred). • Strong problem-solving, analytical, and debugging skills. • Excellent communication and teamwork abilities. Nice to Have: • Experience in CI/CD pipelines, Docker …
Level role! About the Role: We are looking for an experienced Senior Airflow Developer with over 5 years of experience to help transition our existing Windows scheduler jobs to Apache Airflow DAGs. In this role, you'll play a critical part in modernizing and optimizing our task automation processes by converting existing jobs into efficient, manageable, and scalable workflows … security configurations for future reference and knowledge sharing within the team. Requirements: 5+ years of experience in working with scheduling tools, task automation, and job orchestration. 5+ years of experience with Apache Airflow: experience authoring and managing DAGs, with a solid understanding of Airflow architecture and best practices. Experience with Airflow in an Azure cloud environment. Strong Python Skills: Ability to write …
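To make the Windows-scheduler-to-DAG conversion concrete, here is a minimal Airflow 2.x sketch. The job name, schedule, and script paths are hypothetical, and older Airflow versions use schedule_interval instead of schedule:

```python
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator

# A Task Scheduler job that ran a script daily at 06:00 becomes a DAG
# with an explicit cron schedule, retries, and dependency arrows.
with DAG(
    dag_id="nightly_export",          # hypothetical job name
    schedule="0 6 * * *",             # cron expression replaces the scheduler trigger
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,                    # don't backfill missed runs on first deploy
    default_args={"retries": 2},
) as dag:
    extract = BashOperator(task_id="extract",
                           bash_command="python /opt/jobs/extract.py")
    load = BashOperator(task_id="load",
                        bash_command="python /opt/jobs/load.py")
    extract >> load                   # load runs only after extract succeeds
```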
and machine learning workflows. Design and implement scalable, secure, and high-performance data lake and data warehouse solutions. Pipeline Orchestration: Develop, monitor, and optimize ETL/ELT workflows using Apache Airflow. Ensure data pipelines are robust, error-tolerant, and scalable for real-time and batch processing. Data Scraping & Unstructured Data Processing: Develop and maintain scalable web scraping solutions to … or a related field; or equivalent professional experience. Experience: 5+ years of experience in data engineering or a related field. Strong expertise in data pipeline orchestration tools such as Apache Airflow. Proven track record of designing and implementing data lakes and warehouses (experience with Azure is a plus). Demonstrated experience with Terraform for infrastructure provisioning and management.
a key role in ensuring that all data systems comply with industry regulations and security standards while enabling efficient access for analytics and operational teams. A strong command of Apache NiFi is essential for this role. You will be expected to design, implement, and maintain data flows using NiFi, ensuring accurate, efficient, and secure data ingestion, transformation, and delivery. … business needs and compliance requirements. Maintain documentation of data flows and processes, ensuring knowledge sharing and operational transparency. Skills & Experience: You will have the following skills or proven experience: Apache NiFi Expertise: Deep understanding of core NiFi concepts: FlowFiles, Processors, Controller Services, Schedulers, Web UI. Experience designing and optimizing data flows for batch, real-time streaming, and event-driven …
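For a sense of the FlowFile/session model this posting references, below is the common ExecuteScript (Jython) pattern for transforming a FlowFile's content. Most NiFi flows are assembled in the Web UI rather than in code, so treat this only as a sketch; `session`, `REL_SUCCESS`, and the transform itself are whatever NiFi injects and the flow requires:

```python
# Runs inside NiFi's ExecuteScript processor with the Jython engine;
# `session` and `REL_SUCCESS` are provided by NiFi, not imported.
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback

class UppercaseContent(StreamCallback):
    # Rewrites the FlowFile content stream, uppercasing the text.
    def process(self, inputStream, outputStream):
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        outputStream.write(text.upper().encode("utf-8"))

flowFile = session.get()
if flowFile is not None:
    flowFile = session.write(flowFile, UppercaseContent())
    # Attributes travel with the FlowFile and can drive downstream routing.
    flowFile = session.putAttribute(flowFile, "transformed", "true")
    session.transfer(flowFile, REL_SUCCESS)
```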
Role – Technology Architect/Confluent Solution Architect Technology – Apache Kafka, Confluent Platform, Stream Processing Location – UK, Germany, Netherlands, France & Spain Job Description Today, the corporate landscape is dynamic and the world ahead is full of possibilities! None of the amazing things we do at Infosys would be possible without an equally amazing culture, the environment where ideas can flourish … these values are upheld only because of our people. Your role: As a Confluent Solution Architect, you will lead the design and architecture of enterprise-grade streaming solutions using Apache Kafka and the Confluent Platform. You will work closely with clients to understand business requirements, define integration strategies, and guide implementation teams in building scalable, secure, and resilient data … streaming ecosystems. Strongly Preferred: • Experience in designing and architecting solutions using Apache Kafka, with hands-on experience in Confluent Kafka • Ability to lead client engagements, translate business requirements into technical solutions, and guide implementation teams • Deep understanding of Kafka internals, KRaft architecture, and Confluent components • Experience with Confluent Cloud, Stream Governance, Data Lineage, and RBAC • Expertise in stream processing …