end. A self-starter mentality, with the ability to independently lead projects and solve complex problems. Nice to Have: Experience with data lake technologies such as Spark, Ray, or Iceberg. Familiarity with Kubernetes for container orchestration. Experience with dbt. Familiarity with Dagster for managing data pipelines. Background or experience in biotech or scientific domains is a plus, but
London, England, United Kingdom Hybrid / WFH Options
Modo Energy Limited
pipelines. Implement and optimize automation processes using infrastructure-as-code (Terraform). Build and maintain data pipelines using Airflow. Manage our tech stack including Python, Node.js, PostgreSQL, MongoDB, Kafka, and Apache Iceberg. Optimize infrastructure costs and develop strategies for efficient resource utilization. Provide critical support by monitoring services and resolving production issues. Contribute to the development of new services as
Join to apply for the Data Platform Engineering Lead role at LSEG. LSEG (London Stock Exchange Group) is one
engineers + external partners) on complex data and cloud engineering projects Designing and delivering distributed solutions on an AWS-centric stack, with open-source flexibility Working with Databricks, Apache Iceberg, and Kubernetes in a cloud-agnostic environment Guiding architecture and implementation of large-scale data pipelines for structured and unstructured data Steering software stack decisions, best practices, and … Deep experience with Software Engineering, cloud deployments (especially AWS), and orchestration technologies Proven delivery of big data solutions—managing high-volume, complex data (structured/unstructured) Experience with Databricks, Apache Iceberg, or similar modern data platforms Experience building software environments from scratch, setting standards and best practices Experience leading and mentoring teams Startup/scaleup background and adaptability
SQL. Hands-on experience with AWS services, including AWS Lambda, Glue, Step Functions, Fargate and CDK. Experience with Docker containerization and managing container images using AWS ECR. Experience with Apache Iceberg or a similar lakehouse engine. Knowledge of building custom ETL solutions. Data modeling and T-SQL experience for managing business data and reporting. Ability to perform technical
queries, converting data, mapping outputs, and designing multi-step pipelines. About you: Proficient in Python Experience in building complex data transformation pipelines Experience with Databricks at scale, preferably with Iceberg Familiarity with Airflow or Dagster Experience with AWS and open-source technologies on top of DataWeave Desirable: Exposure to medical data, especially video/image data, not just tabular
Science or a related field. Experience working on and shipping live service games. Experience working on Spring Boot projects. Experience deploying software/services on Kubernetes. Experience working with Apache Spark and Iceberg.
standards, and drive team alignment Work closely with stakeholders to translate business needs into scalable solutions Tech environment includes Python, SQL, dbt, Databricks, BigQuery, Delta Lake, Spark, Kafka, Parquet, Iceberg (If you haven’t worked with every tool, that’s totally fine — my client values depth of thinking and engineering craft over buzzword familiarity.) What they’re looking for
workflows is a plus. Hands-on experience with multi-terabyte scale data processing. Familiarity with AWS; Kubernetes experience is a bonus. Knowledge of data lake technologies such as Parquet, Iceberg, AWS Glue etc. Strong Python software engineering skills. Pragmatic mindset - able to evaluate tradeoffs and find solutions that empower ML researchers to move quickly. Background in bioinformatics or chemistry is
geared towards a fantastic end-to-end engineering experience supported by excellent tooling and automation. Preferred Qualifications, Capabilities, and Skills: Good understanding of the Big Data stack (Spark/Iceberg). Ability to learn new technologies and patterns on the job and apply them effectively. Good understanding of established patterns, such as stability patterns/anti-patterns, event-based
London, England, United Kingdom Hybrid / WFH Options
Automata
and SQL for data processing, analysis and automation. Proficiency in building and maintaining batch and streaming ETL/ELT pipelines at scale, employing tools such as Airflow, Fivetran, Kafka, Iceberg, Parquet, Spark, Glue for developing end-to-end data orchestration, leveraging AWS services to ingest, transform and process large volumes of structured and unstructured data from diverse sources.
workshops including estimating, scoping and delivering customer proposals aligned with Analytics Solutions - Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr, Avro, Parquet, Iceberg, Hudi) - Experience developing software and data engineering code in one or more programming languages (Java, Python, PySpark, Node, etc) - AWS and other Data and AI aligned Certifications PREFERRED QUALIFICATIONS
About Us : LSEG (London Stock Exchange Group) is more than a diversified global financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering the services our customers expect from us. With
Cardiff, Wales, United Kingdom Hybrid / WFH Options
Identify Solutions
Want to drive a top brand's Data team with 1m+ users? If you love building software in Python, implementing robust data pipelines & driving best practices, you may be interested in a Senior Engineer role I have with a highly
About Us : Role Description : As a Senior Lead within Engineering, you'll design and implement functionalities focusing on Data Engineering tasks. You'll work with semi-structured data to ingest and distribute it on a Microsoft Fabric-based platform, modernizing
engineers + external partners) across complex data and cloud engineering projects Designing and delivering distributed solutions on an AWS-centric stack, with open-source flexibility Working with Databricks, Apache Iceberg, and Kubernetes in a cloud-agnostic environment Guiding architecture and implementation of large-scale data pipelines for structured and unstructured data Steering direction on software stack, best practices … especially AWS), and orchestration technologies Proven delivery of big data solutions—not necessarily at FAANG scale, but managing high-volume, complex data (structured/unstructured) Experience working with Databricks, Apache Iceberg, or similar modern data platforms Experience building software environments from the ground up, setting best practice and standards Experience leading and mentoring teams Worked in a … startup/scaleup background and someone who is adaptable Tech Stack Snapshot Languages: Python Cloud: AWS preferred, cloud-agnostic approach encouraged Data: SQL, Databricks, Iceberg, Kubernetes, large-scale data pipelines CI/CD & Ops: Open source tools, modern DevOps principles Why Join? Impactful Work – Help solve security problems that truly matter Ownership & Autonomy – Freedom to shape the stack and