across complex data and cloud engineering projects Designing and delivering distributed solutions on an AWS-centric stack, with open-source flexibility Working with Databricks, Apache Iceberg, and Kubernetes in a cloud-agnostic environment Guiding architecture and implementation of large-scale data pipelines for structured and unstructured data Steering … Proven delivery of big data solutions - not necessarily at FAANG scale, but managing high-volume, complex data (structured/unstructured) Experience working with Databricks, Apache Iceberg, or similar modern data platforms Experience of building software environments from the ground up, setting best practice and standards Experience leading and … that could be preferable but open for this person to drive the Engineering practice Cloud: AWS preferred, cloud-agnostic approach encouraged Data: SQL, Databricks, Iceberg, Kubernetes, large-scale data pipelines CI/CD & Ops: Open source tools, modern DevOps principles 🚀 Why Join Us? Impactful Work – Help solve security problems …
London (City of London), South East England, United Kingdom
developrec
grasp of data governance/data management concepts, including metadata management, master data management and data quality. Ideally, have experience with a Data Lakehouse toolset (Iceberg) What you'll get in return Hybrid working (4 days per month in London HQ + as and when required) Access to market-leading …
building ETL pipelines using Python. Experience of SQL and relational databases. Experience with AWS or similar Cloud technology. Experience with S3, Kafka, Airflow, and Iceberg will be beneficial. Experience in the financial markets with a focus on securities & derivatives trading. Exceptional communication skills, attention to detail, and adaptability.
Kafka, etc.). Experience in data quality testing; adept at writing test cases and scripts, presenting and resolving data issues. Experience with Databricks, Snowflake, Iceberg is required. Preferred qualifications, capabilities, and skills Understanding of application and data design disciplines with an emphasis on real-time processing and delivery, e.g. …
Scala Starburst and Athena Kafka and Kinesis DataHub MLflow and Airflow Docker and Terraform Kafka, Spark, Kafka Streams and KSQL dbt AWS, S3, Iceberg, Parquet, Glue and EMR for our Data Lake Elasticsearch and DynamoDB More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work …
cloud-native environments (AWS preferred) Strong proficiency with Python and SQL Extensive hands-on experience in AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation) Solid understanding of data …
London, South East England, United Kingdom Hybrid / WFH Options
Radley James
Slough, South East England, United Kingdom Hybrid / WFH Options
Radley James
Data Engineer Language Skills (Python, SQL, JavaScript) Experience of Azure, AWS or GCP cloud platforms and Data Lake/Warehousing platforms such as Snowflake, Iceberg etc. Experience of various ETL and Streaming Tools (Flink, Spark) Experience of a variety of data mining techniques (APIs, GraphQL, Website Scraping) Ability to …
them into a multi-step pipeline) About you: Strong Python Experience building complex data transformation pipelines Experience with Databricks at scale, preferably experience with Iceberg Experience with Airflow or Dagster Experience with AWS & open source technologies on top of dataweave Desirable - Medical data exposure is useful but video/…
data. What you offer Experience with AWS cloud. Experience programming, debugging, and running production systems in Python. Exposure to open-source technologies such as Iceberg, Trino, and Airflow. Passionate about the use and adoption of these capabilities, focused on user experience and ensuring our business sees real value from …
processes using infrastructure-as-code (Terraform) Build and maintain data pipelines using Airflow. Manage our tech stack including Python, Node.js, PostgreSQL, MongoDB, Kafka, and Apache Iceberg. Optimize infrastructure costs and develop strategies for efficient resource utilization. Provide critical support by monitoring services and resolving production issues. Contribute to the …
analysis and automation. Proficiency in building and maintaining batch and streaming ETL/ELT pipelines at scale, employing tools such as Airflow, Fivetran, Kafka, Iceberg, Parquet, Spark, Glue for developing end-to-end data orchestration, leveraging AWS services to ingest, transform and process large volumes of structured and …
using tools and techniques such as BDD, Data Reconciliation, Source Control, TDD, Jenkins. Documenting configurations, processes, and best practices. Knowledge of file and table formats such as JSON, Avro, and Iceberg. Basic knowledge of AWS technologies like IAM roles, Lake Formation, Security Groups, CloudFormation, Redshift. Big Data/Data Warehouse testing experience. Experience in the …
the future of the AI Data Cloud. Join the Snowflake team. Snowflake is seeking an accomplished Principal Sales Specialist, Data Engineering Platform (Openflow and Iceberg/Polaris) to drive sales execution for our ingestion workloads for the EMEA markets. Reporting to the Managing Director, Data Engineering Platform Sales, this …