Responsibilities: Design and develop scalable, high-performance data processing applications using Scala. Build and optimize ETL pipelines for handling large-scale datasets. Work with Apache Spark, Kafka, Flink, and other distributed data frameworks to process massive amounts of data. Develop and maintain data lake and warehouse solutions using technologies … like Databricks Delta Lake, Apache Iceberg, or Apache Hudi. Write clean, maintainable, and well-documented code. Optimize query performance, indexing strategies, and storage formats (JSON, Parquet, Avro, ORC). Implement real-time streaming solutions and event-driven architectures. Collaborate with data scientists, analysts, and DevOps engineers to … Master's degree in Computer Science, Engineering, or a related field. 10+ years of experience in software development with Scala. Hands-on experience with Apache Spark (batch & streaming) using Scala. Experience in developing and maintaining data lakes and warehouses using technologies like Databricks Delta Lake, Apache Iceberg …
across complex data and cloud engineering projects: designing and delivering distributed solutions on an AWS-centric stack, with open-source flexibility; working with Databricks, Apache Iceberg, and Kubernetes in a cloud-agnostic environment; guiding architecture and implementation of large-scale data pipelines for structured and unstructured data; steering … cloud deployments (especially AWS) and orchestration technologies. Proven delivery of big data solutions, not necessarily at FAANG scale, but managing high-volume, complex data (structured/unstructured). Experience working with Databricks, Apache Iceberg, or similar modern data platforms. Experience building software environments from the ground up, setting best practices and standards. Experience leading and mentoring teams; a startup/scaleup background and adaptability. Tech Stack Snapshot: Languages: Python. Cloud: AWS preferred, cloud-agnostic approach encouraged. Data: SQL, Databricks, Iceberg, Kubernetes, large-scale data pipelines. CI/CD & Ops: open-source tools, modern DevOps principles. Why Join Us? Impactful Work: help solve security problems …
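For illustration of the kind of pipeline work this listing describes, here is a minimal PySpark sketch that writes a dataset to an Apache Iceberg table. It is not the team's actual code: the catalog name, S3 paths, and table/column names are assumptions, and it presumes the Iceberg Spark runtime package is available on the classpath.

```python
from pyspark.sql import SparkSession

# Hypothetical Iceberg catalog named "lake", backed by an S3 warehouse path.
spark = (
    SparkSession.builder
    .appName("orders-lakehouse-sketch")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Read semi-structured input, keep the columns of interest, and materialise
# them as an Iceberg table that other engines (Trino, Athena) can also query.
events = spark.read.json("s3://example-bucket/raw/events/")
(
    events.select("event_id", "user_id", "event_type", "event_time")
    .writeTo("lake.analytics.events")
    .using("iceberg")
    .createOrReplace()
)
```

The same `writeTo` API also supports incremental `append()` calls, which is typically how a recurring batch pipeline would load into an existing table rather than recreating it each run.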
not necessary). Agile. The following is DESIRABLE, not essential: AWS or GCP; buy-side; data tools such as Glue, Athena, Airflow, Ignite, DBT, Arrow, Iceberg, Dremio; Fixed Income performance, risk or attribution; TypeScript and Node. Role: Python Developer (Software Engineer Programmer Developer Python Fixed Income JavaScript Node Fixed Income) … times a week. The tech environment is very new and will soon likely include exposure to the following: Glue, Athena, Airflow, Ignite, DBT, Arrow, Iceberg, Dremio. This is an environment that has been described as the only corporate environment with a start-up/fintech attitude towards technology. Hours …
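Among the desirable tools above, Glue and Athena are the AWS serverless catalog and query services. As a hedged illustration only (the database, table, bucket, and region names below are invented placeholders, not details from the listing), querying Athena from Python via boto3 typically looks like this:

```python
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

def run_query(sql: str):
    """Submit a query to Athena, poll until it finishes, and return the rows."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "fixed_income"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://example-results-bucket/athena/"},
    )
    query_id = execution["QueryExecutionId"]

    # Athena queries run asynchronously, so poll the execution state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]

rows = run_query("SELECT isin, yield_to_maturity FROM bond_analytics LIMIT 10")
```

In practice this pattern is usually wrapped in an Airflow task or a DBT adapter rather than called directly from application code.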
implementing REST APIs. Familiarity with AWS. Preferred experience/skills: an understanding of the asset management business and/or financial markets; experience with Iceberg or Snowflake is a bonus; experience with gRPC; experience working in a Scrum team. Commitment to Diversity, Equity, and Inclusion: At T. Rowe Price …
Hive, Redshift, Kafka, etc. Experience in data quality testing; capable of writing test cases and scripts, and resolving data issues. Experience with Databricks, Snowflake, and Iceberg is required. Preferred qualifications, capabilities, and skills: understanding of application and data design disciplines, especially real-time processing with Kafka. Knowledge of the Commercial …
stakeholders at all levels, provide training, and solicit feedback. Preferred qualifications, capabilities, and skills: experience with big-data technologies such as Splunk, Trino, and Apache Iceberg. Data science experience. AI/ML experience with building models. AWS certification (e.g., AWS Certified Solutions Architect, AWS Certified Developer).
grasp of data governance/data management concepts, including metadata management, master data management, and data quality. Ideally, experience with a data lakehouse toolset (Iceberg). What you'll get in return: hybrid working (4 days per month in London HQ, plus as and when required); access to market-leading …
Scala; Starburst and Athena; Kafka and Kinesis; DataHub; MLflow and Airflow; Docker and Terraform; Kafka, Spark, Kafka Streams, and KSQL; DBT; AWS, S3, Iceberg, Parquet, Glue, and EMR for our data lake; Elasticsearch and DynamoDB. More information: enjoy fantastic perks like private healthcare & dental insurance, a generous work …
intellectually curious, and team-oriented. Strong communication skills. Experience with options trading or options data is a strong plus. Experience with technologies like KDB, Apache Iceberg, and Lake Formation will be a meaningful differentiator.
working in cloud environments (AWS). Strong proficiency with Python and SQL. Extensive hands-on experience with AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation). Solid understanding of data …
Data Engineer language skills (Python, SQL, JavaScript). Experience with Azure, AWS, or GCP cloud platforms and data lake/warehousing platforms such as Snowflake, Iceberg, etc. Experience with various ETL and streaming tools (Flink, Spark). Experience with a variety of data mining techniques (APIs, GraphQL, website scraping). Ability to …
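As a rough sketch of the streaming-ETL side mentioned above (the broker address, topic, and S3 paths are placeholders, and it assumes the spark-sql-kafka package is available), ingesting events from Kafka with Spark Structured Streaming and landing them for later loading into a warehouse such as Snowflake or an Iceberg table could look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("stream-ingest-sketch").getOrCreate()

# Consume raw events from a hypothetical Kafka topic.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "raw-events")
    .load()
)

# Cast the binary key/value columns to strings and land them as Parquet,
# which downstream batch jobs can pick up for warehouse/lakehouse loading.
query = (
    raw.select(col("key").cast("string"), col("value").cast("string"), "timestamp")
    .writeStream.format("parquet")
    .option("path", "s3://example-bucket/landing/raw-events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/raw-events/")
    .start()
)
query.awaitTermination()
```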