Apache Iceberg Jobs

23 of 23 Apache Iceberg Jobs

Data Architect (Trading) (London)

London, UK
Hybrid / WFH Options
Keyrock
Expertise in data warehousing, data modelling, and data integration. Experience in MLOps and machine learning pipelines. Proficiency in SQL and data manipulation languages. Experience with big data platforms (including Apache Arrow, Apache Spark, Apache Iceberg, and ClickHouse) and cloud-based infrastructure on AWS. Education & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related More ❯
Employment Type: Full-time
Posted:

Senior Data Engineer

Leeds, West Yorkshire, Yorkshire, United Kingdom
The Bridge (IT Recruitment) Limited
able to work across the full data cycle. • Proven experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD • Coding experience in Apache Spark, Iceberg or Python (Pandas) • Experience in change and release management • Experience in data warehouse design and data modelling • Experience managing data migration projects • Cloud data platform development … the AWS services like Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation, CodeBuild, CI/CD, GitHub, IAM, SQS, SNS, Aurora DB • Good experience with DBT, Apache Iceberg, Docker, Microsoft BI stack (nice to have) • Experience in data warehouse design (Kimball and lakehouse, medallion and data vault) is a definite preference, as is knowledge More ❯
Employment Type: Permanent
Salary: £65,000
Posted:

Python Developer Software Engineer AWS Finance Trading London

London, United Kingdom
Hybrid / WFH Options
Joseph Harry Ltd
React or Angular good but not necessary) Agile The following is DESIRABLE, not essential: AWS or GCP Buy-side Data tools such as Glue, Athena, Airflow, Ignite, DBT, Arrow, Iceberg, Dremio Fixed Income performance, risk or attribution TypeScript and Node Role: Python Developer (Software Engineer Programmer Developer Python Fixed Income JavaScript Node Fixed Income Credit Rates Bonds ABS Vue … in the office 1-2 times a week. The tech environment is very new and will soon likely include exposure to the following: Glue, Athena, Airflow, Ignite, DBT, Arrow, Iceberg, Dremio. This is an environment that has been described as the only corporate environment with a start-up/fintech attitude towards technology. Hours are 9-5. Salary More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Head of Data Engineering

London, United Kingdom
Hybrid / WFH Options
Zego
Skills: Proven expertise in designing, building, and operating data pipelines, warehouses, and scalable data architectures. Deep hands-on experience with modern data stacks. Our tech includes Python, SQL, Snowflake, Apache Iceberg, AWS S3, PostgresDB, Airflow, dbt, and Apache Spark, deployed via AWS, Docker, and Terraform. Experience with similar technologies is essential. Coaching & Growth Mindset: Passion for developing More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Head of Data Engineering (London)

London, UK
Hybrid / WFH Options
Zego
Skills: Proven expertise in designing, building, and operating data pipelines, warehouses, and scalable data architectures. Deep hands-on experience with modern data stacks. Our tech includes Python, SQL, Snowflake, Apache Iceberg, AWS S3, PostgresDB, Airflow, dbt, and Apache Spark, deployed via AWS, Docker, and Terraform. Experience with similar technologies is essential. Coaching & Growth Mindset: Passion for developing More ❯
Employment Type: Full-time
Posted:

Senior Software Engineer, Python (London)

London, UK
Hybrid / WFH Options
YouGov
MySQL. Exposure to Docker, Kubernetes, AWS, Helm, Terraform, Vault, Grafana, ELK Stack, New Relic. Relevant experience in the maintenance of data APIs and data lake architectures, including experience with Apache Iceberg, Trino/Presto, ClickHouse, Snowflake, BigQuery. Master's degree in Computer Science or Engineering-related field. Get to know us better... YouGov is a global More ❯
Employment Type: Full-time
Posted:

Senior Software Engineer, Data Platform

Manhattan, New York, United States
Selby Jennings
Processing Technologies: Familiarity with a range of technologies such as Flink, Spark, Polars, Dask, etc. Data Storage Solutions: Knowledge of various storage technologies, including S3, RDBMS, NoSQL, Delta/Iceberg, Cassandra, ClickHouse, Kafka, etc. Data Formats and Serialization: Experience with multiple data formats and serialization systems like Arrow, Parquet, Protobuf/gRPC, Avro, Thrift, JSON, etc. ETL Pipelines: Proven More ❯
Employment Type: Permanent
Salary: USD Annual
Posted:

Data Engineer (AWS) with Security Clearance

Columbia, South Carolina, United States
Hybrid / WFH Options
Systemtec Inc
technologies and cloud-based technologies: AWS Services, State Machines, CDK, Glue, TypeScript, CloudWatch, Lambda, CloudFormation, S3, Glacier Archival Storage, DataSync, Lake Formation, AppFlow, RDS PostgreSQL, Aurora, Athena, Amazon MSK, Apache Iceberg, Spark, Python. ONSITE: Partially onsite 3 days per week (Tue, Wed, Thurs) and as needed. Standard work hours: 8:30 AM - 5:00 PM. Required Qualifications of More ❯
Employment Type: Permanent
Salary: USD Annual
Posted:

Data Engineering Manager

London, South East, England, United Kingdom
Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
modern data tooling. Introduce and advocate for scalable, efficient data processes and platform enhancements. Tech Environment: Python, SQL, Spark, Airflow, dbt, Snowflake, Postgres, AWS (S3), Docker, Terraform. Exposure to Apache Iceberg, streaming tools (Kafka, Kinesis), and ML pipelines is a bonus. What We're Looking For: 5+ years in Data Engineering, including 2+ years in a leadership or More ❯
Employment Type: Full-Time
Salary: £85,000 - £95,000 per annum
Posted:

Senior Software Engineer, Exposure

London, United Kingdom
Chainalysis Inc
Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems. Technologies we use (experience not required): AWS serverless architectures, Kubernetes, Spark, Flink, Databricks, Parquet, Iceberg, Delta Lake, Paimon, Terraform, GitHub (including GitHub Actions), Java, PostgreSQL. About Chainalysis: Blockchain technology is powering a growing wave of innovation. Businesses and governments around the world are using More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Software Engineer, Exposure (London)

Highbury, Greater London, UK
Chainalysis Inc
Terraform and Kubernetes is a plus! A genuine excitement for significantly scaling large data systems. Technologies we use (experience not required): AWS serverless architectures, Kubernetes, Spark, Flink, Databricks, Parquet, Iceberg, Delta Lake, Paimon, Terraform, GitHub (including GitHub Actions), Java, PostgreSQL. About Chainalysis: Blockchain technology is powering a growing wave of innovation. Businesses and governments around the world are using More ❯
Employment Type: Full-time
Posted:

Data Engineer with Security Clearance

Manassas, Virginia, United States
The Arena
ships, aircraft). Required Experience: Active Secret clearance. 4-7 years in data engineering, preferably within secure or classified environments. Proficiency in Python, Spark, SQL, and orchestration tools like Apache Airflow. Hands-on experience with data serialization formats such as protobuf, Arrow, FlatBuffers, or Cap'n Proto. Familiarity with data storage formats like Parquet or Avro. Experience with modern … analytic storage technologies such as Apache Iceberg or DuckDB. Binary message parsing experience. Strong understanding of classified data handling, secure networking, and compliance in high-side or air-gapped environments. Preferred Experience: Familiarity with IC standards (UDS, IC ITE) and secure cloud environments (e.g., AWS GovCloud, C2S). Experience deploying LLMs or machine learning models within classified network More ❯
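As an illustration of the serialization and analytic-storage pairing this role describes (Parquet alongside engines such as DuckDB), here is a minimal, hypothetical sketch using pyarrow and DuckDB's Python API; the column names and file path are assumptions for the example, not details from the posting.

```python
# Hypothetical sketch: write Arrow data to Parquet, then query it in place with DuckDB.
# Column names and the file path are illustrative assumptions.
import duckdb
import pyarrow as pa
import pyarrow.parquet as pq

# Build a small in-memory Arrow table and persist it as a Parquet file.
readings = pa.table({
    "sensor_id": [1, 1, 2],
    "value": [0.42, 0.57, 0.91],
})
pq.write_table(readings, "readings.parquet")

# DuckDB queries the Parquet file directly, with no separate load step.
result = duckdb.sql(
    "SELECT sensor_id, avg(value) AS avg_value "
    "FROM 'readings.parquet' GROUP BY sensor_id"
).fetchall()
print(result)
```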
Employment Type: Permanent
Salary: USD Annual
Posted:

Data Engineering Principal (1 Braham Street, London, United Kingdom) (London)

London, UK
BT Group
AWS services such as S3, Glue, Lambda, Redshift, EMR, Kinesis, and more, covering data pipelines, warehousing, and lakehouse architectures. Drive the migration of legacy data workflows to lakehouse architectures, leveraging Apache Iceberg to enable unified analytics and scalable data management. Operate as a subject matter expert across multiple data projects, providing strategic guidance on best practices in design, development … in designing and implementing scalable data engineering solutions. Bring extensive experience in software architecture and solution design, ensuring robust and future-proof systems. Hold specialised proficiency in Python and Apache Spark, enabling efficient processing of large-scale data workloads. Demonstrate the ability to set technical direction, uphold high standards for code quality, and optimise performance in data-intensive environments. … of continuous learning and innovation. Extensive background in software architecture and solution design, with deep expertise in microservices, distributed systems, and cloud-native architectures. Advanced proficiency in Python and Apache Spark, with a strong focus on ETL data processing and scalable data engineering workflows. In-depth technical knowledge of AWS data services, with hands-on experience implementing data pipelines More ❯
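For context on the lakehouse migration this role describes, below is a minimal, hedged sketch of rewriting a legacy Parquet dataset as an Apache Iceberg table with PySpark. The catalog name, S3 paths, and table identifiers are illustrative assumptions, and a matching iceberg-spark-runtime package is assumed to be on the Spark classpath.

```python
# Hedged sketch: register an Iceberg catalog in Spark and rewrite a legacy dataset as an
# Iceberg table. Catalog name, bucket, and table names are assumptions for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-migration-sketch")
    # A Hadoop-style Iceberg catalog named "lake", backed by object storage.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Read the legacy Parquet data and rewrite it as an Iceberg table, gaining snapshots,
# schema evolution, and hidden partitioning for downstream engines (Trino, Athena, etc.).
legacy_df = spark.read.parquet("s3://example-bucket/legacy/events/")
legacy_df.writeTo("lake.analytics.events").using("iceberg").createOrReplace()
```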
Employment Type: Part-time
Posted:

Head of Data Engineering

London, South East, England, United Kingdom
Hybrid / WFH Options
Harnham - Data & Analytics Recruitment
options. Hybrid working - 1 day a week in a central London office. High-growth scale-up with a strong mission and serious funding. Modern tech stack: Python, SQL, Snowflake, Apache Iceberg, AWS, Airflow, dbt, Spark. Work cross-functionally with engineering, product, analytics, and data science leaders. What You'll Be Doing: Lead, mentor, and grow a high-impact More ❯
Employment Type: Full-Time
Salary: £100,000 - £120,000 per annum
Posted:

Senior Data Engineer, Data Platform

London, United Kingdom
Macquarie Bank Limited
and scalable environments for our data platforms. Leveraging cloud-native technologies and AWS tools such as AWS S3, EKS, Glue, Airflow, Trino, and Parquet, you will prepare to adopt Apache Iceberg for greater performance and flexibility. You'll address high-performance data workloads, ensuring seamless execution of massive queries, including 600+ billion-row queries in Redshift, by designing More ❯
Employment Type: Permanent
Salary: GBP Annual
Posted:

Senior Data Architect

South East, United Kingdom
Hybrid / WFH Options
Anson Mccade
DBT, and data governance frameworks. Preferred: Certifications in cloud/data technologies. Experience with API/interface modelling and CI/CD (e.g. GitHub Actions). Knowledge of Atlan and Iceberg tables. Reference: AMC/SCU/SDA/3007. Postcode: SW1 #secu More ❯
Employment Type: Permanent
Posted:

Systems Engineer with Security Clearance

McLean, Virginia, United States
Country Intelligence Group
The role also involves optimizing database architecture and performance, implementing DevSecOps practices, and building CI/CD pipelines using Python, Bash, and Terraform. Preferred candidates will have experience with Apache Spark, Apache NiFi, data governance, and ETL standardization. Familiarity with Glue, Hive, and Iceberg or similar technologies is a plus. Tasks Performed: • Bridge communication between technical staff … data between systems, and optimize queries. • Plan and execute large-scale data migrations. • Improve database performance through architecture and tuning. • Create and maintain data flows using ETL tools like Apache NiFi. • Manage infrastructure as code using Python, Bash, and Terraform. • Integrate security into development and deployment workflows. • Build and support automated CI/CD pipelines. Education, Experience and Qualifications … SQL databases. • Demonstrated experience in large-scale data migration efforts. • Demonstrated experience with database architecture, performance design methodologies, and system-tuning recommendations. Preference for familiarity with Glue, Hive, and Iceberg or similar. • Demonstrated experience with Python, Bash, and Terraform. • Demonstrated experience with DevSecOps solutions and tools. • Demonstrated experience implementing CI/CD pipelines using industry-standard processes. • Demonstrated experience More ❯
Employment Type: Permanent
Salary: USD Annual
Posted:

Data Architect

South East, United Kingdom
Hybrid / WFH Options
Anson Mccade
of real-time and analytical data pipelines, metadata, and cataloguing (e.g., Atlan). Strong communication, stakeholder management, and documentation skills. Preferred (but not essential): AWS or Snowflake certifications; knowledge of Apache Airflow, DBT, GitHub Actions; experience with Iceberg tables and data product thinking. Why Apply? Work on high-impact, high-scale client projects. Join a technically elite team with More ❯
Employment Type: Permanent, Work From Home
Posted:

Data Architect (London)

London, UK
Women in Data
developing and implementing enterprise data models. Experience with Interface/API data modelling. Experience with CI/CD, GitHub Actions (or similar). Knowledge of Snowflake/SQL. Knowledge of Apache Airflow. Knowledge of DBT. Familiarity with Atlan for data catalog and metadata management. Understanding of Iceberg tables. Who we are: We're a business with a global reach that More ❯
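Since several of these roles pair Apache Airflow with DBT, a minimal, hypothetical Airflow DAG sketch follows; the DAG id, schedule, and dbt project path are assumptions, and the `schedule` argument reflects Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Hypothetical sketch: a daily Airflow DAG that runs dbt models via a Bash task.
# DAG id, schedule, and the dbt project directory are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_run",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions.
    catchup=False,
) as dag:
    # Invoke dbt against an assumed project directory; swap in your own paths and targets.
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
```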
Employment Type: Full-time
Posted:

Infrastructure/Platform Engineer - Apache

London, United Kingdom
Experis - ManpowerGroup
Role Title: Infrastructure/Platform Engineer - Apache. Duration: 9 Months. Location: Remote. Rate: £ - Umbrella only. Would you like to join a global leader in consulting, technology services and digital transformation? Our client is at the forefront of innovation to address the entire breadth of opportunities in the evolving world of cloud, digital and platforms. Role purpose/summary: • Refactor … for logs, metrics, and error handling to support monitoring and incident response. • Align implementations with InfoSum's privacy, security, and compliance practices. Required Skills and Experience: • Proven experience with Apache Spark (Scala, Java, or PySpark), including performance optimization and advanced tuning techniques. • Strong troubleshooting skills in production Spark environments, including diagnosing memory usage, shuffles, skew, and executor behavior. • Experience … cloud environments (AWS, GCP, Azure). • In-depth knowledge of AWS Glue, including job authoring, triggers, and cost-aware configuration. • Familiarity with distributed data formats (Parquet, Avro), data lakes (Iceberg, Delta Lake), and cloud storage systems (S3, GCS, Azure Blob). • Hands-on experience with Docker, Kubernetes, and CI/CD pipelines. • Strong documentation and communication skills, with the More ❯
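As background on the Spark tuning themes listed above (shuffles, skew, executor behaviour), here is a minimal, hypothetical PySpark configuration sketch; the values and S3 paths are placeholders to adjust per workload, not recommendations from the client.

```python
# Hypothetical sketch of common Spark tuning levers: adaptive execution, skew handling,
# shuffle parallelism, executor sizing, and a broadcast join. Values are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder.appName("tuning-sketch")
    # Adaptive Query Execution coalesces shuffle partitions and splits skewed ones at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    # Baseline shuffle parallelism before AQE adjusts it.
    .config("spark.sql.shuffle.partitions", "400")
    # Executor sizing: illustrative only; size to the cluster and data volume.
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    .getOrCreate()
)

# Broadcasting a small dimension table avoids a shuffle-heavy join against the large side.
facts = spark.read.parquet("s3://example-bucket/facts/")
dims = spark.read.parquet("s3://example-bucket/dims/")
joined = facts.join(broadcast(dims), "dim_id")
```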
Employment Type: Permanent
Salary: GBP Annual
Posted:

Infrastructure/Platform Engineer - Apache

London, United Kingdom
Experis
Role Title: Infrastructure/Platform Engineer - Apache. Duration: 9 Months. Location: Remote. Rate: £ - Umbrella only. Would you like to join a global leader in consulting, technology services and digital transformation? Our client is at the forefront of innovation to address the entire breadth of opportunities in the evolving world of cloud, digital and platforms. Role purpose/summary: • Refactor … for logs, metrics, and error handling to support monitoring and incident response. • Align implementations with InfoSum's privacy, security, and compliance practices. Required Skills and Experience: • Proven experience with Apache Spark (Scala, Java, or PySpark), including performance optimization and advanced tuning techniques. • Strong troubleshooting skills in production Spark environments, including diagnosing memory usage, shuffles, skew, and executor behavior. • Experience … cloud environments (AWS, GCP, Azure). • In-depth knowledge of AWS Glue, including job authoring, triggers, and cost-aware configuration. • Familiarity with distributed data formats (Parquet, Avro), data lakes (Iceberg, Delta Lake), and cloud storage systems (S3, GCS, Azure Blob). • Hands-on experience with Docker, Kubernetes, and CI/CD pipelines. • Strong documentation and communication skills, with the More ❯
Employment Type: Contract
Posted:

Senior Data Engineer

Chicago, Illinois, United States
Selby Jennings
At least 3 years of hands-on experience with Kafka, including stream processing and cluster management. 2+ years working with large-scale data storage solutions (e.g., S3, HDFS, Databricks, Iceberg). Proficiency with distributed data processing tools like Apache Spark or Flink. Strong programming background in Java, Python, and SQL. Familiarity with Python-based data science libraries and toolkits More ❯
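To illustrate the Kafka-plus-Spark stream-processing combination mentioned above, a minimal, hedged Structured Streaming sketch follows; the broker address, topic, and storage paths are assumptions, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
# Hedged sketch: consume a Kafka topic with Spark Structured Streaming and append the
# decoded records to object storage as Parquet. Broker, topic, and paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Kafka records arrive with binary key/value columns; cast them to strings here.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "trades")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

# Continuously append to Parquet; the checkpoint location tracks streaming progress.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/trades/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/trades/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```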
Employment Type: Permanent
Salary: USD Annual
Posted:

Starburst Consultant with Security Clearance

Washington, Washington DC, United States
Sterling Computers
multiple heterogeneous data sources. • Good knowledge of warehousing and ETLs. Extensive knowledge of popular database providers such as SQL Server, PostgreSQL, Teradata and others. • Proficiency in technologies in the Apache Hadoop ecosystem, especially Hive, Impala and Ranger. • Experience working with open file and table formats such as Parquet, Avro, ORC, Iceberg and Delta Lake. • Extensive knowledge of automation and More ❯
Employment Type: Permanent
Salary: USD Annual
Posted:
Apache Iceberg Salary Percentiles
10th Percentile: £100,000
25th Percentile: £105,000
Median: £130,000
75th Percentile: £137,500