in Kubernetes, your work will empower analysts, data scientists, and leadership with the insights they need, when they need them. If you're fluent in tools like Spark, Trino, Iceberg, and Python, and you thrive in high-security environments, this could be your next mission-critical opportunity. In This Role, You'll:
• Design, build, and maintain secure, scalable data … pipelines and services
• Ingest, transform, and model structured and unstructured data for analytics and ML
• Work with technologies like Apache Spark, Apache Iceberg, Trino, NiFi, OpenSearch, and AWS EMR
• Ensure data integrity, lineage, and security across the entire lifecycle
• Collaborate with DevOps to deploy containerized data solutions using Kubernetes
• Support Agile delivery, version control, and data governance
London, England, United Kingdom Hybrid / WFH Options
Experteer Italy
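For context, a minimal sketch of the kind of Spark-to-Iceberg batch pipeline this role describes. The catalog name, bucket paths, table name, and package version are hypothetical placeholders, not details from the posting:

```python
# Minimal sketch: Spark batch job ingesting JSON and writing an Iceberg table.
# All names (catalog "demo", S3 paths, table) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("ingest-events")
    # Pull the Iceberg Spark runtime; version/Scala build is an assumption.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    # Register an Iceberg catalog backed by a warehouse directory.
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Ingest raw JSON, apply a light transformation, and model it as a table.
raw = spark.read.json("s3://example-bucket/landing/events/")
cleaned = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Create or replace the Iceberg table; Iceberg keeps snapshot metadata,
# which is what gives you the lineage/time-travel properties listed above.
cleaned.writeTo("demo.analytics.events").createOrReplace()
```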
Expertise in data warehousing, data modelling, and data integration.
* Experience in MLOps and machine learning pipelines.
* Proficiency in SQL and data manipulation languages.
* Experience with big data platforms (including Apache Arrow, Apache Spark, Apache Iceberg, and ClickHouse) and cloud-based infrastructure on AWS.
Education & Qualifications
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
standards. Develop and deliver documentation for each project, including ETL mappings, a code use guide, code location, and access instructions. Design and optimize data pipelines using tools such as Spark, Apache Iceberg, Trino, OpenSearch, EMR cloud services, NiFi, and Kubernetes containers. Ensure the pedigree and provenance of the data are maintained such that access to the data is protected.
Built platforms that become reusable foundations for others. Technical Capabilities: 6+ years of experience in data engineering or distributed systems. Deep proficiency in Python and SQL. Fluent in tools like Apache Spark, Beam, Flink, or Kafka. Experience with cloud data warehouses (BigQuery, Snowflake) and orchestration tools (Airflow, Prefect, Dagster). Understanding of data lakehouse patterns (Iceberg, Delta Lake), vector stores …
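As a reference point for the orchestration tools named above, here is a minimal Airflow DAG sketch. The DAG id, schedule, and task bodies are hypothetical placeholders:

```python
# Minimal Airflow 2.x DAG sketch: a daily extract -> transform/load chain.
# DAG id, schedule, and stubbed task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> str:
    # Pull a batch from a source system (stubbed here).
    return "s3://example-bucket/landing/batch.json"


def transform_and_load(path: str) -> None:
    # A real pipeline would hand off to Spark/Beam/etc.; stubbed here.
    print(f"processing {path}")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(
        task_id="transform_and_load",
        python_callable=transform_and_load,
        op_args=["s3://example-bucket/landing/batch.json"],
    )
    # Declare the dependency so load only runs after extract succeeds.
    extract_task >> load_task
```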
learning, and large-scale data processing. You'll be working closely with Engineering, Product, Security, and Compliance leadership to design a cloud-native data architecture using tools like Kafka, Iceberg, ClickHouse, Tinybird, and Snowflake. If building at scale, tackling complex streaming + batch challenges, and driving long-term architecture sounds exciting to you, this might be the right fit. … a growing Data Platform team. Own architecture and technical vision from end to end. Work on real-time and petabyte-scale data problems. Use a modern data stack (Kafka, Iceberg, ClickHouse, Snowflake, etc.). Help shape the platform that will power AI/ML and analytics across the company. What You'll Be Doing: Architect and build the company's … platform infrastructure roles. At least 5 years operating at a Principal level or equivalent. Deep experience with Kafka and real-time data pipelines. Solid hands-on work with ClickHouse, Iceberg, Snowflake, or similar tools. Strong understanding of cloud-native architectures (AWS, GCP, or Azure). Experience building systems with compliance, governance, and scale in mind. Proven ability to support petabyte-scale …
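For the real-time Kafka pipelines this role centers on, a minimal consumer sketch using the confluent-kafka client. Broker address, topic, and group id are hypothetical placeholders:

```python
# Minimal sketch of a streaming consumer feeding a data platform.
# Broker, topic, and group id are hypothetical placeholders.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "data-platform-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            # Surface broker/partition errors rather than silently dropping them.
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # Downstream, records like this would land in Iceberg/ClickHouse/etc.
        print(event)
finally:
    consumer.close()
```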
The role also involves optimizing database architecture and performance, implementing DevSecOps practices, and building CI/CD pipelines using Python, Bash, and Terraform. Preferred candidates will have experience with Apache Spark, Apache NiFi, data governance, and ETL standardization. Familiarity with Glue, Hive, and Iceberg or similar technologies is a plus (a sketch follows this list). Tasks Performed:
• Bridge communication between technical staff … data between systems, and optimize queries.
• Plan and execute large-scale data migrations.
• Improve database performance through architecture and tuning.
• Create and maintain data flows using ETL tools like Apache NiFi.
• Manage infrastructure as code using Python, Bash, and Terraform.
• Integrate security into development and deployment workflows.
• Build and support automated CI/CD pipelines.
Education, Experience and Qualifications … SQL databases.
• Demonstrated experience in large-scale data migration efforts.
• Demonstrated experience with database architecture, performance design methodologies, and system-tuning recommendations. Preference for familiarity with Glue, Hive, and Iceberg or similar.
• Demonstrated experience with Python, Bash, and Terraform.
• Demonstrated experience with DevSecOps solutions and tools.
• Demonstrated experience implementing CI/CD pipelines using industry-standard processes.
• Demonstrated experience …
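A minimal sketch of the Glue-catalog side of this work: listing tables and their storage locations from Python with boto3. The database name and region are hypothetical placeholders:

```python
# Minimal sketch: inspect the AWS Glue Data Catalog from Python.
# Database name and region are hypothetical placeholders.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Page through tables in one catalog database and report storage locations;
# for Iceberg/Hive tables this is where the warehouse path is registered.
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="analytics"):
    for table in page["TableList"]:
        location = table.get("StorageDescriptor", {}).get("Location", "n/a")
        print(f'{table["Name"]}: {location}')
```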
Mandatory) Demonstrated experience in large-scale data migration efforts. (Mandatory) Demonstrated experience with database architecture, performance design methodologies, and system-tuning recommendations; preference for familiarity with Glue, Hive, and Iceberg or similar. Demonstrated experience with Python, Bash, and Terraform. Demonstrated experience with DevSecOps solutions and tools. Demonstrated experience implementing CI/CD pipelines using industry-standard processes. Desired Skills … Demonstrated experience with Data Quality and Data Governance concepts. Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools. Demonstrated experience with Apache Spark.
Demonstrated experience in large-scale data migration efforts. 5. (Mandatory) Demonstrated experience with database architecture, performance design methodologies, and system-tuning recommendations; preference for familiarity with Glue, Hive, and Iceberg or similar. 6. (Mandatory) Demonstrated experience with Python, Bash, and Terraform. 7. (Mandatory) Demonstrated experience with DevSecOps solutions and tools. 8. (Mandatory) Demonstrated experience implementing CI/CD pipelines … with Data Quality and Data Governance concepts. 11. (Desired) Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools. 12. (Desired) Demonstrated experience with Apache Spark.
isn't just another data role; this is your chance to engineer solutions that truly matter. Key Responsibilities: Design, develop, and optimize scalable data pipelines using technologies such as Apache Spark, Apache Iceberg, Trino, OpenSearch, AWS EMR, NiFi, and Kubernetes containers. Ingest and move structured and unstructured data using approved methods into enterprise or local storage systems. … AWS, Kubernetes). Deep understanding of working with diverse data types and formats, including structured, semi-structured, and unstructured data. Familiarity with data ingestion tools and platforms such as Apache NiFi, Spark, and related open-source technologies. Demonstrated ability to collaborate across teams, including data scientists, software engineers, data stewards, and mission partners. Knowledge of data governance principles, metadata …
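For the Trino-over-Iceberg part of this stack, a minimal query sketch using the trino-python-client package. Host, catalog, schema, and table names are hypothetical placeholders:

```python
# Minimal sketch: query an Iceberg table through Trino.
# Host, catalog, schema, and table are hypothetical placeholders.
import trino  # from the trino-python-client package

conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="data-engineer",
    catalog="iceberg",
    schema="analytics",
)
cur = conn.cursor()

# Trino plans and federates the query; Iceberg metadata drives
# partition pruning so only relevant data files are scanned.
cur.execute(
    "SELECT event_date, count(*) AS events "
    "FROM events WHERE event_date >= DATE '2024-01-01' "
    "GROUP BY event_date ORDER BY event_date"
)
for row in cur.fetchall():
    print(row)
```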
Demonstrated experience in large-scale data migration efforts. 5. (Mandatory) Demonstrated experience with database architecture, performance design methodologies, and system-tuning recommendations; preference for familiarity with Glue, Hive, and Iceberg or similar. 6. (Mandatory) Demonstrated experience with Python, Bash, and Terraform. 7. (Mandatory) Demonstrated experience with DevSecOps solutions and tools. 8. (Mandatory) Demonstrated experience implementing CI/CD pipelines … with Data Quality and Data Governance concepts. 11. (Desired) Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools. 12. (Desired) Demonstrated experience with Apache Spark. MUST be a US Citizen with a U.S. Government clearance (Intel with Polygraph). NOTE: Must have …
standard processes. TS/SCI with poly required to start. Desired Experience: Experience with the Sponsor's data environment and on-premises compute structure. Experience with Glue, Hive, and Iceberg or similar technologies. Experience with Terraform. Experience with DevSecOps solutions and tools. Experience with Data Quality and Data Governance concepts. Experience maintaining, supporting, and improving the ETL process using Apache NiFi or similar tools. Experience with Apache Spark. Equal Opportunity Employer/Veterans/Disabled. Accommodations: If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to access job openings or apply for a job on this …
environment. Proficiency in Spark/PySpark, Azure data technologies, Python or Scala, and SQL. Experience with testing frameworks like pytest or ScalaTest. Knowledge of open table formats such as Delta, Iceberg, or Apache Hudi. Experience with CI/CD workflows using Azure DevOps, GitHub Actions, and version control systems like Git. Understanding of cloud infrastructure and Infrastructure as Code … Scrum or Kanban. Nice-to-have skills: Experience in retail or e-commerce. Knowledge of Big Data and Distributed Computing. Familiarity with streaming technologies like Spark Structured Streaming or Apache Flink. Additional programming skills in PowerShell or Bash. Understanding of Databricks Ecosystem components. Experience with Data Observability or Data Quality frameworks. Additional Information: What's in it for you …
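To illustrate the pytest-style testing this role asks for, a minimal sketch of a unit test around a Spark transformation. The transformation and table schema are hypothetical placeholders:

```python
# Minimal pytest sketch for a PySpark transformation.
# The dedupe_orders function and its schema are hypothetical placeholders.
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps the test self-contained.
    session = (
        SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    )
    yield session
    session.stop()


def dedupe_orders(df):
    # Toy transformation under test: drop duplicate order ids.
    return df.dropDuplicates(["order_id"])


def test_dedupe_orders_removes_duplicates(spark):
    df = spark.createDataFrame(
        [(1, "a"), (1, "a"), (2, "b")], ["order_id", "item"]
    )
    result = dedupe_orders(df)
    assert result.count() == 2
```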
Demonstrated experience in large-scale data migration efforts. 5. (Mandatory) Demonstrated experience with database architecture, performance design methodologies, and system-tuning recommendations; preference for familiarity with Glue, Hive, and Iceberg or similar. 6. (Mandatory) Demonstrated experience with Python, Bash, and Terraform. 7. (Mandatory) Demonstrated experience with DevSecOps solutions and tools. 8. (Mandatory) Demonstrated experience implementing CI/CD pipelines … with Data Quality and Data Governance concepts. 11. (Desired) Demonstrated experience maintaining, supporting, and improving the ETL process through the implementation and standardization of data flows with Apache NiFi and other ETL tools. 12. (Desired) Demonstrated experience with Apache Spark. B4CORP Company Information: B4Corp is a small defense contracting company that focuses on providing an optimum …
test enterprise-level software solutions using tools and techniques such as BDD, Data Reconciliation, Source Control, TDD, and Jenkins. Documenting configurations, processes, and best practices. Knowledge of file formats such as JSON, Iceberg, and Avro. Basic knowledge of AWS technologies like IAM roles, Lake Formation, Security Groups, CloudFormation, and Redshift. Big Data/Data Warehouse testing experience. Experience in the financial services domain. Mentoring experience.
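As one way to read the "Data Reconciliation" technique named above, a minimal sketch comparing row counts and an order-independent content fingerprint between a source and a target system. The helper names and sample rows are hypothetical placeholders:

```python
# Minimal data-reconciliation sketch: compare row counts and per-row
# hash fingerprints between source and target result sets.
# Function names and sample rows are hypothetical placeholders.
import hashlib


def table_fingerprint(rows):
    """Order-independent fingerprint: XOR of per-row SHA-256 hashes."""
    acc = 0
    for row in rows:
        digest = hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()
        acc ^= int(digest, 16)
    return acc


def reconcile(source_rows, target_rows):
    # Cheap check first, then content comparison.
    assert len(source_rows) == len(target_rows), "row counts differ"
    assert table_fingerprint(source_rows) == table_fingerprint(target_rows), (
        "content differs despite matching counts"
    )


# Usage: feed the same query's results from both systems; row order
# doesn't matter because the fingerprint is XOR-combined.
reconcile([(1, "a"), (2, "b")], [(2, "b"), (1, "a")])
```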