continue to scale their data infrastructure, they're seeking a seasoned Principal Data Engineer to help design, build, and optimise a modern data platform, with a focus on orchestration (Airflow), scalable data warehouse architecture, and high-performance pipelines. The ideal candidate will bring both technical depth and strategic thinking, with the ability to communicate effectively across business and technical … warehouse solutions for BI and analytics. Define and drive the long-term architecture and data strategy in alignment with business goals. Own orchestration of ETL/ELT workflows using Apache Airflow, including scheduling, monitoring, and alerting. Collaborate with cross-functional teams (Product, Engineering, Data Science, Compliance) to define data requirements and build reliable data flows. Champion best practices … Proven experience designing and delivering enterprise data strategies. Exceptional communication and stakeholder management skills. Expertise in enterprise-grade data warehouses (Snowflake, BigQuery, Redshift). Hands-on experience with Apache Airflow (or similar orchestration tools). Strong proficiency in Python and SQL for pipeline development. Deep understanding of data architecture, dimensional modelling, and metadata management. Experience with cloud …
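The orchestration duty this role describes (scheduling, monitoring, and alerting for ETL/ELT workflows in Apache Airflow) might look roughly like the minimal sketch below. The DAG id, task callables, and alert address are illustrative assumptions rather than anything from the listing, and the `schedule` argument assumes Airflow 2.4 or later:

```python
# Minimal Airflow DAG sketch covering scheduling, monitoring and alerting;
# all names here are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder extract step -- in practice this would pull from a source system.
    print("extracting orders for", context["ds"])


def load_warehouse(**context):
    # Placeholder load step -- e.g. a COPY into Snowflake/BigQuery/Redshift.
    print("loading warehouse for", context["ds"])


default_args = {
    "owner": "data-platform",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "email": ["data-alerts@example.com"],  # hypothetical alert address
    "email_on_failure": True,              # alerting: email on task failure
}

with DAG(
    dag_id="orders_elt",                   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                  # scheduling: daily at 02:00
    default_args=default_args,
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)
    extract >> load
```

Monitoring in practice comes from the Airflow UI and task logs; the retry and email settings above are the simplest form of the alerting the role mentions.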
with clients - Collaborating with cross-functional teams to deploy and operate solutions in production - Supporting real-time and near-real-time data analytics initiatives - Leveraging orchestration tools such as Airflow, Dagster, Azure Data Factory or Fivetran Required qualifications to be successful in this role - Solid experience designing and delivering Snowflake-based data warehouse solutions - Strong background performing architectural assessments … Python, Java or Scala - Hands-on experience using dbt for pipeline development and transformation - Familiarity with cloud platforms such as AWS, Azure or GCP - Knowledge of orchestration tooling (e.g., Airflow, Dagster, Azure Data Factory, Fivetran) Desirable: - Experience deploying AI/ML models in production environments - Familiarity with AWS data services (e.g., S3, Glue, Kinesis, Athena) - Exposure to real-time …
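Since Dagster appears in both the responsibilities and the tooling list above, here is a minimal sketch of its asset-based orchestration style; the asset names, toy data, and the pandas transform are illustrative assumptions standing in for real source tables and dbt models:

```python
# Minimal Dagster sketch: two software-defined assets with a dependency,
# declared by parameter name. All data here is a hypothetical placeholder.
import pandas as pd
from dagster import asset, materialize


@asset
def raw_orders() -> pd.DataFrame:
    # Placeholder extract -- in practice this might read Fivetran-landed tables.
    return pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 25.0]})


@asset
def daily_revenue(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Placeholder transform -- in a Snowflake stack this would typically be a dbt model.
    return pd.DataFrame({"total_amount": [raw_orders["amount"].sum()]})


if __name__ == "__main__":
    # Materialise both assets in-process for a quick local check.
    result = materialize([raw_orders, daily_revenue])
    print(result.success)
```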
Experience in using modern data architectures, such as lakehouse. Experience with CI/CD pipelines and version control systems like Git. Knowledge of ETL tools and technologies such as Apache Airflow, Informatica, or Talend. Knowledge of data governance and best practices in data management. Familiarity with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions. Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues. SQL (for database management and querying) Apache Spark (for distributed data processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming) Experience using data tools in at least one cloud service - AWS, Azure or GCP …
data architectures, such as lakehouse. Experience with CI/CD pipelines, version control systems like Git, and containerization (e.g., Docker). Experience with ETL tools and technologies such as Apache Airflow, Informatica, or Talend. Strong understanding of data governance and best practices in data management. Experience with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions. Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues. SQL (for database management and querying) Apache Spark (for distributed data processing) Apache Spark Streaming, Kafka or similar (for real-time data streaming) Experience using data tools in at least one cloud service - AWS, Azure or …
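Both of the listings above pair Spark with Kafka for real-time data streaming; a minimal PySpark Structured Streaming sketch of that combination follows. The broker address and topic name are illustrative assumptions, and the job needs the spark-sql-kafka connector package on the classpath:

```python
# Minimal sketch of consuming a Kafka topic with Spark Structured Streaming;
# broker, topic and sink choices are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("orders-stream")  # hypothetical job name
    .getOrCreate()
)

# Read a stream of events from Kafka.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
)

# Kafka keys/values arrive as bytes; cast to strings before downstream parsing.
parsed = events.select(col("key").cast("string"), col("value").cast("string"))

# Console sink keeps the sketch self-contained; production jobs would write
# to a warehouse, lake table, or another topic instead.
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```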
Liverpool, Lancashire, United Kingdom Hybrid / WFH Options
Intuita - Vacancies
Azure Storage, Medallion Architecture, and working with data formats such as JSON, CSV, and Parquet. • Strong understanding of IT concepts, including security, IAM, Key Vault, and networking. • Exposure to Apache Airflow and dbt is a bonus. • Familiarity with agile principles and practices. • Experience with Azure DevOps pipelines. The "Nice to Haves": • Certification in Azure or related technologies. • Experience …
building production data pipelines Advanced Python skills (NumPy, Pandas, SQLAlchemy) and expert-level SQL across multiple database platforms Hands-on experience with modern data stack tools including dbt, Airflow, and cloud data warehouses (Snowflake, BigQuery, Redshift) Strong understanding of data modelling, schema design, and building maintainable ELT/ETL pipelines Experience with cloud platforms (AWS, Azure, GCP) and …
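For the "Pandas + SQLAlchemy" combination this listing names, a compact extract-transform-load step might look like the sketch below; the connection string and table names are illustrative assumptions, not part of the role:

```python
# Minimal Pandas + SQLAlchemy ELT sketch; DSN and table names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@localhost:5432/analytics")  # hypothetical DSN

# Extract: pull raw rows with plain SQL.
raw = pd.read_sql("SELECT order_id, amount, created_at FROM raw_orders", engine)

# Transform: a simple daily aggregate standing in for real business logic.
daily = (
    raw.assign(order_date=pd.to_datetime(raw["created_at"]).dt.date)
       .groupby("order_date", as_index=False)["amount"].sum()
)

# Load: write the derived table back for BI consumption.
daily.to_sql("daily_revenue", engine, if_exists="replace", index=False)
```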
systems, with a focus on data quality and reliability. Design and manage data storage solutions, including databases, warehouses, and lakes. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads. Operations & Tooling Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency. Implement data governance, access controls, and … ELT pipelines and data architectures. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch. Strong understanding of data governance, security, and best practices for data quality. Effective communicator with the ability to work across technical and non-technical teams. … Additional Strengths Experience with orchestration tools like Apache Airflow. Knowledge of real-time data processing and event-driven architectures. Familiarity with observability tools and anomaly detection for production systems. Exposure to data visualization platforms such as Tableau or Looker. Relevant cloud or data engineering certifications. What we offer: A collaborative and transparent company culture founded on Integrity, Innovation and …
Deep understanding of software architecture, object-oriented design principles, and data structures Extensive experience in developing microservices using Java, Python Experience with distributed computing frameworks like Hive/Hadoop and Apache Spark Good experience in test-driven development and automating test cases using Java/Python Experience in SQL/NoSQL (Oracle, Cassandra) database design Demonstrated ability to be proactive … HR related applications Experience with the following cloud services: AWS Elastic Beanstalk, EC2, S3, CloudFront, RDS, DynamoDB, VPC, ElastiCache, Lambda Working experience with Terraform Experience in creating workflows for Apache Airflow About Roku Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and …
Snowflake Expertise with common software engineering languages such as Python, Scala, Java, SQL and a proven ability to learn new programming languages Experience with workflow orchestration tools such as Airflow Detailed problem-solving approach, coupled with a strong sense of ownership and drive A bias to action and a passion for delivering high-quality data solutions Deep understanding of … certification/s Strong data visualization skills to convey information and results clearly Experience with DevOps tools such as Docker, Kubernetes, Jenkins, etc. Experience with event messaging frameworks like Apache Kafka The hiring range for this position in Santa Monica, California is $136,038 to $182,490 per year, in Glendale, California is $136,038 to $182,490 per …
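For the "event messaging frameworks like Apache Kafka" point above, a minimal producer sketch using the kafka-python client is shown below; the broker address, topic, and payload are illustrative assumptions:

```python
# Minimal Kafka producer sketch (kafka-python client); broker, topic and
# payload are hypothetical placeholders.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",  # hypothetical broker
    # Serialize dict payloads as UTF-8 JSON bytes.
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish one event to a hypothetical "orders" topic, then flush buffered sends.
producer.send("orders", {"order_id": 1, "amount": 10.0})
producer.flush()
```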
or MS degree in Computer Science or equivalent Experience in developing Finance or HR related applications Working experience with Tableau Working experience with Terraform Experience in creating workflows for Apache Airflow and Jenkins Benefits Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our …
qualifications: Bachelor's Degree in an analytical field such as computer science, statistics, finance, economics or a relevant area. Entry-level experience with the Hadoop ecosystem and associated technologies (e.g., Apache Spark, MLlib, GraphX, IPython, scikit-learn, Pandas). Working knowledge of writing and optimizing efficient SQL queries with Python, Hive, and Scala, handling large data sets in big-data environments. …
retrieval and pipeline development Experience with IaC tools such as Terraform or Ansible for deployment and infrastructure management Hands-on experience with: ETL/ELT orchestration and pipeline tools (Airflow, Airbyte, dbt, etc.) Data warehousing tools and platforms (Snowflake, Iceberg, etc.) SQL databases, particularly MySQL Desired Experience: Experience with cloud-based services, particularly AWS Proven ability to manage stakeholders …
with multiple languages • Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant) • Experience working with process-scheduling platforms like Apache Airflow • Should be ready to work with GS proprietary technologies such as Slang/SecDB • An understanding of compute resources and the ability to interpret performance metrics (e.g., CPU, memory …
Cloud Data Warehouse - Snowflake AWS Data Solutions - Kinesis, SNS, SQS, S3, ECS, Lambda Data Governance & Quality - Collate & Monte Carlo Infrastructure as Code - Terraform Data Integration & Transformation - Python, dbt, Fivetran, Airflow CI/CD - GitHub Actions/Jenkins Nice to Have Experience Understanding of various data architecture paradigms (e.g., Data Lakehouse, Data Warehouse, Data Mesh) and their applicability to different …
Cloud Data Warehouse - Snowflake AWS Data Solutions - Kinesis, SNS, SQS, S3, ECS, Lambda Data Governance & Quality - Collate & Monte Carlo Infrastructure as Code - Terraform Data Integration & Transformation - Python, dbt, Fivetran, Airflow CI/CD - GitHub Actions/Jenkins Business Intelligence - Looker Skills & Attributes We'd Like To See: Extensive experience in data engineering, including designing and maintaining robust data pipelines. …
Azure). Familiarity with version control (Git) and modern deployment workflows Excellent problem-solving skills and ability to resolve data issues quickly and effectively Experience with data pipeline orchestration (Airflow or similar is a plus) Nice-to-have experience: Exposure to other parts of the modern data stack (e.g., Fivetran, dbt Metrics Layer, Tableau) Experience in CI/CD …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
in the office on average. In this role you will be working within a new, greenfield division of the business, using a brand-new technology stack including Snowflake, dbt, Airflow and AWS. This function provides data for Machine Learning and Artificial Intelligence capabilities, helping them to provide the best possible service offering to their customers. You'll work on … a strong financial backing, and the chance to make a real impact! We're looking for the following experience: Extensive hands-on experience with Snowflake Extensive experience with dbt, Airflow, AWS and Terraform Excellent scripting skills in SQL Experience developing solutions entirely from scratch Great communication skills, with the ability to understand and translate complex requirements into technical solutions …
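For the Snowflake and SQL-scripting side of this stack, a minimal sketch of running scripted SQL against Snowflake from Python (via the official snowflake-connector-python package) is below; the credentials and object names are illustrative assumptions, and in a dbt/Airflow stack like the one described, dbt would normally own the transformation shown:

```python
# Minimal Snowflake scripting sketch; account, credentials and table names
# are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)

try:
    cur = conn.cursor()
    # Build a derived table; a dbt model would normally own this logic.
    cur.execute(
        """
        CREATE OR REPLACE TABLE daily_revenue AS
        SELECT order_date, SUM(amount) AS total_amount
        FROM raw_orders
        GROUP BY order_date
        """
    )
finally:
    conn.close()
```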
TransferGo. Our products are recognised by industry leaders like Gartner's Magic Quadrant, Forrester Wave and Frost Radar. Our tech stack: Superset and similar data visualisation tools. ETL tools: Airflow, dbt, Airbyte, Flink, etc. Data warehousing and storage solutions: ClickHouse, Trino, S3. AWS Cloud, Kubernetes, Helm. Relevant programming languages for data engineering tasks: SQL, Python, Java, etc. What you … methodologies. Collaborating with stakeholders to define data strategies, implement data governance policies, and ensure data security and compliance. About you: Strong technical proficiency in data engineering technologies, such as Apache Airflow, ClickHouse, ETL tools, and SQL databases. Deep understanding of data modeling, ETL processes, data integration, and data warehousing concepts. Proficiency in programming languages commonly used in data …
Skills: Proven expertise in designing, building, and operating data pipelines, warehouses, and scalable data architectures. Deep hands-on experience with modern data stacks. Our tech includes Python, SQL, Snowflake, Apache Iceberg, AWS S3, PostgreSQL, Airflow, dbt, and Apache Spark, deployed via AWS, Docker, and Terraform. Experience with similar technologies is essential. Coaching & Growth Mindset: Passion for developing …
Computer Science, Engineering, or a related field, or equivalent industry experience. Preferred Qualifications Experience or interest in mentoring junior engineers. Familiarity with data-centric workflows and pipeline orchestration (e.g., Apache Airflow). Proficiency in data validation, anomaly detection, or debugging using tools like Pandas, Polars, or data.table/R. Experience working with AWS or other cloud platforms. Knowledge …
with interface/API data modeling. Knowledge of CI/CD tools like GitHub Actions or similar. AWS certifications such as AWS Certified Data Engineer. Knowledge of Snowflake, SQL, Apache Airflow, and dbt. Familiarity with Atlan for data cataloging and metadata management. Understanding of Iceberg tables. Who we are: We're a global business empowering local teams with …
infrastructure. You will have good software development experience with Python coupled with strong SQL skills. In addition, you will also have a strong desire to work with Docker, Kubernetes, Airflow and AWS data technologies such as Athena, Redshift, EMR and various other tools in the AWS ecosystem. You would be joining a team of 25+ engineers across mobile … skills Familiarity with continuous integration, unit testing tools and related practices Understanding of the Agile Scrum software development lifecycle What you'll be doing: Implementing and maintaining ETL pipelines using Airflow & AWS technologies Contributing to data-driven tools owned by the data engineering team, including content personalisation Responsibility for the ingestion framework and processes Helping monitor and look after our data …
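As a concrete illustration of "ETL pipelines using Airflow & AWS technologies", an Airflow task body might kick off an Athena query through boto3 as in the sketch below; the region, database, query, and results bucket are illustrative assumptions:

```python
# Minimal boto3 Athena sketch; region, database and S3 locations are
# hypothetical placeholders.
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# Start an asynchronous Athena query; results land in the given S3 prefix.
resp = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM events",            # hypothetical query
    QueryExecutionContext={"Database": "analytics"},      # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)

# The execution id would then be polled (e.g. via get_query_execution)
# by a downstream task or sensor.
print(resp["QueryExecutionId"])
```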
Maintain and develop data warehouses Provide guidance and mentorship to junior team members To be successful in this role you will have: Extensive experience with Snowflake Experience with dbt, Airflow or Python Cloud Data Engineering experience with AWS/Azure/GCP This is a hybrid role based out of the company's London office, with some of the benefits including …