robust way possible! Diverse training opportunities and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, OpenSearch. Proficiency in cloud-native technologies such as containerization and Kubernetes. Strong …
production issues. Optimize applications for performance and responsiveness. Stay Up to Date with Technology: Keep yourself and the team updated on the latest Python technologies, frameworks, and tools like Apache Spark, Databricks, Apache Pulsar, Apache Airflow, Temporal, and Apache Flink, sharing knowledge and suggesting improvements. Documentation: Contribute to clear and concise documentation for software, processes, and … Experience with cloud platforms like AWS, GCP, or Azure. DevOps Tools: Familiarity with containerization (Docker) and infrastructure automation tools like Terraform or Ansible. Real-time Data Streaming: Experience with Apache Pulsar or similar systems for real-time messaging and stream processing is a plus. Data Engineering: Experience with Apache Spark, Databricks, or similar big data platforms for processing … large datasets, building data pipelines, and machine learning workflows. Workflow Orchestration: Familiarity with tools like Apache Airflow or Temporal for managing workflows and scheduling jobs in distributed systems. Stream Processing: Experience with Apache Flink or other stream processing frameworks is a plus. Desired Skills: Asynchronous Programming: Familiarity with asynchronous programming tools like Celery or asyncio. Frontend Knowledge: Exposure …
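The asynchronous-programming item in the listing above names asyncio. As a hedged illustration of the pattern it refers to, a minimal stdlib-only sketch; the message values and the sleep standing in for real I/O are hypothetical:

```python
# Minimal sketch of concurrent I/O-bound work with asyncio (stdlib only).
# The messages and the sleep are hypothetical stand-ins for real API calls.
import asyncio


async def process(message: str) -> str:
    # Placeholder for I/O-bound work (e.g. an HTTP request or DB write).
    await asyncio.sleep(0.1)
    return message.upper()


async def main() -> None:
    messages = ["a", "b", "c"]
    # Run the I/O-bound tasks concurrently rather than one after another.
    results = await asyncio.gather(*(process(m) for m in messages))
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```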
London (City of London), South East England, United Kingdom
HCLTech
data cataloging and metadata management using tools like AWS Glue Data Catalog. Demonstrated self-sufficiency in exploring new tools, troubleshooting issues, and continuously improving processes. Hands-on experience with Apache Airflow for orchestrating complex data workflows and ensuring reliable execution. Understanding of cloud security and governance practices including IAM, KMS, and data access policies. Experience with monitoring and observability …
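Several of these listings, including the one above, ask for Apache Airflow orchestration with retries and reliable execution. A minimal DAG sketch of what that typically looks like; the dag_id, schedule, and task bodies are hypothetical, not taken from any listing:

```python
# Minimal illustrative Airflow 2.x DAG; names and logic are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a source system.
    print("extracting")


def transform(**context):
    # Placeholder: clean and reshape the extracted records.
    print("transforming")


with DAG(
    dag_id="example_daily_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    # Explicit ordering plus retries is the usual baseline for "reliable execution".
    extract_task >> transform_task
```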
in AWS. Strong expertise with AWS services, including Glue, Redshift, Data Catalog, and large-scale data storage solutions such as data lakes. Proficiency in ETL/ELT tools (e.g. Apache Spark, Airflow, dbt). Skilled in data processing languages such as Python, Java, and SQL. Strong knowledge of data warehousing, data lakes, and data lakehouse architectures. Excellent analytical and …
/medical devices preferred but not required) Strong Python programming and data engineering skills (Pandas, PySpark, Dask) Proficiency with databases (SQL/NoSQL), ETL processes, and modern data frameworks (Apache Spark, Airflow, Kafka) Solid experience with cloud platforms (AWS, GCP, or Azure) and CI/CD for data pipelines Understanding of data privacy and healthcare compliance (GDPR, HIPAA, ISO …
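To ground the Python/PySpark pipeline skills this listing (and several others here) calls for, a minimal batch extract-transform-load sketch; the paths, column names, and app name are hypothetical:

```python
# Minimal PySpark batch-pipeline sketch; requires a local pyspark installation.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV (path is a placeholder).
raw = spark.read.option("header", "true").csv("/data/raw/events.csv")

# Transform: basic cleaning and a derived partition column.
clean = (
    raw.dropna(subset=["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream consumers.
clean.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/events")

spark.stop()
```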
data quality, or other areas directly relevant to data engineering responsibilities and tasks Proven project experience developing and maintaining data warehouses in big data solutions (Snowflake) Expert knowledge in Apache technologies such as Kafka, Airflow, and Spark to build scalable and efficient data pipelines Ability to design, build, and deploy data solutions that capture, explore, transform, and utilize data …
the first time, Dev10 equips you with the tools to succeed. You'll train in technologies including Relational and Non-Relational Databases, APIs, Python, Pandas, Excel, Dash, Kafka, Airflow, Apache Spark, and Machine Learning. Now hiring for our Data Engineer cohort starting Monday, December 1, 2025. To apply, please submit a formal application through our website: Dev10 Apply Here …
Linux-based, concurrent, high-throughput, low-latency software systems Experience with pipeline orchestration frameworks (e.g. Airflow, Dagster) Experience with streaming platforms (e.g. Kafka), data lake platforms (e.g. Delta Lake, Apache Iceberg), and relational databases Have a Bachelor's or advanced degree in Computer Science, Mathematics, Statistics, Physics, Engineering, or equivalent work experience For more information about DRW's processing activities …
MySQL Exposure to Docker, Kubernetes, AWS, Helm, Terraform, Vault, Grafana, ELK Stack, New Relic Relevant experience in the maintenance of data APIs and data lake architectures, including experience with Apache Iceberg, Trino/Presto, ClickHouse, Snowflake, BigQuery. Master's degree in Computer Science or Engineering-related field Get to know us better: YouGov is a global online research company …
Falls Church, Virginia, United States Hybrid / WFH Options
Rackner
in Kubernetes (AWS EKS, Rancher) with CI/CD. Apply DevSecOps + security-first practices from design to delivery. Tech You'll Touch: AWS, Python, FastAPI, Node.js, React, Terraform, Apache Airflow, Trino, Spark, Hadoop, Kubernetes. You Have: Active Secret Clearance; 3+ years in Agile, cloud-based data engineering; Experience with API design, ORM + SQL, AWS data services. Bonus …
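For the FastAPI item in the stack above, a minimal service sketch may help calibrate the expected level; the routes and model are hypothetical, not part of the listing:

```python
# Minimal FastAPI service sketch; run with: uvicorn app:app --reload
# The Item model and routes are hypothetical placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    value: float


@app.get("/health")
def health() -> dict[str, str]:
    # Liveness endpoint, the usual first route in a cloud-deployed service.
    return {"status": "ok"}


@app.post("/items")
def create_item(item: Item) -> Item:
    # Placeholder: validate via the pydantic model, persist, then echo back.
    return item
```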
Basingstoke, Hampshire, South East, United Kingdom
Anson McCade
processes. Monitor integration health and implement alerting, logging, and performance tracking. Contribute to continuous improvement of integration architecture and practices. Key Skills & Experience: Experience with workflow orchestration tools (e.g., Apache Airflow). Proven backend development skills using Node.js, Python, Java, or similar. Strong understanding of API design and integration techniques (REST, Webhooks, GraphQL). Familiarity with authentication protocols (OAuth2 …
Salary: €50,000 to €60,000 per year Requirements: • 3+ years of hands-on experience as a Data Engineer working with Databricks and Apache Spark • Strong programming skills in Python, with experience in data manipulation libraries (e.g., PySpark, Spark SQL) • Experience with core components of the Databricks ecosystem: Databricks Workflows, Unity Catalog, and Delta Live Tables • Solid understanding of data warehousing principles …
problem-solving skills We'd love to see: Knowledge of database systems and trade-offs in distributed systems. Familiarity with API design. Familiarity with orchestration frameworks such as Apache Airflow, Argo Workflows, Conductor, etc. Experience working with and designing systems utilizing AWS. Bloomberg is an equal opportunity employer and we value diversity at our company. We do not …
data engineering tasks. Experience building and maintaining web scraping pipelines. Strong SQL skills, with expertise in performance tuning. Strong proficiency with dbt for data transformations. Hands-on experience with Apache Airflow or Prefect. Proficiency with GitHub, GitHub Actions, and CI/CD pipelines. Nice to have: Experience with GCP (BigQuery, Dataflow, Composer, Pub/Sub) or AWS. Familiarity with …
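Since this listing pairs scraping pipelines with Airflow or Prefect, a minimal Prefect 2.x flow sketch of that shape; the task bodies, names, and retry count are hypothetical placeholders:

```python
# Minimal Prefect 2.x flow sketch; tasks and data are hypothetical placeholders.
from prefect import flow, task


@task(retries=2)
def scrape() -> list[str]:
    # Placeholder for a web-scraping step; retries cover transient failures.
    return ["row1", "row2"]


@task
def load(rows: list[str]) -> None:
    # Placeholder for loading the scraped rows into a warehouse.
    print(f"loaded {len(rows)} rows")


@flow
def scraping_pipeline() -> None:
    # Prefect infers the task dependency from the data flow here.
    rows = scrape()
    load(rows)


if __name__ == "__main__":
    scraping_pipeline()
```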
Newbury, Berkshire, England, United Kingdom Hybrid / WFH Options
Intuita
Azure Storage, Medallion Architecture, and working with data formats such as JSON, CSV, and Parquet. • Strong understanding of IT concepts, including security, IAM, Key Vault, and networking. • Exposure to Apache Airflow and dbt is a bonus. • Familiarity with agile principles and practices. • Experience with Azure DevOps pipelines. The "Nice to Haves": • Certification in Azure or related technologies. • Experience with …
field. Technical Skills Required: Hands-on software development experience with Python and experience with modern software development and release engineering practices (e.g. TDD, CI/CD). Experience with Apache Spark or any other distributed data programming frameworks. Comfortable writing efficient SQL and debugging on cloud warehouses like Databricks SQL or Snowflake. Experience with cloud infrastructure like AWS or …
Sterling, Virginia, United States Hybrid / WFH Options
Progression Inc
Active IRS - MBI is Required (badging could take 1-6 months without) 5+ years of Data Engineer/Software Development experience 5+ years Python and Java 5+ years with Apache Spark (Spark SQL, Spark Streaming, data transformations, performance optimization, and tuning) 5+ years experience with cloud-based data platforms (Databricks or AWS EMR) Proficiency with Git and version control …