San Antonio, Texas, United States Hybrid / WFH Options
IAMUS
Tools: Proficiency with Maven and GitLab. Data Formats: Familiarity with JSON, XML, SQL, and compressed file formats. Configuration Files: Experience using YAML files for data model and schema configuration. Apache NiFi: Significant experience with NiFi administration and building/troubleshooting data flows. AWS S3: bucket administration. IDE: VSCode, IntelliJ/PyCharm, or another suitable IDE. Technical Expertise: ETL creation and processing … experience in cyber/network security operations. Familiarity with Agile environments. Good communication skills. Developed documentation and training in areas of expertise. Amazon S3, SQS/SNS admin experience. Apache Airflow workloads via UI or CLI a plus. Experience with Mage AI a plus. Kubernetes, Docker.
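For context on the YAML-driven schema configuration this listing describes, a minimal sketch of that pattern is below. The dataset name, field names, and schema layout are hypothetical, not taken from the posting.

```python
# Illustrative only: a hypothetical YAML-defined schema used to validate JSON records.
import json
import yaml  # PyYAML

SCHEMA_YAML = """
dataset: network_events
fields:
  src_ip:   {type: str, required: true}
  dst_port: {type: int, required: true}
  protocol: {type: str, required: false}
"""

def validate(record: dict, schema: dict) -> list[str]:
    """Return a list of validation errors for one record."""
    errors = []
    for name, spec in schema["fields"].items():
        if name not in record:
            if spec.get("required"):
                errors.append(f"missing required field: {name}")
            continue
        expected = {"str": str, "int": int, "float": float}[spec["type"]]
        if not isinstance(record[name], expected):
            errors.append(f"{name}: expected {spec['type']}")
    return errors

schema = yaml.safe_load(SCHEMA_YAML)
record = json.loads('{"src_ip": "10.0.0.1", "dst_port": 443}')
print(validate(record, schema))  # [] -> record conforms to the schema
```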
TransferGo. Our products are recognised by industry leaders like Gartner's Magic Quadrant, Forrester Wave and Frost Radar. Our tech stack: Superset and similar data visualisation tools. ETL tools: Airflow, DBT, Airbyte, Flink, etc. Data warehousing and storage solutions: ClickHouse, Trino, S3. AWS Cloud, Kubernetes, Helm. Relevant programming languages for data engineering tasks: SQL, Python, Java, etc. What you … methodologies. Collaborating with stakeholders to define data strategies, implement data governance policies, and ensure data security and compliance. About you: Strong technical proficiency in data engineering technologies, such as Apache Airflow, ClickHouse, ETL tools, and SQL databases. Deep understanding of data modeling, ETL processes, data integration, and data warehousing concepts. Proficiency in programming languages commonly used in data …
Skills: Proven expertise in designing, building, and operating data pipelines, warehouses, and scalable data architectures. Deep hands-on experience with modern data stacks. Our tech includes Python, SQL, Snowflake, Apache Iceberg, AWS S3, PostgresDB, Airflow, dbt, and Apache Spark, deployed via AWS, Docker, and Terraform. Experience with similar technologies is essential. Coaching & Growth Mindset: Passion for developing …
containerization and CI/CD tools (e.g., Docker, GitHub Actions). Knowledge of networking and cloud infrastructure (e.g., AWS, Azure). Experience with modern data processing frameworks (e.g., dbt, Apache Airflow, Spark, or similar). Requirements: A strong focus on system observability and data quality. Emphasis on rapid scalability of solutions (consider market ramp-up when entering a …
in data platform evolution. Has experience (or strong interest) in building real-time or event-driven architectures. Modern Data Stack includes: Python, SQL, Snowflake, Postgres, AWS (S3, ECS, Terraform), Airflow, dbt, Docker, Apache Spark, Iceberg. What they're looking for: Solid experience as a Senior/Lead/Principal Data Engineer, ideally with some line management or mentoring …
Computer Science, Engineering, or a related field, or equivalent industry experience. Preferred Qualifications: Experience or interest in mentoring junior engineers. Familiarity with data-centric workflows and pipeline orchestration (e.g., Apache Airflow). Proficiency in data validation, anomaly detection, or debugging using tools like Pandas, Polars, or data.table/R. Experience working with AWS or other cloud platforms. Knowledge …
with interface/API data modeling. Knowledge of CI/CD tools like GitHub Actions or similar. AWS certifications such as AWS Certified Data Engineer. Knowledge of Snowflake, SQL, Apache Airflow, and DBT. Familiarity with Atlan for data cataloging and metadata management. Understanding of Iceberg tables. Who we are: We're a global business empowering local teams with …
Experience with rapid prototype creation and deployment is a must. Comfortable working under tight deadlines as necessary. Possible Tech Stack: Programming Languages: Python, Java, or Go. Data Engineering Tools: Apache Kafka, Airflow (for orchestration), Spark (if needed for larger datasets). OpenSearch/Elasticsearch: Indexing, querying, and optimizing. Visualization Tools: Kibana, Grafana (for more advanced visualizations), React.js. Cloud …
Chantilly, Virginia, United States Hybrid / WFH Options
Noblis
engineering, mathematics, and physics. 5+ years of professional work experience. Programming knowledge in some of the following languages: Python, SQL, Java. Strong experience with workflow orchestration tools such as Apache Airflow or Prefect. Database experience with some of the following products: Elasticsearch, PostgreSQL, MySQL, Oracle, MongoDB. Experience working with Docker, Kubernetes. Proficient with Git and pull request workflows.
infrastructure. You will have good software development experience with Python coupled with strong SQL skills. In addition, you will also have a strong desire to work with Docker, Kubernetes, Airflow and the AWS data technologies such as Athena, Redshift, EMR and various other tools in the AWS ecosystem. You would be joining a team of 25+ engineers across mobile … skills. Familiarity with continuous integration, unit testing tools and related practices. Understanding of the Agile Scrum software development lifecycle. What you'll be doing: Implementing and maintaining ETL pipelines using Airflow & AWS technologies. Contributing to data-driven tools owned by the data engineering team, including content personalisation. Responsibility for the ingestion framework and processes. Helping monitor and look after our data …
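To illustrate the kind of Airflow-orchestrated ETL work this listing refers to, here is a minimal sketch of a DAG that stages data in S3 and then loads it into Redshift. It is not the employer's actual pipeline; the DAG id, task logic, and Airflow 2.x-style imports are assumptions.

```python
# A hedged sketch of a two-step ETL DAG: extract to S3, then load into Redshift.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # Pull data from the source system and write it to a staging bucket (placeholder).
    ...

def load_to_redshift(**context):
    # Issue a COPY from the staging bucket into the warehouse table (placeholder).
    ...

with DAG(
    dag_id="content_ingestion",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    extract >> load  # load runs only after extraction succeeds
```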
Maintain and develop data warehouses. Provide guidance and mentorship to junior team members. To be successful in this role you will have: Extensive experience with Snowflake. Experience with DBT, Airflow or Python. Cloud Data Engineering experience with AWS/Azure/GCP. This is a hybrid role based at the company's London office, with some of the benefits including …
Science, Software Engineering, or a related field. Extensive experience with a broad range of data technologies, platforms, and languages, such as SQL, Python, Java, and data orchestration tools like Apache Airflow. Demonstrated expertise in modern data architecture principles, including data lakes, data warehousing, ETL processes, and real-time data processing. Proficiency in Agile methodologies tailored for data-centric projects …
Out in Science, Technology, Engineering, and Mathematics
modern data libraries (e.g. Pandas, PySpark, Dask). Strong SQL skills and experience with cloud-native data tools (AWS, GCP, or Azure). Hands-on experience with tools like Airflow, Spark, Kafka, or Snowflake. Experience working with unstructured data, NLP pipelines, and time-series databases. Familiarity with deploying AI/ML models and supporting MLOps workflows. Interest in or …
managing databases (we use Elasticsearch/MongoDB/PostgreSQL). Experience with SQL. Experience with data versioning tools. Experience developing and maintaining data infrastructure for ETL pipelines, such as Apache Airflow. EPIC JOB + EPIC BENEFITS = EPIC LIFE. We pay 100% for benefits except for PMI (for dependents). Our current benefits package includes pension, private medical insurance, health …
Excellent problem-solving and analytical skills. Strong communication and collaboration abilities to work effectively with cross-functional teams. Nice to Haves: Experience with data pipeline orchestration tools such as Apache Airflow. Knowledge of data streaming technologies like Kafka. Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data processing services. Exposure to data visualization tools and techniques …
plenty of incredible developers at Octopus Energy who are willing to teach if you're willing to learn! Required experience: Python, Git. Nice to have: SQL, dbt, GitHub, CircleCI, Airflow, Kubernetes, Terraform, a cloud warehouse provider (e.g. Databricks, GCP, Snowflake), AWS. We aren't necessarily looking for someone who is "10-out-of-10" in all these areas; but …
to talk to you if: You've led technical delivery of data engineering projects in a consultancy or client-facing environment. You're experienced with Python, SQL, .NET, dbt, Airflow and cloud-native data tools (AWS, GCP or Azure). You have strong knowledge of data architecture patterns - including Lakehouse and modern warehouse design (e.g. Snowflake, BigQuery, Databricks). You know …
data modeling best practices and version controlling of software and configuration among development, integration, and production environments. Requirements: Experience with tools for data routing and transformation such as NiFi, Airflow, or dbt. Strong in Java, Python, or SQL. ETL/ELT data pipeline development supporting data-driven applications. Proficient in scripting such as Linux shell, Python, Perl, JavaScript, or …
Out in Science, Technology, Engineering, and Mathematics
learn to name a few of the open-source libraries we use extensively. We implement the systems that require the highest data throughput in Java and C++. We use Airflow for workflow management, Kafka for data pipelines, Bitbucket for source control, Jenkins for continuous integration, Grafana + Prometheus for metrics collection, ELK for log shipping and monitoring, Docker and …
Experience with Python or other scripting languages. Good working knowledge of the AWS data management stack (RDS, S3, Redshift, Athena, Glue, QuickSight) or Google data management stack (Cloud Storage, Airflow, BigQuery, Dataplex, Looker). About Our Process: We can be flexible with the structure of our interview process if someone's circumstances or timescales require it, but our general …
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
Dept Holding B.V
architectures, data pipelines, and ETL processes. Hands-on experience with cloud platforms (GCP, AWS, Azure) and their data-specific services. Proficiency in Python, SQL, and data orchestration tools (e.g., Airflow, DBT). Experience with modern data warehouse technologies (BigQuery, Snowflake, Redshift, etc.). Strong understanding of data modeling, data governance, and data quality principles. Excellent communication skills with the ability to …
Cloud Composer, Cloud Run, Cloud Monitoring & Logging, Dataplex, Beam, Tentacles and Pub/Sub. Fluent Python and SQL skills with real-life project experience. Experience with orchestration tools such as Airflow and DBT. Experience with one of the major analytical DWHs is a plus: BigQuery, Redshift, Snowflake, Databricks, Synapse. Work experience with the following technologies is noteworthy to mention and might be seen …
and well-tested solutions to automate data ingestion, transformation, and orchestration across systems. Own data operations infrastructure: Manage and optimise key data infrastructure components within AWS, including Amazon Redshift, Apache Airflow for workflow orchestration and other analytical tools. You will be responsible for ensuring the performance, reliability, and scalability of these systems to meet the growing demands of … pipelines , data warehouses , and leveraging AWS data services . Strong proficiency in DataOps methodologies and tools, including experience with CI/CD pipelines, containerized applications , and workflow orchestration using Apache Airflow . Familiar with ETL frameworks, and bonus experience with Big Data processing (Spark, Hive, Trino), and data streaming. Proven track record - You've made a demonstrable impact …
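As a rough illustration of the Redshift load step such an automated pipeline might orchestrate, the sketch below issues a COPY from staged S3 objects into a warehouse table. The cluster endpoint, credentials, bucket, table, and IAM role are placeholders, not values from the posting.

```python
# Hedged illustration: load one staged S3 partition into Redshift via COPY.
import psycopg2

COPY_SQL = """
    COPY analytics.orders
    FROM 's3://example-staging-bucket/orders/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

def load_partition() -> None:
    # Redshift speaks the PostgreSQL wire protocol, so psycopg2 works here.
    conn = psycopg2.connect(
        host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="loader",
        password="***",  # placeholder; use a secrets manager in practice
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)  # committed on success, rolled back on error
    finally:
        conn.close()

if __name__ == "__main__":
    load_partition()
```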