… fairness audits, and bias mitigation
- Familiarity with intelligent automation or RPA platforms
- Cloud platform experience (AWS, Azure, or GCP) and managed AI/ML services
- Data platform experience (Snowflake, Dagster)
- Experience collaborating with Data Analysts and Data Scientists to translate exploratory analysis and models into production-ready AI/ML solutions
- Experience leveraging and fine-tuning third-party or …
… Python and SQL; strong experience with data engineering libraries (e.g., Pandas, PySpark, Dask).
- Deep knowledge of ETL/ELT frameworks and orchestration tools (e.g., Airflow, Azure Data Factory, Dagster); a minimal pandas ETL sketch follows this listing.
- Proficient in cloud platforms (preferably Azure) and services such as Data Lake, Synapse, Event Hubs, and Functions.
- Authoring reports and dashboards with either open source or commercial products …
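As a rough illustration of the kind of Pandas-based ETL step the listing above asks for, here is a minimal, hedged sketch; the file names and columns are hypothetical, not taken from any listing:

```python
# Hypothetical pandas ETL step: file and column names are illustrative only.
import pandas as pd

# Extract: read a raw batch file.
raw = pd.read_csv("raw_events.csv", parse_dates=["event_time"])

# Transform: drop incomplete rows and aggregate to daily event counts.
clean = raw.dropna(subset=["user_id"])
daily = (
    clean.assign(event_date=clean["event_time"].dt.date)
         .groupby("event_date", as_index=False)
         .size()
         .rename(columns={"size": "event_count"})
)

# Load: write the conformed output for downstream consumers.
daily.to_parquet("daily_event_counts.parquet", index=False)
```

In practice the same extract-transform-load shape would be wrapped in an orchestrator task (Airflow, Azure Data Factory, or Dagster) rather than run as a bare script.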
… the portfolio.
Key Requirements:
- Strong proficiency in Python and PySpark (a PySpark sketch follows this listing)
- Successful track record in a Data Engineering role, including database management (ideally SQL), data orchestration (ideally Apache Airflow or Dagster), containerisation (ideally Kubernetes and Docker), and data pipelines (big data technologies and architectures)
- Experience in Financial Services (ideally Commodity Trading)
- Bachelor's degree in Information Systems, Computer Science, Engineering, or …
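For the Python/PySpark requirement above, a minimal sketch of a batch aggregation job is shown below; the paths and column names are hypothetical:

```python
# Hypothetical PySpark batch job: paths and columns are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trades-daily-agg").getOrCreate()

# Read raw trade records from a (hypothetical) data lake location.
trades = spark.read.parquet("s3a://example-bucket/trades/")

# Aggregate notional traded per day.
daily = (
    trades
    .withColumn("trade_date", F.to_date("executed_at"))
    .groupBy("trade_date")
    .agg(F.sum("notional").alias("total_notional"))
)

# Write partitioned output for downstream consumers.
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3a://example-bucket/trades_daily/"
)
spark.stop()
```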
… together data from all kinds of sources, whether that's batch files or real-time streams. You'll have set up and worked with ETL and ELT tools like Dagster, AWS Glue, Azure Data Factory, Airflow, or dbt, and you can decide what tools are right for the job. You'll have an understanding of how Node.js and TypeScript …
… similar
- Prior experience or interest in working with geospatial data
Technologies we use
- Programming languages: SQL, Python, LookML (+ Go for other backend services)
- Development tools and frameworks: dbt, dagster, Airbyte, dlt, data-diff, Elementary
- Data lake and warehouse: GCS, BigQuery
- Analytics: Looker, Looker Studio, and geospatial analytics tools
How we reward our team
- Dynamic working environment with a …
… BI tools such as Lightdash, Looker, or Tableau
- Comfortable with version control, testing frameworks, and CI/CD in a data context
- Familiarity with Python and orchestration tools like Dagster or Airflow is highly desirable
- Experience with Ecommerce and/or Subscription services is desirable
The Interview Process
- Meet & Greet call with a Talent Partner
- Call with the Hiring …
… and rebuilding our data platform to ensure we stay ahead in a rapidly evolving landscape. You'll be hands-on, using the latest tools like Apache Arrow, Polars, and Dagster to craft a data architecture that aligns with our innovative investment strategies. In this position, you'll be deeply involved in building and optimizing data pipelines that are reliable …
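As a rough illustration of the Polars style of pipeline this role describes, here is a minimal lazy-query sketch; the file path and column names are hypothetical:

```python
# Hypothetical Polars lazy pipeline: path and columns are illustrative only.
import polars as pl

summary = (
    pl.scan_parquet("prices/*.parquet")   # lazy scan: nothing is read yet
    .filter(pl.col("volume") > 0)         # predicate is pushed down to the scan
    .group_by("ticker")
    .agg(pl.col("close").max().alias("max_close"))
    .collect()                            # execute the optimised plan
)
print(summary)
```

Lazy evaluation is the point here: Polars (built on Apache Arrow's columnar memory format) can optimise the whole query plan before touching the data.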
… Defender suite, Microsoft Entra, Microsoft Purview, and Azure security services (e.g., Key Vault, NSGs, WAF).
- Experience with Kubernetes (Azure Kubernetes Service) and data platforms (e.g., Databricks, Snowflake, Dagster).
- Proven understanding of security risk management.
- Excellent understanding of common security controls, in particular cloud security controls.
- Understanding of threat modelling.
- Knowledge of ISO 27001 and other commonly …
… At least 4 years of industry experience working with the above tools and technologies.
Nice to Have
- Experience with other parts of our tech stack (ClickHouse, Postgres, Datadog, Dagster, Temporal, AWS, GitLab, and RunAI)
- Any domain knowledge within Life Sciences and Biotech/TechBio, specifically around high-throughput sequencing, ML models, and bioinformatics
What we can offer in …
In detail, the position encompasses duties and responsibilities as follows: an experienced Data Engineer is required for the Surveillance IT team to develop ingestion pipelines and frameworks across the application portfolio, supporting Trade Surveillance analysts with strategy and decision-making. …
We are looking for brilliant engineers to join our team at Magentic. We're pushing the boundaries of AI with next-generation agentic systems that can manage entire workflows. We're focusing on a three-trillion-dollar market of supply …
… at the client's office, which is located in London.
Key Responsibilities:
Data Pipeline Development & Maintenance
- Build, maintain, and optimize scalable ETL/ELT pipelines using tools such as Dagster or similar (see the sketch after this listing).
- Ensure high data availability, reliability, and consistency through rigorous data validation and monitoring practices.
- Collaborate with cross-functional teams to align data pipeline requirements with business objectives
… experience in query optimization and data modelling.
- Strong programming skills in Python (preferred), with a focus on building scalable data solutions.
- Experience with data pipeline orchestration tools such as Dagster or similar.
- Familiarity with cloud platforms (e.g., AWS) and their data services (e.g., S3, Redshift, Snowflake).
- Understanding of data warehousing concepts and experience with modern warehousing solutions.
- Experience …
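As a hedged sketch of the Dagster pipeline work described above (asset and column names are hypothetical, not from the listing), a small pipeline with a validation step might look like:

```python
# Hypothetical Dagster asset pipeline with a validation step.
import dagster as dg
import pandas as pd

@dg.asset
def raw_orders() -> pd.DataFrame:
    """Ingest step, stubbed here with an in-memory frame."""
    return pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, None, 25.5]})

@dg.asset
def validated_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    """Drop incomplete rows and enforce a basic consistency check."""
    df = raw_orders.dropna(subset=["amount"])
    if (df["amount"] < 0).any():
        raise ValueError("negative order amounts found")
    return df

defs = dg.Definitions(assets=[raw_orders, validated_orders])
```

Dagster infers the dependency from the parameter name, so `validated_orders` runs after `raw_orders`, and the validation failure surfaces in monitoring as a failed materialization.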
… algorithms in Python, applying best-in-class practices such as strict typing, modular design, and automated testing (see the sketch after this listing).
- Architect robust data-cleaning pipelines and orchestrate workflows using tools such as Dagster within cloud-based CI/CD environments.
- Drive key system design and modelling decisions, carefully balancing speed, scalability, technical complexity, and business value.
- Partner with stakeholders to understand operational …
… learning, and operations research methods.
- Solid background in software engineering for data science products: version control (Git), testing (unit, regression, E2E), CI/CD (GitHub Actions), and orchestration (Airflow, Dagster).
- Proficient in SQL and cloud platforms (AWS preferred), with exposure to model/data versioning tools (e.g., DVC), containerised solutions (Docker, ECS), and experiment tracking (e.g., MLflow).
…
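To make "strict typing, modular design, and automated testing" concrete, here is a small hedged example of a typed data-cleaning function with a unit test; the function, data, and threshold are hypothetical:

```python
# Hypothetical typed cleaning utility plus a pytest-style unit test.
import pandas as pd

def drop_outliers(df: pd.DataFrame, column: str, z_max: float = 3.0) -> pd.DataFrame:
    """Keep rows whose z-score in `column` is within +/- z_max."""
    std = df[column].std()
    if pd.isna(std) or std == 0:
        return df.copy()
    z = (df[column] - df[column].mean()) / std
    return df.loc[z.abs() <= z_max].copy()

def test_drop_outliers_removes_extreme_value() -> None:
    # Small samples mathematically bound z-scores, so a tighter
    # threshold than the default is used in this test.
    df = pd.DataFrame({"x": [1.0] * 10 + [100.0]})
    cleaned = drop_outliers(df, "x", z_max=2.0)
    assert 100.0 not in cleaned["x"].values
    assert len(cleaned) == 10
```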
… to join our client's data engineering team and help deliver mission-critical data pipelines.
Day to day
- Work closely with our Tech Lead on data pipelines using Snowflake, Dagster, dbt, and Python (see the sketch after this listing)
- Translate requirements from stakeholders into clear, actionable work for engineers
- Help ensure data is delivered quickly, reliably, and in an optimised way
Experience
- Experience with data … pipelines, orchestration, and transformation tools
- Strong communicator who can work across technical and non-technical teams
- Knowledge of Snowflake, Dagster, or dbt
…
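As a hedged sketch of how Snowflake-backed dbt models can be orchestrated from Dagster with the dagster-dbt integration (the project path is hypothetical, and the API shown is the one documented for recent dagster-dbt releases, so it may vary by version):

```python
# Hypothetical wiring of a dbt project into Dagster via dagster-dbt.
from pathlib import Path

from dagster import AssetExecutionContext, Definitions
from dagster_dbt import DbtCliResource, dbt_assets

DBT_PROJECT_DIR = Path("analytics")  # hypothetical dbt project location

@dbt_assets(manifest=DBT_PROJECT_DIR / "target" / "manifest.json")
def analytics_dbt_assets(context: AssetExecutionContext, dbt: DbtCliResource):
    # Run `dbt build` and stream per-model events back to Dagster.
    yield from dbt.cli(["build"], context=context).stream()

defs = Definitions(
    assets=[analytics_dbt_assets],
    resources={"dbt": DbtCliResource(project_dir=str(DBT_PROJECT_DIR))},
)
```

The Snowflake connection itself would live in the dbt profile, so Dagster only needs to invoke the dbt CLI and track each model as an asset.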