DuckDB). Experience with the modern data stack: building data ingestion pipelines and working with ETL and orchestration tools (e.g., Airflow, Luigi, Argo, dbt), big data technologies (Spark, Kafka, Parquet), and web frameworks for model serving (e.g., Flask or FastAPI). Data Science: familiarity or experience with NLP techniques (e.g., BERT, topic modelling, summarisation), statistical analysis, and knowledge graphs.
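As a minimal illustration of the model-serving requirement above, a sketch of exposing a prediction endpoint with Flask; the `predict` function here is a hypothetical stand-in for a real trained model, and the route name is an assumption:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict(features):
    # Hypothetical stand-in for a real model: sums the feature vector.
    return sum(features)

@app.route("/predict", methods=["POST"])
def predict_route():
    # Accept a JSON payload like {"features": [1.0, 2.0, 3.0]}
    payload = request.get_json()
    return jsonify({"prediction": predict(payload["features"])})

# Exercise the endpoint in-process via Flask's test client
client = app.test_client()
resp = client.post("/predict", json={"features": [1.0, 2.0, 3.0]})
print(resp.get_json())  # {'prediction': 6.0}
```

The same handler shape carries over almost directly to FastAPI, with Pydantic models replacing the manual JSON parsing.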
technologies (e.g., S3, RDBMS, NoSQL, Delta/Iceberg, Cassandra, ClickHouse, Kafka) and knowledge of their associated trade-offs. Experience with multiple data formats and serialization systems (e.g., Arrow, Parquet, Protobuf/gRPC, Avro, Thrift, JSON). Experience managing data pipeline orchestration systems (e.g., Kubernetes, Argo Workflows, Airflow, Prefect, Dagster). Proven experience in managing the operational aspects of
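One trade-off behind the serialization formats listed above is self-describing text (JSON, where field names travel with every record) versus schema-based binary encoding (the approach taken, in spirit, by Avro, Protobuf, and Thrift, where layout is fixed by a schema). A stdlib-only sketch of the size difference, using `struct` as a stand-in for a schema-based encoder:

```python
import json
import struct

record = {"id": 12345, "price": 19.99, "qty": 7}

# Self-describing text format: field names repeat in every record
json_bytes = json.dumps(record).encode()

# Schema-based binary layout: int32, float64, int32 (field names live
# in the schema, not the payload; "<" = little-endian, no padding)
packed = struct.pack("<idi", record["id"], record["price"], record["qty"])

print(len(json_bytes), len(packed))  # 39 16
```

At scale, that per-record overhead (plus parsing cost) is a large part of why columnar and schema-based formats like Parquet and Avro dominate analytical pipelines.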
financial services or consulting
- Strong knowledge of relational and dimensional design (3NF, star, snowflake)
- Proficient in ERwin, Sparx, or similar tools
- Experience working with semi-structured data (JSON, XML, Parquet)
- Excellent communication and client-facing skills
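A common first step when relationally modelling semi-structured data like the JSON above is flattening nested records into dotted column names. A stdlib-only sketch, with a hypothetical order record as input:

```python
import json

# Hypothetical semi-structured record, as it might arrive as JSON
raw = '{"order": {"id": 1, "customer": {"name": "Ada", "region": "EMEA"}}}'

def flatten(obj, prefix=""):
    """Flatten nested dicts into a single dict with dotted keys."""
    out = {}
    for key, value in obj.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, full_key + "."))
        else:
            out[full_key] = value
    return out

row = flatten(json.loads(raw))
print(row)  # {'order.id': 1, 'order.customer.name': 'Ada', 'order.customer.region': 'EMEA'}
```

The flattened keys map naturally onto columns of a relational or dimensional target table.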
role. Technical proficiency:
- Strong econometric modelling skills using tools such as R, Python, or other statistical software packages (e.g., EViews, SAS)
- Experience with Python libraries such as pandas, and with the Parquet format (e.g., via PyArrow), is a big plus
- Experience with different modelling approaches and methodologies, e.g., linear, log-linear, etc.
- Ability to handle large datasets, clean and transform raw data, and apply advanced
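To illustrate the linear versus log-linear distinction mentioned above: when the response grows multiplicatively, fitting a straight line to log(y) recovers the growth rate. A small sketch with NumPy on hypothetical data (y roughly e**x):

```python
import numpy as np

# Hypothetical data: y grows exponentially in x, so log-linear suits it
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.7, 7.4, 20.1, 54.6, 148.4])  # approximately e**x

# Linear model: y = a + b*x, ordinary least squares
b_lin, a_lin = np.polyfit(x, y, 1)

# Log-linear model: log(y) = a + b*x, fit on the transformed target
b_log, a_log = np.polyfit(x, np.log(y), 1)

print(round(b_log, 2))  # 1.0, recovering the growth rate of e**x
```

The linear fit's slope `b_lin` has no such clean interpretation here, which is the kind of judgement the methodology requirement points at.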