…other financial instruments. Strong SQL experience; strong Python experience (PySpark, Pandas, Jupyter Notebooks, etc.); Airflow/Algo for workflow management; Git. Any exposure to compressed file formats such as Parquet or HDF5 is highly advantageous; Linux/Bash skills are highly desirable. Please apply ASAP for more information.
London (City of London), South East England, United Kingdom
Hunter Bond
…with 2+ years in Microsoft Fabric or the related Microsoft data stack. Expertise in Power BI datasets, semantic models, and Direct Lake optimization. Strong understanding of Lakehouse architecture, Delta/Parquet formats, and data governance tools. Experience integrating Fabric with Azure-native services (e.g., Azure AI Foundry, Purview, Entra ID). Familiarity with Generative AI, RAG-based architecture, and Fabric …
London (City of London), South East England, United Kingdom
Capgemini
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Tank Recruitment
…on experience with Azure cloud technologies: Synapse, Fabric, AzureML, ADX, ADF, Azure Data Lake Storage, Event Hubs. Experience with visualisation tools such as Power BI and Streamlit. Familiarity with Parquet and Delta Parquet formats. Strong data modelling and architecture knowledge across SQL and NoSQL databases. Understanding of the software development lifecycle and MLOps. Knowledge of advanced statistical …
Job Description: COMPANY is seeking a *Data Engineer* to join a cutting-edge …
…this is preferred. Back end: Node/Python — Node is used for the application back end, whereas Python is used for data, APIs, workflows, etc. Database: Postgres/DuckDB/Parquet. Public cloud experience. Please note: as they scale the business, they are looking for this person to be a leader within the business, and therefore want someone who is visible …
…Database, and PostgreSQL. Supporting innovation efforts by exploring new technologies such as vector databases to enable search and AI use cases. Using big data technologies like Kafka, Iceberg, and Parquet, along with managed databases including PostgreSQL and Oracle vector databases. Operating, monitoring, and maintaining Oracle Cloud infrastructure to ensure backend services are highly available, scalable, and secure. Collaborating with … NodeJS, Django, and FastAPI. Experience building flexible APIs using GraphQL. Expertise in at least one cloud platform and its managed data services. Familiarity with big data technologies such as Parquet, Iceberg, and streaming platforms like Kafka. Strong knowledge of database systems, SQL data model design, and query optimization. Experience with containerization using Kubernetes and Docker. Proven ability to deliver …
…Silver/Gold layers). Build and optimize real-time and batch data pipelines leveraging Apache Spark, Kafka, and AWS Glue/EMR. Architect storage and processing layers using Parquet and Iceberg for schema evolution, partitioning, and performance optimization. Integrate AWS data services (S3, Redshift, Lake Formation, Kinesis, Lambda, DynamoDB) into enterprise solutions. Ensure data governance, lineage, cataloging, and … Kinesis, Lake Formation, DynamoDB). Expertise in Apache Kafka (event streaming) and Apache Spark (batch and streaming). Proficiency in Python for data engineering and automation. Strong knowledge of Parquet, Iceberg, and Medallion Architecture. Finance and capital markets knowledge: experience with trading systems, market data feeds, risk analytics, and regulatory reporting. Familiarity with time-series data, reference/master data …
What you'll do: Build and maintain high-performance tick data pipelines for ingesting, processing, and storing large volumes of market data. Work with time-series databases (e.g., KDB, OneTick) and Parquet-based file storage to optimize data access and retrieval. Design scalable cloud-native solutions (AWS preferred) for market data ingestion and distribution. (Bonus) Integrate Apache Iceberg for large-scale … focus on market data systems. Strong Python skills and familiarity with cloud platforms (AWS, GCP, or Azure). Experience with tick data and building tick data pipelines. Proficiency with Parquet-based file storage; Iceberg experience is a plus. Familiarity with Kubernetes, containerization, and modern orchestration tools. Experience with time-series databases (KDB, OneTick) and C++ is a plus. Strong …
London (City of London), South East England, United Kingdom
Selby Jennings
Pontoon is an employment consultancy. We put expertise, energy, and enthusiasm into improving everyone's chance of being part of the workplace. We respect and appreciate people of all ethnicities, generations, religious beliefs, sexual orientations, gender identities, and more. We …
Watford, Hertfordshire, East Anglia, United Kingdom
Adecco