Looker, etc.) Interest or experience in building internal data communities or enablement programs. Working with diverse data sources (APIs, CRMs, SFTP, databases) and formats (Parquet, JSON, XML, CSV). Exposure to machine learning models or AI agents. Why Join Us: Help shape the future of data in an organization that…
solutions • Contribute to infrastructure automation using CI/CD practices and infrastructure as code • Work with various data formats including JSON, XML, CSV, and Parquet • Create and maintain metadata, data dictionaries, and schema documentation. Required Experience: • Strong experience with data engineering in a cloud-first environment • Hands-on expertise…
with experience in Kafka real-time messaging or Azure Stream Analytics/Event Hubs. Spark processing and performance tuning. File format partitioning, e.g. Parquet, JSON, XML, CSV. Azure DevOps, GitHub Actions. Hands-on experience in at least one language (e.g. Python) with knowledge of the others. Experience in Data…
and automation. Proficiency in building and maintaining batch and streaming ETL/ELT pipelines at scale, employing tools such as Airflow, Fivetran, Kafka, Iceberg, Parquet, Spark, and Glue to develop end-to-end data orchestration, leveraging AWS services to ingest, transform, and process large volumes of structured and unstructured…
Starburst and Athena; Kafka and Kinesis; DataHub; MLflow and Airflow; Docker and Terraform; Kafka, Spark, Kafka Streams and KSQL; DBT; AWS, S3, Iceberg, Parquet, Glue and EMR for our Data Lake; Elasticsearch and DynamoDB. More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work from…
Experience in data modelling and design patterns; in-depth knowledge of relational databases (PostgreSQL) and familiarity with data lakehouse formats (storage formats, e.g. Apache Parquet, Delta tables). Experience with Spark, Databricks, data lakes/lakehouses. Experience working with external data suppliers (defining requirements for suppliers, defining Service Level…
Azure Service Bus, Function Apps, ADF. Possesses knowledge of data-related technologies such as data warehouses, Snowflake, ETL, data pipelines, PySpark, Delta tables, and file formats (Parquet, columnar). Has a good understanding of SQL and stored procedures. Able to lead development and execution of performance and automation testing for large-scale…
and experience working within a data-driven organization. Hands-on experience with architecting, implementing, and performance tuning of: Data Lake technologies (e.g. Delta Lake, Parquet, Spark, Databricks); APIs and microservices; message queues, streaming technologies, and event-driven architecture; NoSQL databases and query languages; data domain and event data models; data…
Good understanding of cloud environments (ideally Azure), distributed computing, and scaling workflows and pipelines. Understanding of common data transformation and storage formats, e.g. Apache Parquet. Awareness of data standards such as GA4GH and FAIR. Exposure to genotyping and imputation is highly advantageous. Benefits: Competitive base salary. Generous pension…
large-scale datasets. Implement and manage Lake Formation and AWS Security Lake, ensuring data governance, access control, and security compliance. Optimise file formats (e.g., Parquet, ORC, Avro) for S3 storage, ensuring efficient querying and cost-effectiveness. Automate infrastructure deployment using Infrastructure as Code (IaC) tools such as Terraform or…
and Lambda. IAM - experience handling IAM resource permissions. Networking - fundamental understanding of VPCs, subnet routing, and gateways. Storage - strong understanding of S3, EBS, and Parquet. Databases - RDS, DynamoDB. Experience doing cost estimation in Cost Explorer and planning efficiency changes. Terraform and containerisation experience. Understanding of a broad range of…
and delivering customer proposals aligned with Analytics Solutions. Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr, Avro, Parquet, Iceberg, Hudi). Experience developing software and data engineering code in one or more programming languages (Java, Python, PySpark, Node, etc.). AWS and…
new technologies and frameworks. Nice to have: Knowledge of databases, SQL. Familiarity with Boost ASIO. Familiarity with data serialization formats such as Apache Arrow/Parquet, Google Protocol Buffers, FlatBuffers, Contra. Experience with gRPC, HTTP/REST, and WebSocket protocols. Experience with Google Cloud/AWS and/or containerization…
Market Insurance industry experience. Using data modelling tools such as Erwin. Experience with modelling methodologies including Kimball, etc. Usage of Data Lake formats such as Parquet and Delta Lake. Strong SQL skills. Rate: £600 - £700 P/D Outside IR35. Contract Duration: 6 months. Location: London/WFH hybrid. Start…
We are looking for a Data Engineer to join our growing data engineering team at Our Future Health. The Data Engineer will bring an in-depth knowledge of NHS data and data solutions to help solve some of the key…