Bristol, Avon, England, United Kingdom Hybrid / WFH Options
MBDA
various exchange and processing techniques (ETL, ESB, API). Champion Agile methodologies for successful and timely project delivery. Leverage strong database skills (SQL, NoSQL, and Parquet) for efficient data storage and management. What we're looking for from you: Proficiency in Data Science techniques, including statistical models and ML algorithms. Expertise in NLP, with a … keen understanding of LLM and RAG technologies. Strong development capabilities, particularly in Python. Experience with data exchange, processing, and storage frameworks (ETL, ESB, API, SQL, NoSQL, and Parquet). Comfort with Agile development methodologies. Excellent teamwork and communication skills, with a talent for translating technical concepts into actionable insights for non-specialists. Ability to influence company decision-makers and …
our datalake platform
- Kubernetes for data services and task orchestration
- Terraform for infrastructure
- Streamlit for data applications
- Airflow purely for job scheduling and tracking
- CircleCI for continuous deployment
- Parquet and Delta file formats on S3 for data lake storage
- Spark for data processing
- DBT for data modelling
- SparkSQL for analytics
Why else you'll love it here? Wondering …
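As a hedged sketch of the Parquet-and-Delta-on-S3 pattern this stack describes, the snippet below writes a toy dataset in both formats and runs a SparkSQL aggregation; the bucket, paths, and schema are hypothetical, and the delta-spark package is assumed to be installed.

```python
from pyspark.sql import SparkSession

# Assumes the delta-spark package and S3 credentials are configured;
# bucket names, paths, and schema are hypothetical.
spark = (
    SparkSession.builder.appName("datalake-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

events = spark.createDataFrame(
    [("2024-01-01", "signup", 42), ("2024-01-02", "login", 17)],
    ["date", "event_type", "n"],
)

# Raw zone as plain Parquet, curated zone as Delta, both on S3.
events.write.mode("append").parquet("s3a://example-bucket/raw/events/")
events.write.format("delta").mode("append").save("s3a://example-bucket/curated/events/")

# SparkSQL for analytics over the curated Delta table.
spark.read.format("delta").load("s3a://example-bucket/curated/events/") \
    .createOrReplaceTempView("events")
spark.sql("SELECT event_type, SUM(n) AS total FROM events GROUP BY event_type").show()
```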
Log Analytics, Serverless Architecture, ARM Templates. Strong proficiency in Spark, SQL, and Python/Scala/Java. Experience in building Lakehouse architecture using open-source table formats like Delta and Parquet, and tools like Jupyter notebooks. Strong grounding in security best practices (e.g., using Azure Key Vault, IAM, RBAC, Monitor, etc.). Proficient in integrating, transforming, and consolidating data from …
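To illustrate the Azure Key Vault practice mentioned above, here is a minimal sketch, assuming the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are invented for the example.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Vault URL and secret name are hypothetical; DefaultAzureCredential
# picks up managed identity, CLI, or environment credentials.
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
storage_key = client.get_secret("datalake-storage-key").value
```

Keeping storage keys in the vault rather than in notebook code or cluster configs is the point of the pattern: the secret is fetched at runtime under an identity that RBAC can audit and revoke.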
data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery)
- Expertise in building data architectures that support batch and streaming paradigms
- Experience with standards such as JSON, XML, YAML, Avro, Parquet
- Strong communication skills
- Open to learning new technologies, methodologies, and skills
As the successful Data Engineering Manager you will be responsible for:
- Building and maintaining data pipelines
- Identifying and …
of logic, functions, performance and delivery
- Extensive database knowledge and ability to manage relational database servers such as MySQL, Microsoft SQL Server, Postgres
- Common file formats, e.g. CSV, JSON, XML, Parquet
- T-SQL (relational queries, joins, procedures, performance)
- Familiarity with Python and the Pandas library or similar
- Familiarity with RESTful and SOAP APIs
- Ability to build and execute ETL processes …
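As a small illustration of the Pandas-plus-ETL skills listed, the sketch below reads a CSV extract, derives a monthly summary, and lands it as Parquet; the file names and columns are hypothetical, and the pyarrow engine is assumed to be installed.

```python
import pandas as pd

# Extract: a CSV export (file name and columns are hypothetical).
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Transform: derive a monthly revenue summary.
orders["month"] = orders["order_date"].dt.to_period("M").astype(str)
summary = orders.groupby("month", as_index=False)["amount"].sum()

# Load: write Parquet for downstream analytics (assumes pyarrow).
summary.to_parquet("monthly_revenue.parquet", engine="pyarrow", index=False)
```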
City of London, London, United Kingdom Hybrid / WFH Options
Anson McCade
hands-on AWS experience – S3, Redshift, Glue essential. Proven experience building ETL/ELT pipelines in cloud environments. Proficient in working with structured/unstructured data (JSON, XML, CSV, Parquet). Skilled in working with relational databases and data lake architectures. Experienced with Matillion and modern data visualisation tools (QuickSight, Tableau, Looker, etc.). Strong scripting and Linux/…
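For the Glue-based ETL/ELT pipelines mentioned, a minimal job skeleton is sketched below; it only runs inside an AWS Glue job environment, and the bucket paths are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Only runs inside an AWS Glue job; bucket paths are hypothetical.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read raw JSON from S3 and land it as Parquet for downstream querying.
df = glue_context.spark_session.read.json("s3://example-bucket/raw/")
df.write.mode("overwrite").parquet("s3://example-bucket/processed/")
```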
PySpark 3/4.
- Experience with Python Behave for Behaviour-Driven Development and testing.
- Familiarity with Python Coverage for code coverage analysis.
- Strong knowledge of Databricks, specifically the Delta (Parquet-based) data format and the medallion data architecture.
- In-depth understanding of YAML.
- Familiarity with Azure DevOps and its functionalities.
- Knowledge of Git and best practices for code release is advantageous.
…
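A minimal sketch of the medallion (bronze/silver) pattern referenced above, assuming a Databricks runtime where `spark` is preconfigured with Delta support; the mount paths and column names are hypothetical.

```python
# Assumes a Databricks notebook where `spark` is preconfigured with Delta.
# Mount paths and column names are hypothetical.

# Bronze: land raw files as-is in a Delta table.
raw = spark.read.json("/mnt/landing/payments/")
raw.write.format("delta").mode("append").save("/mnt/bronze/payments")

# Silver: cleanse and de-duplicate the bronze data.
bronze = spark.read.format("delta").load("/mnt/bronze/payments")
silver = bronze.dropDuplicates(["payment_id"]).filter("amount IS NOT NULL")
silver.write.format("delta").mode("overwrite").save("/mnt/silver/payments")
```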
With solid software engineering fundamentals, fluent in Java and Python (Rust is a plus). Knowledgeable about data lake query engines like Athena, and big data storage formats such as Parquet, HDF5, ORC, with a focus on data ingestion. Driven by working in an intellectually engaging environment with top industry minds, where constructive debates are encouraged. Excited about working in a start-up …
City of London, London, United Kingdom Hybrid / WFH Options
Datatech Analytics
processing and automation
- Solid understanding of ETL/ELT workflows, data modelling, and structuring datasets for analytics
- Experience working with large, complex datasets and APIs across formats (CSV, JSON, Parquet, etc.)
- Familiarity with workflow automation tools (e.g., Power Automate) and/or Power Apps is desirable
- Excellent interpersonal and communication skills with the ability to work cross-functionally and …
similar)
- Interest in distributed systems, database internals, or storage engines
- Product sense: you care about how infrastructure gets used
- Bonus if you've worked with tech like Apache Arrow, Parquet, DataFusion, ClickHouse, DuckDB
What's on offer:
- £120k-£150k base + meaningful equity
- Full-time, on-site in Shoreditch (Monday-Friday)
- A chance to do foundational work with real …
About the role: Taktile is a high-growth, post product-market-fit start-up, on a fast trajectory to becoming the market leader in the field of automated decisioning. We are looking for a Full-stack Engineer to join the Decide …
Data modelling (building optimised and efficient data marts and warehouses in the cloud)
- Work with Infrastructure as Code (Terraform) and containerised applications (Docker)
- Work with AWS: S3, SQS, Iceberg, Parquet, Glue and EMR for our Data Lake
- Experience developing CI/CD pipelines
More information: Enjoy fantastic perks like private healthcare & dental insurance, a generous work-from-abroad policy …
tools (QuickSight, Power BI, Tableau, Looker, etc.)
- Interest or experience in building internal data communities or enablement programs
- Working with diverse data sources (APIs, CRMs, SFTP, databases) and formats (Parquet, JSON, XML, CSV)
- Exposure to machine learning models or AI agents
Why Join Us: Help shape the future of data in an organization that treats data as a product …
Cardiff, South Glamorgan, United Kingdom Hybrid / WFH Options
RVU Co UK
Experience with alternative data technologies (e.g. DuckDB, Polars, Daft). Familiarity with eventing technologies (Event Hubs, Kafka, etc.). Deep understanding of file formats and their behaviour, such as Parquet, Delta and Iceberg. What we offer: We want to give you a great work environment, contribute back to both your personal and professional development, and give you great benefits …
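To make the "alternative data technologies" item concrete, here is a short sketch querying Parquet files in place with DuckDB; the file glob is hypothetical.

```python
import duckdb

# DuckDB scans Parquet in place: no load step, and it prunes row groups
# using the statistics in each file's footer. The glob is hypothetical.
con = duckdb.connect()
rows = con.execute(
    "SELECT event_type, COUNT(*) AS n "
    "FROM read_parquet('events/*.parquet') "
    "GROUP BY event_type ORDER BY n DESC"
).fetchall()
print(rows)
```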
Liverpool, Lancashire, United Kingdom Hybrid / WFH Options
Intuita - Vacancies
All our office locations considered: Newbury & Liverpool (UK); Šibenik, Croatia. We're on the hunt for builders. No, we've not ventured into construction in our quest to conquer the world; rather, a designer and builder of systems …
Bonus Points For:
- Workflow orchestration tools like Airflow.
- Working knowledge of Kafka and Kafka Connect.
- Experience with Delta Lake and lakehouse architectures.
- Proficiency in data serialization formats: JSON, XML, Parquet, YAML.
- Cloud-based data services experience.
Ready to build the future of data? If you're a collaborative, forward-thinking engineer who wants to work on meaningful, complex problems …
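As a hedged illustration of moving between the serialization formats listed, this sketch round-trips the same records through JSON and Parquet using pyarrow; the field names and values are invented for the example.

```python
import json

import pyarrow as pa
import pyarrow.parquet as pq

# Field names and values are invented for the example.
records = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]

# JSON: row-oriented, human-readable interchange.
payload = json.dumps(records)

# Parquet: column-oriented, typed storage for analytics.
table = pa.Table.from_pylist(records)
pq.write_table(table, "records.parquet")

print(payload, pq.read_table("records.parquet").num_rows)
```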
AWS serverless services and enables powerful querying and analytics through Amazon Athena. In this role, you'll work on a system that combines streaming ingestion (Firehose), data lake technologies (Parquet, Apache Iceberg), scalable storage (S3), event-driven processing (Lambda, EventBridge), fast-access databases (DynamoDB), and robust APIs (Spring Boot microservices on EC2). Your role will involve designing, implementing … processing pipeline and platform services.

Key Responsibilities:
- Design, build, and maintain serverless data processing pipelines using AWS Lambda, Firehose, S3, and Athena.
- Optimize data storage and querying performance using the Parquet and Iceberg formats.
- Manage and scale event-driven workflows using EventBridge and Lambda.
- Work with DynamoDB for fast, scalable key-value storage.
- Develop and maintain Java Spring Boot microservices …

- Java backend development experience.
- 3+ years of Python development.
- Strong hands-on experience with AWS services: Lambda, S3, K8s.
- Deep understanding of data lake architectures and formats such as Parquet and Iceberg.
- Proficiency in Spring Boot and working experience with microservices.
- Experience with high-scale, event-driven systems and serverless patterns.
Nice to Have: Solid understanding of distributed systems …
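As a sketch of the Athena-over-Parquet querying this role describes, the snippet below starts an Athena query with boto3 and polls for completion; the database, table, region, and results bucket are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# Start a query against a Parquet/Iceberg-backed table (names hypothetical).
qid = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "datalake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state, then fetch the rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows)
```

Because Athena only scans the columns a query touches, columnar formats like Parquet and Iceberg keep both latency and per-query cost down relative to row-oriented files.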
services
- 5+ years of overall software engineering experience
- Experience with a tech stack including:
  - Languages: Python, Golang
  - Platform: AWS
  - Frameworks: Django, Spark
  - Storage/Data Pipelines: Postgres, Redis, Elasticsearch, Kafka, Parquet
Nice to Have: Prior exposure to production machine learning systems.