that address complex business requirements and drive decision-making. Your skills and experience Proficiency with AWS Tools: Demonstrable experience using AWS Glue, AWS Lambda, Amazon Kinesis, Amazon EMR, Amazon Athena, Amazon DynamoDB, Amazon CloudWatch, Amazon SNS and AWS Step Functions. Programming Skills: Strong
engineering background Exposure to building or deploying AI/ML models into a production environment Previously used AWS data services, e.g. S3, Kinesis, Glue, Athena, DynamoDB, SNS/SQS Experience using any data streaming technologies/paradigms for real-time or near-real-time analytics
you if you have Proven experience as a Data Engineer using SQL and Python. Previous experience with data lakes in AWS, Glue Catalog and Athena (or equivalent) Good understanding of Spark, optimisation and performance tuning Capable of using popular data modelling tools to create a diagram of proposed tables
Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS - Aurora/Redshift/Athena/S3) Support users and operational flows for quantitative risk, senior management and portfolio management teams using the tools developed Qualifications/Skills Required
stack, including (but not limited to) Spark/Hadoop, Kafka, Aerospike/DynamoDB Experience with AWS tech stack, including but not limited to EMR, Athena, EKS Expert knowledge of multi-threading, memory model, etc. Understanding of database fundamentals and MySQL knowledge Experience with CI/CD tools such as Jenkins, Graphite
and automated deployments. Have excellent knowledge of AWS services (ECS, IAM, EC2, S3, DynamoDB, MSK). Our Technology Stack Python and Scala Starburst and Athena Kafka and Kinesis DataHub ML Flow and Airflow Docker and Terraform Kafka, Spark, Kafka Streams and KSQL DBT AWS, S3, Iceberg, Parquet, Glue and
a live environment. Data Storage Solutions: Familiarity with data lakes, warehousing, and other data storage patterns. Advanced Tools: Experience with tools like Kafka, Jenkins, Athena, or Spark. This role will require 4 days per week onsite in the office; this is not optional, and you must be open to
optimizing data delivery, re-designing infrastructure for greater scalability and performance. Essential Skills and Experience: Hands-on experience with AWS services, including Lambda, Glue, Athena, RDS, and S3. Strong SQL skills for data transformation, cleaning, and loading. Strong coding experience with Python and Pandas. Experience of data pipeline and
easy and safe for teams to use and contribute to data systems. You'll work with services like Lambda, S3, LakeFormation, Glue, Step Functions, Athena, EventBridge, SNS, SQS, and DynamoDB, and will be expected to navigate and manage data systems with a high degree of rigour and compliance. Familiarity
with DBT, building and maintaining modular, scalable data models that follow best practices Strong understanding of dimensional modelling Familiarity with AWS data services (S3, Athena, Glue) Experience with Airflow for scheduling and orchestrating workflows Experience working with data lakes or modern data warehouses (Snowflake, Redshift, BigQuery) A pragmatic problem
good to have Familiarity with agile methodologies, JIRA for project management, shared codebases (Git), and collaborative workflows. Experience with AWS services, including S3 and Athena Understanding of the Command Line Interface (CLI) for managing cloud or local environments What we offer Benefits We offer excellent benefits including an incentive programme
Engineering colleagues. What you'll need Required: Experience designing and building high-throughput, scalable, resilient, and secure data pipelines. Experience with AWS technologies: Glue, Athena, S3, Step Functions, Lambda, RDS (Aurora Postgres), DMS, Redshift, QuickSight, Kinesis Firehose. Expert knowledge of SQL. Hands-on experience with Kafka. Hands-on experience
SQL skills. In addition, you will also have a strong desire to work with Docker, Kubernetes, Airflow and AWS data technologies such as Athena, Redshift, EMR and various other tools in the AWS ecosystem. You would be joining a team of 25+ engineers across mobile, web, data and
these can be practically applied. Experience with Python or other scripting languages Good working knowledge of the AWS data management stack (RDS, S3, Redshift, Athena, Glue, QuickSight) or the Google data management stack (Cloud Storage, Airflow, BigQuery, Dataplex, Looker) About Our Process We can be flexible with the structure
robust data pipelines using AWS native services (Glue, Lambda, Step Functions, S3, etc.) Develop and optimize data lake and data warehouse solutions using Redshift, Athena, and related technologies Collaborate with data scientists, analysts, and business stakeholders to understand data requirements Ensure data quality, governance, and compliance with financial regulations
Design & build scalable, secure, and resilient data pipelines Own projects end-to-end, from design to production Work with modern AWS tools (Glue, Lambda, Athena, S3, Redshift, etc.) Contribute to platform strategy and mentor teammates Collaborate across engineering, product, architecture, and security What you bring: Proven experience building high