Milton Keynes, Buckinghamshire, England, United Kingdom Hybrid / WFH Options
Oscar Technology
for clients. Present findings and recommendations to senior client stakeholders. Tech Stack You'll Work With: BI Tools: Power BI, Tableau, Looker, Google Data Studio; Data Warehousing: Snowflake, BigQuery, Redshift, SQL Server; Languages: SQL, DAX, Python (Pandas, NumPy, Scikit-learn desirable); CRM Systems: Salesforce, HubSpot, Dynamics 365, Zoho; ETL Tools: dbt, Fivetran, Talend, Alteryx; AI/ML Frameworks (Desirable …
both at the Board/Executive level and at the business unit level. Key Responsibilities: Design, develop, and maintain scalable ETL pipelines using technologies like dbt, Airbyte, Cube, DuckDB, Redshift, and Superset. Work closely with stakeholders across the company to gather data requirements and set up dashboards. Promote a data-driven culture at Notabene and train and upskill power users across …
technologies (e.g., Hadoop, Spark, Kafka). Proficiency in cloud platforms such as AWS, Azure, or Google Cloud and cloud-based data services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery). Experience with DataOps practices and tools, including CI/CD for data pipelines. Excellent leadership, communication, and interpersonal skills …
similar. Proficient in writing and maintaining bash scripts. Experience writing concise and illustrative documentation. Experience with Microsoft Azure and Google Cloud. Experience with Data Engineering and Analytics products such as Snowflake, Redshift, Google Analytics, Segment, ELK Stack. Qualifications: Bachelor's degree in computer science or equivalent experience combined with theoretical knowledge. What's in it For You? Flexibility & Work-Life Balance …
Leading and mentoring practice teams. Skills & Experience Required: 13-18 years of overall Data and Analytics experience. At least 10 years in the AWS data platform, including AWS S3, Glue, Redshift, Athena, SageMaker, QuickSight, and MLOps. Expertise in Snowflake DWH architecture, Snowpipe, Data Sharing, Polaris catalog, and data governance. Knowledge of additional technologies such as Python, Streamlit, Matillion, dbt, Atlan …
of data architecture principles and how these can be practically applied. Experience with Python or other scripting languages. Good working knowledge of the AWS data management stack (RDS, S3, Redshift, Athena, Glue, QuickSight) or the Google data management stack (Cloud Storage, Airflow, BigQuery, Dataplex, Looker). About Our Process: We can be flexible with the structure of our interview process …
a strong team player. A quick learner, able to pick up new skills and technologies easily, who enjoys working on complex issues. Experienced in wealth management - beneficial. Experience with Redshift, dbt, Python, and Google Looker - beneficial. This role isn't for you if: You rely on a lot of top-down direction. Here, you'll have a lot of …
Wilmslow, England, United Kingdom Hybrid / WFH Options
The Citation Group
stack with robust, pragmatic solutions. Responsibilities: Develop and maintain ETL/ELT data pipelines using AWS services, Databricks, and dbt. Manage and optimize data storage solutions such as Amazon S3, Redshift, RDS, and DynamoDB. Implement and manage infrastructure-as-code (IaC) using tools like Terraform or AWS CloudFormation. Monitor and optimize the performance, cost, and scalability of …
prepare to adopt Apache Iceberg for greater performance and flexibility. You'll address high-performance data workloads, ensuring seamless execution of massive queries, including 600+ billion-row queries in Redshift, by designing and maintaining robust, scalable infrastructure. Our flat structure means you will directly contribute to our strategy while taking ownership of diverse projects utilizing the latest technologies. What …
Cardiff, South Glamorgan, Wales, United Kingdom Hybrid / WFH Options
Octad Recruitment Consultants (Octad Ltd)
IaaS/PaaS), including Infra as Code. Strong SQL skills and proficiency in Python or PySpark. Built or maintained data lakes/warehouses using Synapse, Fabric, Databricks, Snowflake, or Redshift. Experience hardening cloud environments (NSGs, identity, Defender). Demonstrated automation of backups, CI/CD deployments, or DR workflows. Nice-to-Haves: Experience with Azure OpenAI, vector databases …
with data scientists and analysts on several projects. Requirements: Knowledge of programming languages (e.g. Java, Python). Hands-on experience with relational databases (like PostgreSQL) and data warehouses (like AWS Redshift). Familiar with data modeling and data governance concepts, and agile methodologies. Familiarity with industry toolsets: Stitch, DMS, etc. Statistics knowledge, analytical skills, and an understanding of big data technologies …
Broad experience in AWS Cloud technology including management and governance tools (e.g. Lambda, Auto Scaling, VPC, EC2, KMS, IAM, CloudFormation, CloudWatch, CloudTrail, S3, DynamoDB, RDS, Glue, Athena, Lake Formation, Redshift). Experience supporting analytics use cases from an MLOps and data hydration perspective. Ability to drive projects technically to completion, identify risks and costs, and challenge architecture and long-term sustainability …
if you have: Expertise in Cloud-Native Data Engineering: 3+ years building and running data pipelines in AWS or Azure, including managed data services (e.g., Kinesis, EMR/Databricks, Redshift, Glue, Azure Data Lake). Programming Mastery: Advanced skills in Python or another major language; writing clean, testable, production-grade ETL code at scale. Modern Data Pipelines: Experience with …
or another Data Science based scripting language. Demonstrated experience and responsibility with data, processes, and building ETL pipelines. Experience with cloud data warehouses such as Snowflake, Azure Data Warehouse, Amazon Redshift, and Google BigQuery. Building visualizations using Power BI or Tableau. Experience in designing ETL/ELT solutions, preferably using tools like SSIS, Alteryx, AWS Glue, Databricks, IBM …
Proficiency in data analysis using Pandas, NumPy, SciPy, etc. Experience with object-oriented design, distributed systems architecture, and performance tuning. Experience with designing and programming relational databases such as MySQL, Redshift, Oracle, SQL Server, or Postgres. Experience with AWS-based system architecture covering S3, EKS, EC2, Batch, or Airflow. Experience with caching and messaging technologies such as Redis, Hazelcast …
/BI Engineering experience. Excellent SQL skills. Understanding of data warehousing, data modelling concepts, and structuring new data tables. Knowledge of cloud-based MPP data warehousing (e.g. Snowflake, BigQuery, Redshift). Nice to have: Experience developing in a BI tool (Looker or similar). Good practical understanding of version control. SQL ETL/ELT knowledge, experience with DAGs to manage script …
such as: Hadoop, Kafka, Apache Spark, Apache Flink, and object, relational, and NoSQL data stores. Hands-on experience with big data application development and cloud data warehousing (e.g. Hadoop, Spark, Redshift, Snowflake, GCP BigQuery). Expertise in building data architectures that support batch and streaming paradigms. Experience with standards such as JSON, XML, YAML, Avro, Parquet. Strong communication skills. Open to …
Python and related ML libraries. Strong background in applied machine learning, model development, and data engineering. Experience with cloud environments (Azure, AWS, GCP) and tools such as Spark, Hive, Redshift. Demonstrated ability to lead cross-functional teams and mentor junior practitioners. Ability to communicate complex technical concepts clearly to non-technical audiences. Bonus Points For: Participation in Kaggle or …
London, South East, England, United Kingdom Hybrid / WFH Options
Awin
acceptable). Working knowledge of SQL and data modelling concepts. Experience with BI tools (e.g., Power BI, Looker, Tableau). Familiarity with cloud data platforms such as Snowflake, BigQuery, or AWS Redshift. Understanding of modern data architecture and APIs. Our Offer: Flexi-Week and Work-Life Balance: We prioritise your mental health and wellbeing, offering you a flexible four-day Flexi …
Portsmouth, Hampshire, England, United Kingdom Hybrid / WFH Options
Computappoint
Mentor junior consultants. Build strong customer relationships. Support managed services as required. Essential Requirements: 3+ years consulting/managed services experience. Solid Data Lake experience with AWS data solutions (Redshift, Glue, Athena, Lake Formation). Advanced AWS skills (EC2, S3, VPC, IAM, Lambda). Infrastructure as Code experience (CloudFormation/Terraform). AWS Solutions Architect Associate certification. Strong communication and presentation abilities …
CSS, Bootstrap/Reactstrap, and JavaScript frameworks such as React. Experience with data warehousing technologies. Familiarity with cloud platforms including AWS and/or Google Cloud Platform (e.g., BigQuery, Redshift). Exceptional written and verbal communication skills, including the ability to convey technical concepts to non-technical stakeholders. Proficiency in technical writing and editing using Microsoft Office tools (Excel, Word …
data-focused SRE, Data Platform, or DevOps role. Strong knowledge of Apache Flink, Kafka, and Python in production environments. Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.). Comfortable with monitoring tools, distributed systems debugging, and incident response. Reference Number: BBBH259303. To apply for this role or to be considered for further roles, please click …
City of London, London, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response. Reference Number: BBBH259303. To apply for this role or to be considered for further roles, please click …
London, South East, England, United Kingdom Hybrid / WFH Options
Rise Technical Recruitment Limited
data-focused SRE, Data Platform, or DevOps role *Strong knowledge of Apache Flink, Kafka, and Python in production environments *Hands-on experience with AWS (Lambda, EMR, Step Functions, Redshift, etc.) *Comfortable with monitoring tools, distributed systems debugging, and incident response. Reference Number: BBBH259303. To apply for this role or to be considered for further roles, please click …