Liverpool, England, United Kingdom Hybrid / WFH Options
Intuita Consulting
dbt or Google Cloud Platform or related technologies.
• Experience with other cloud platforms (e.g. AWS, Azure, Snowflake) and data warehouse/lakehouse technologies (e.g. Redshift, Databricks, Synapse).
• Knowledge of distributed big data technologies.
• Proficiency in Python.
• Familiarity with data governance and compliance frameworks.
Your characteristics as a Consultant will …
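For the GCP/Python side of this stack, here is a minimal illustrative sketch of querying BigQuery with the google-cloud-bigquery client; the project, dataset, table and column names are assumptions, not taken from the listing.

```python
# Illustrative sketch only: parameterised BigQuery query from Python.
# Project, dataset, table and columns are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT country, COUNT(*) AS customers
    FROM `example-project.analytics.customers`
    WHERE signup_date >= @since
    GROUP BY country
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("since", "DATE", "2024-01-01")]
)

for row in client.query(sql, job_config=job_config).result():
    print(row["country"], row["customers"])
```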
to detail with strong analytical and problem-solving skills. Excellent communication and stakeholder engagement skills. Experience with cloud data platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery). Knowledge of data governance frameworks and data quality management. Competence in data modelling and database design techniques. Experience working in agile …
platforms and cloud data solutions. Hands-on experience designing data solutions on Azure (e.g., Azure Data Lake, Synapse, Data Factory) and AWS (e.g., Redshift, Glue, S3, Lambda). Strong background in data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with …
large-scale datasets. Jupyter, Databricks, or notebook-based workflows for experimentation. Data Access & Engineering Collaboration: comfort working with cloud data warehouses (e.g., Snowflake, Databricks, Redshift, BigQuery); familiarity with data pipelines and orchestration tools like Airflow. Work closely with Data Engineers to ensure model-ready data and scalable pipelines. Nice …
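As a rough illustration of the orchestration piece named above, a minimal Airflow DAG sketch; the DAG id, schedule and the feature-refresh callable are hypothetical placeholders, not from the listing.

```python
# Illustrative sketch only: a daily Airflow DAG that refreshes a model-ready feature table.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def build_feature_table():
    # Placeholder: query the warehouse (Snowflake/Redshift/BigQuery) and
    # materialise a feature table for downstream models.
    print("refreshing features")

with DAG(
    dag_id="refresh_model_features",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # `schedule_interval` on Airflow < 2.4
    catchup=False,
) as dag:
    PythonOperator(
        task_id="build_feature_table",
        python_callable=build_feature_table,
    )
```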
optimisation of these. Their ideal candidate would have 10+ years' experience in Data Engineering/Architecture and good knowledge of: Data Warehousing (Snowflake, Redshift, BigQuery); ETL (Data Fabric, Data Mesh); DevOps (IaC, CI/CD, Containers); Leadership/Line Management; Consulting/Client-Facing Experience. In return they …
building high-throughput backend systems; experience with BI/reporting engines or OLAP stores; deep Ruby/Rails & ActiveRecord expertise; exposure to ClickHouse/Redshift/BigQuery; event-driven or stream processing (Kafka, Kinesis); familiarity with data-viz pipelines (we use Highcharts.js); AWS production experience (EC2, RDS, IAM, VPC …
join their team and assist with the continued scaling and optimisation of these. Their ideal candidate would have good knowledge of: Data Warehousing (Snowflake, Redshift, BigQuery); ETL (Data Fabric, Data Mesh); DevOps (IaC, CI/CD, Containers); Consulting/Client-Facing Experience. In return they would be offering Uncapped …
Liverpool, Merseyside, North West, United Kingdom Hybrid / WFH Options
Cathcart Technology
process large volumes of data using native AWS services. The main tools you will be using day to day are: Python, SQL, AWS Glue and Redshift. You: Our customer is looking for someone who is well versed in the above tools & technologies. They also need someone who understands the principles …
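A minimal sketch of what day-to-day work with that toolset might look like: a Glue PySpark script that reads from the Data Catalog, aggregates with SQL and writes Parquet to S3 for loading into Redshift. The database, table and bucket names are hypothetical.

```python
# Illustrative AWS Glue job sketch; catalog database, table and S3 bucket are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Aggregate with Spark SQL.
orders.toDF().createOrReplaceTempView("orders")
daily = glue_context.spark_session.sql(
    "SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date"
)

# Land curated Parquet in S3, e.g. for a Redshift COPY or Spectrum external table.
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_orders/")

job.commit()
```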
primary, secondary, and tertiary sources. Cloud, Infrastructure, and Platform Engineering: Develop and deploy data workflows on AWS or GCP, using services such as S3, Redshift, Pub/Sub, or BigQuery. Containerize data processing tasks using Docker, orchestrate with Kubernetes, and ensure production-grade deployment. Collaborate with platform teams to … ensure scalability, resilience, and observability of data pipelines. Database Engineering: Write and optimize complex SQL queries on relational (Redshift, PostgreSQL) and NoSQL (MongoDB) databases. Work with the ELK stack (Elasticsearch, Logstash, Kibana) for search, logging, and real-time analytics. Support Lakehouse architectures and hybrid data storage models for unified access …
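For the relational part of the Database Engineering duties above, a minimal sketch of running an analytical query against Redshift or PostgreSQL from Python with psycopg2; the endpoint, credentials and table are hypothetical.

```python
# Illustrative sketch: analytical query against Redshift/PostgreSQL via psycopg2.
# Host, credentials and table are hypothetical; use IAM/Secrets Manager in practice.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-2.redshift.amazonaws.com",
    port=5439,            # 5432 for plain PostgreSQL
    dbname="analytics",
    user="readonly_user",
    password="...",
)

query = """
    SELECT event_date, COUNT(*) AS events
    FROM web_events
    WHERE event_date >= %s
    GROUP BY event_date
    ORDER BY event_date;
"""

with conn, conn.cursor() as cur:       # commits (or rolls back) the transaction
    cur.execute(query, ("2024-01-01",))
    for event_date, events in cur.fetchall():
        print(event_date, events)

conn.close()
```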
AWS. Collaborate with engineering teams to ensure a smooth transition and system stability. Languages & Scripting: Python (primary scripting language for Lambda functions); SQL (BigQuery, Redshift); R (not essential but beneficial for interpreting existing scripts). Cloud & Infrastructure: AWS services including Lambda, API Gateway, S3, CloudWatch, Kinesis Firehose; Terraform for infrastructure as code …
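A minimal sketch of the Python-for-Lambda pattern listed above: an API Gateway-triggered handler that forwards events to a Kinesis Firehose delivery stream via boto3. The stream name and payload shape are hypothetical.

```python
# Illustrative Lambda handler sketch; delivery stream name and payload are hypothetical.
import json

import boto3

firehose = boto3.client("firehose")

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    payload = json.loads(event.get("body") or "{}")

    firehose.put_record(
        DeliveryStreamName="example-events-stream",
        Record={"Data": (json.dumps(payload) + "\n").encode("utf-8")},
    )

    return {"statusCode": 202, "body": json.dumps({"accepted": True})}
```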
platforms) and specifically in Big Data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies such as BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, and Dataflow/Airflow/ADF. Excellent consulting experience and ability … Ability to lead and mentor the architects. Mandatory Skills [at least 2 Hyperscalers]: GCP, AWS, Azure, Big Data, Apache Spark/Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF, designing Databricks-based solutions for Azure/ …
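To illustrate the "Beam on Pub/Sub and BigQuery" combination named above, a minimal streaming pipeline sketch suitable for the Dataflow runner; the topic, table and schema are hypothetical.

```python
# Illustrative Apache Beam streaming sketch: Pub/Sub -> BigQuery.
# Topic, table and schema are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/example/topics/events")
        | "Decode" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            schema="user_id:STRING,event_type:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```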
At Travelex we are developing modern data technology and data products. Data is central to the way we define and sell our foreign currency exchange products. Our relationship with our customers is deeply data-driven. The data engineering manager (DEM) …
data sources. Key Responsibilities:
- Develop interactive Power BI reports and dashboards tailored to business needs
- Connect to and model data from AWS services (S3, Redshift)
- Collaborate with internal data teams and business stakeholders to interpret requirements
- Optimise Power BI performance and ensure best practices in data governance
- Deliver documentation … where required
Skills and Experience:
- Proven expertise with Power BI, including DAX, Power Query, and data modelling
- Ideally hands-on experience with AWS (especially Redshift and S3)
- Strong SQL and data transformation skills
- Ability to work independently and manage deliverables in a fast-paced environment
- Excellent communication and stakeholder …