teams to understand business objectives, identify data requirements, and define analytics goals. Develop and implement data analysis strategies using AWS analytics services, such as Amazon Redshift, Amazon Athena, Amazon EMR, and Amazon QuickSight. Design and build robust data pipelines and ETL processes to extract, transform … techniques to perform predictive and prescriptive analyses, clustering, segmentation, and pattern recognition. Identify key metrics, develop meaningful KPIs, and build dashboards and visualisations using Amazon QuickSight to enable data-driven decision-making. Conduct exploratory data analysis to uncover trends, patterns, and insights that inform product enhancements, user behaviour, and … 2. Proven experience as a Data Scientist, preferably in a cloud-based environment using AWS analytics services. 3. Strong proficiency in AWS analytics services, such as Amazon Redshift, Amazon Athena, Amazon EMR, and Amazon QuickSight. 4. Solid understanding of data modelling, ETL processes, and data warehousing concepts. 5. Proficiency …
server) Data Integration Tools: Knowledge of platforms like Airflow, Apache NiFi, or Talend. Data Storage and Modelling: Experience with data warehousing tools (e.g. Snowflake, Redshift, BigQuery) and schema design. Version Control and CI/CD: Familiarity with Git, Docker, and CI/CD pipelines for deployment. Experience 2+ years …
of a wide range of data sources, SQL, NoSQL and Graph. - A proven track record of infrastructure delivery on any data platform (Snowflake, Elastic, Redshift, Databricks, Splunk, etc.). - Strong and demonstrable experience writing regular expressions and/or JSON parsing, etc. - Strong experience in log processing (Cribl …
Glasgow, Scotland, United Kingdom Hybrid / WFH Options
iO Associates - UK/EU
and optimise Kafka clusters for real-time data streaming. Oversee and maintain containerized workloads using EKS (Kubernetes on AWS). Support data infrastructure, including Amazon Redshift for analytics and storage. Implement security best practices and compliance controls within cloud environments. Collaborate with development, security, and operations teams to … such as Sentry and Grafana). Design procedures for system troubleshooting and maintenance. Required Skills & Experience Strong experience with AWS services, particularly EKS, Kafka, Redshift, and CloudWatch. Hands-on experience with Kubernetes and container orchestration. Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, or similar) …
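As a rough illustration of the real-time streaming side of roles like this, the sketch below shows a minimal Kafka consumer in Python using the kafka-python package; the topic name, broker address, and consumer group are placeholder assumptions rather than details from the posting.

```python
# Minimal Kafka consumer sketch; topic, broker, and group names are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                              # assumed topic name
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="analytics-consumers",        # assumed consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # Each record could be forwarded to downstream storage such as Redshift.
    print(message.topic, message.partition, message.offset, message.value)
```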
platforms and cloud data solutions. Hands-on experience designing data solutions on Azure (e.g., Azure Data Lake, Synapse, Data Factory) and AWS (e.g., Redshift, Glue, S3, Lambda). Strong background in data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with …
solutions to support business growth. Responsibilities: Work with teams to understand objectives and define analytics goals. Develop data analysis strategies using AWS services like Amazon Redshift, Athena, EMR, and QuickSight. Design data pipelines and ETL processes for data extraction and transformation. Apply statistical and machine learning techniques for predictive analysis, clustering, and pattern recognition. Develop KPIs and dashboards using Amazon QuickSight for decision-making. Perform exploratory data analysis to identify trends and insights. Collaborate on data architecture, quality, and governance in AWS. Requirements: Bachelor's or master's in Computer Science, Statistics, Mathematics, or related fields. Experience as a Data Scientist, preferably with AWS analytics services. Proficiency in AWS tools like Redshift, Athena, EMR, QuickSight. Understanding of data modeling, ETL, and data warehousing. Skills in statistical analysis, machine learning, and programming languages such as Python, R, or Scala. Experience with SQL, NoSQL, and data visualization tools. …
and support business growth. Responsibilities Collaborate with teams to understand business objectives and define analytics goals. Develop data analysis strategies using AWS services like Amazon Redshift, Athena, EMR, and QuickSight. Design and build data pipelines and ETL processes from diverse sources into AWS. Apply statistical and machine learning techniques for predictive and segmentation analyses. Create dashboards and KPIs with Amazon QuickSight for decision-making. Perform exploratory data analysis to identify trends and insights. Work with data engineers to optimize data architecture, quality, and governance in AWS. Requirements Bachelor's or master's degree in Computer Science, Statistics, Mathematics, or related field. Experience as a Data Scientist in a cloud environment, preferably with AWS. Proficiency in AWS analytics tools such as Redshift, Athena, EMR, QuickSight. Understanding of data modeling, ETL, and data warehousing concepts. Skills in statistical analysis, data mining, and machine learning. Proficiency in Python, R …
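To give a concrete flavour of the Athena-based analysis mentioned in the listings above, here is a minimal boto3 sketch that runs an ad-hoc query and prints the results; the database name, table, and S3 output bucket are hypothetical placeholders, not details from the roles.

```python
# Minimal sketch of running an ad-hoc Athena query with boto3; the database,
# table, and S3 output location below are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

response = athena.start_query_execution(
    QueryString="SELECT customer_id, COUNT(*) AS orders FROM sales GROUP BY customer_id",
    QueryExecutionContext={"Database": "analytics"},                      # assumed database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # assumed bucket
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```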
Good-to-Have) · Familiarity with DevOps tools (CI/CD, GitHub). · Experience working in Linux environments. · Exposure to AWS technologies (e.g., S3, Lambda, Redshift). · Background in the financial/investment industry (a plus).
platforms Experience in writing efficient SQL, implementing complex ETL transformations on big data platforms. Experience in Big Data technologies (Spark, Impala, Hive, Redshift, Kafka, etc.). Experience in data quality testing; adept at writing test cases and scripts, presenting and resolving data issues. Experience with Databricks, Snowflake, Iceberg …
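For the Spark-based ETL work described above, a minimal PySpark sketch of an extract-transform-load step might look like the following; the S3 paths, column names, and aggregation logic are illustrative assumptions only.

```python
# Minimal ETL-style transformation in PySpark; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

# Extract: read raw event data (assumed S3 location and schema).
raw = spark.read.parquet("s3a://example-raw-bucket/orders/")

# Transform: deduplicate, filter invalid rows, and aggregate per day.
daily = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

# Load: write partitioned output for downstream warehousing.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-curated-bucket/daily_orders/"
)

spark.stop()
```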
Glasgow, Scotland, United Kingdom Hybrid / WFH Options
Capgemini
to deliver business and customer outcomes. AWS Data Product Development: Lead the development of cloud-native data products using AWS services such as S3, Redshift, Glue, Lambda, and DynamoDB, aligning with business needs. Technical Leadership & Advisory: Act as a trusted AWS expert, advising clients on cloud migration, data strategy … experience in designing and implementing cloud-based data architectures, ETL pipelines, and cloud-native applications, using services such as AWS EC2, S3, Lambda, Glue, Redshift, RDS, IAM, and KMS. Leadership & Stakeholder Management – Ability to engage with C-level executives (CDOs, CTOs, CIOs), lead cross-functional teams, and drive technical …
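As one possible illustration of the S3/Glue/Lambda pattern referenced above, the sketch below shows a Lambda handler that starts a Glue job when a new object arrives in S3; the Glue job name, argument names, and event wiring are assumptions, not details from the engagement.

```python
# Minimal sketch: a Lambda handler that starts a Glue ETL job whenever a new
# object lands in S3. The Glue job name below is a hypothetical placeholder.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 event notifications carry the bucket and object key of the new file.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Pass the new object location to the Glue job as a job argument.
    response = glue.start_job_run(
        JobName="curate-raw-orders",                      # assumed Glue job name
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"job_run_id": response["JobRunId"]}
```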
use cases. Develop data models and Data Lake designs around stated use cases to capture KPIs and data transformations. Identify relevant AWS services (Amazon EMR, Redshift, Athena, Glue, Lambda) to design an architecture that can support client workloads/use cases; evaluate pros/cons among the …
optimisation of these. Their ideal candidate would have 10+ years' experience in Data Engineering/Architecture and good knowledge of: Data Warehousing (Snowflake, Redshift, BigQuery) ETL (Data Fabric, Data Mesh) DevOps (IaC, CI/CD, Containers) Leadership/Line Management Consulting/Client-Facing Experience In return they …
building high-throughput backend systems Experience with BI/reporting engines or OLAP stores Deep Ruby/Rails & ActiveRecord expertise Exposure to ClickHouse/Redshift/BigQuery Event-driven or stream processing (Kafka, Kinesis) Familiarity with data-viz pipelines (we use Highcharts.js) AWS production experience (EC2, RDS, IAM, VPC …
with AI/ML approaches and frameworks such as RAG, LangChain, TensorFlow, and PyTorch. Exposure to LLMs from model families such as Anthropic, Meta, Amazon, and OpenAI. Familiarity with tools and packages like Pandas, NumPy, scikit-learn, Plotly/Matplotlib, and Jupyter Notebooks. Knowledge of ML-adjacent technologies, including … We implement GraphQL and RESTful APIs using NodeJS and Python. Our backend services are implemented in C#/.NET or TypeScript/NodeJS. DynamoDB, Redshift, Postgres, Elasticsearch, and S3 are our go-to data stores. We run our ETL data pipelines using Python. Equal Opportunities: We are an equal …
drive customer loyalty? Expect to deep dive into customer segmentation, lifetime value, cohort behaviour, and funnel performance, all powered by a strong modern stack: Redshift, SQL, Python, and Looker. You'll thrive in this role if you: Love solving business problems with data Have strong SQL and some …
analysis, validation, and documentation Build automation to reduce dependencies on manual data pulls, etc. Minimum Requirements: 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, QuickSight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience … a database or data warehouse and scripting experience (Python) to process data for modeling Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to …
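To illustrate the kind of scripted data pull this role describes, here is a minimal sketch that loads a Redshift query result into a pandas DataFrame via psycopg2; the cluster endpoint, credentials, table, and query are placeholders, not real values.

```python
# Minimal sketch of pulling a dataset from Redshift into pandas for analysis;
# connection details and the query are placeholders, not real credentials.
import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-2.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439,
    dbname="analytics",
    user="analyst",
    password="********",
)

query = """
    SELECT order_date, SUM(revenue) AS revenue
    FROM fact_orders
    GROUP BY order_date
    ORDER BY order_date
"""

df = pd.read_sql(query, conn)   # load query results into a DataFrame
conn.close()

print(df.describe())            # quick profiling before modelling or visualisation
```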
pipelines, orchestration and modelling. Lead the team in building and maintaining robust data pipelines, data models, and infrastructure using tools such as Airflow, AWS Redshift, DBT and Looker. Ensure the team follows agile methodologies to improve delivery cadence and responsiveness. Contribute to hands-on coding, particularly in areas requiring architectural … to foster team growth and development Strong understanding of the data engineering lifecycle, from ingestion to consumption Hands-on experience with our data stack (Redshift, Airflow, Python, DBT, MongoDB, AWS, Looker, Docker) Understanding of data modelling, transformation, and orchestration best practices Experience delivering both internal analytics platforms and external …
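As a sketch of the Airflow orchestration mentioned in this listing, the example below wires a simple extract task into a load task; the DAG name, schedule, and task bodies are illustrative assumptions (and the `schedule` argument assumes Airflow 2.4 or newer).

```python
# Minimal Airflow DAG sketch wiring an extract step into a load step;
# task logic and schedule are illustrative assumptions only.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull data from a source system (e.g. an API or S3 object).
    print("extracting source data")

def load():
    # Placeholder: load transformed data into the warehouse (e.g. a Redshift COPY).
    print("loading into warehouse")

with DAG(
    dag_id="daily_orders_pipeline",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task              # simple linear dependency
```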