a Data Architect or similar role, with a focus on AWS cloud environments. Strong knowledge of AWS services relevant to data architecture, such as Amazon Redshift, Amazon Athena, Amazon S3, AWS Glue, and AWS Lambda. Experience designing and implementing data lakes, data warehouses, and analytics solutions.
Data Factory, Azure SQL Database, Azure Databricks, Azure Cosmos DB, Azure Synapse Analytics (formerly SQL Data Warehouse), etc. Familiarity with AWS services such as Amazon Redshift, AWS Glue, Amazon S3, Amazon RDS, Amazon EMR, etc. Basic understanding of GCP services like BigQuery, Cloud Dataflow, Cloud …
Newcastle Upon Tyne, England, United Kingdom Hybrid / WFH Options
Seriös Group
Azure Data Factory, Azure Event Hubs, Azure Data Lake Storage, Azure Function Apps, Azure Synapse Analytics, AWS Glue, AWS S3, AWS Lambda Functions, AWS Redshift, Databricks, Snowflake, Google BigQuery, Alteryx, SSIS, Informatica. Understanding of data warehouse and data lake principles. Demonstrable data modelling capabilities using either Kimball or Inmon …
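The Kimball-style dimensional modelling mentioned above centres on a star schema: one fact table of measurements joined to denormalised dimension tables. A minimal sketch using only the Python standard library (all table and column names here are hypothetical examples, not from any posting):

```python
import sqlite3

# Minimal Kimball-style star schema: a fact table surrounded by
# dimension tables. Names (fact_sales, dim_date, dim_product) are
# illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,
    full_date TEXT, year INTEGER, month INTEGER
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    product_name TEXT, category TEXT
);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity INTEGER, revenue REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', 2024, 1)")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (20240101, 1, 3, 29.97)")

# Typical analytical query: join the fact to a dimension and aggregate.
row = conn.execute("""
    SELECT p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.category
""").fetchone()
print(row)  # ('Hardware', 29.97)
```

The same star-schema shape carries over directly to warehouse platforms such as Redshift, Snowflake or BigQuery; only the DDL dialect changes.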
platforms. We specialise in using the latest frameworks, reference architectures and technologies across AWS, Azure and GCP. Essential Skills and Experience: * AWS (e.g., Athena, Redshift, Glue, EMR) * Strong AWS Data Solution Architect experience on data-related projects * Java, Scala, Python, Spark, SQL * Experience of developing enterprise-grade ETL/…
building large-scale DW/BI systems for B2B SaaS companies · Experience with open-source tools like Apache Flink and AWS tools like S3, Redshift, EMR and RDS. · Experience with AI/Machine Learning and Predictive Analytics · Experience in developing global products will be a big plus. · Understanding of …
These: Exposure to, and ideally experience with, modern data architectures (e.g. data lake, lakehouse, data mesh) and accompanying technologies (e.g. Azure Synapse, Snowflake, Amazon Redshift) Awareness of data governance and all surrounding legislation Interest in security and how to handle PII data Excellent written and verbal communication …
cloud hosting provider (AWS (preferred), Google Cloud, Azure or similar). Experience using modern build tools such as Maven, Jenkins, GitHub, etc. Experience with Amazon Web Services a strong plus - CloudFormation, EMR, S3, EC2, Athena etc. Experience with scheduling services such as Airflow, Oozie. Experience with data ETL and …
experience with data orchestration tools: e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies: e.g. DBT, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL). Knowledge of event-driven architectures and streaming technologies: e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public …
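The event-driven pattern named above (Kafka-style topics consumed in arrival order, with a streaming aggregation downstream) can be illustrated with a toy in-memory sketch. This is a simplified model for illustration only, not the Kafka or Kafka Streams API:

```python
from collections import defaultdict, deque

class MiniBroker:
    """Toy event broker: named topics, append-only queues, polling consumers.

    A stand-in for a single-partition Kafka topic; real brokers add
    persistence, partitioning, offsets and consumer groups.
    """
    def __init__(self):
        self.topics = defaultdict(deque)

    def produce(self, topic, event):
        self.topics[topic].append(event)

    def consume(self, topic):
        """Yield events in arrival order until the topic is drained."""
        while self.topics[topic]:
            yield self.topics[topic].popleft()

broker = MiniBroker()
broker.produce("orders", {"order_id": 1, "amount": 40.0})
broker.produce("orders", {"order_id": 2, "amount": 60.0})

# A downstream aggregation, analogous to a simple streaming fold.
total = sum(e["amount"] for e in broker.consume("orders"))
print(total)  # 100.0
```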
Better Placed Ltd - A Sunday Times Top 10 Employer in 2023!
or similar role. Proficiency in SQL, Python, and ETL tools. Solid understanding of data modeling and database design. Familiarity with data warehousing technologies (e.g., Redshift, Snowflake). Strong analytical and problem-solving skills. Experience with cloud platforms (AWS, Azure, or GCP) and big data frameworks (Hadoop, Spark) is a plus.
Apache Flink, or Hadoop. > Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure. > Experience with data warehousing technologies such as Redshift, BigQuery, or Snowflake. > Knowledge of database systems such as MySQL, PostgreSQL, or MongoDB. > Excellent problem-solving skills and attention to detail. > Strong communication and …
and reporting. Specialised in AWS cloud technologies for ETL, data warehouse, and data lake design. Hands-on experience with AWS services like EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. Capable of processing large volumes of structured and unstructured data on AWS. Familiarity with AWS best practices in data engineering, data …
pipelines (e.g., Beam, Flink, etc.) Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies such as BigQuery, Azure Synapse, Redshift, Oracle, Teradata, and more. Familiarity with Agile methodologies, particularly Scrum, to support dynamic project management. Valuable contributions to open-source projects, reflecting your commitment …
solutions. Strong software engineering experience with Python. Experience working on production-level microservices (Docker/Kubernetes) and cloud infrastructure. Experience working with databases; Postgres, Redshift, Neo4j would be a plus. Skills: Computer Science or Software Engineering degree/BSc or higher. Experience leading a Machine Learning/MLOps team.
Strong experience of Python. Deep knowledge of NLP libraries such as Hugging Face and spaCy. AWS: ETL processes (Glue, Lambda etc.), ECS, S3, database solutions (Redshift, RDS). Vector databases a big plus. Familiar with graph data structures and algorithms. Benefits: Competitive salary and benefits package, including health insurance, life insurance …
database structure systems, data mining, data analysis, and strong software engineering skills. Strong understanding of Data Engineering. Proficiency in AWS, data warehousing (Snowflake, Databricks, Redshift), big data frameworks (Spark, Kafka), container orchestration platforms (Kubernetes), and data integration/ETL tools. Strong written and verbal communication skills, with the ability …
South East London, London, United Kingdom Hybrid / WFH Options
Stepstone UK
able to lend hands-on support to the team working on and using the following technologies: Data Warehousing, Data Modelling, Database Design, ETL, AWS Redshift, SQL Server, Power BI, Cloud/DevOps, AWS, Docker, Terraform, Bitbucket, Bamboo, TeamCity, Octopus CI/CD deploy pipeline, Agile, pair programming, code …
SQL Agile/Lean/SAFe methodology experience Financial/collections industry experience Experience with cloud/AWS-based solutions (e.g. RDS, Lambda, Snowflake, Redshift or S3) Desire to learn through training and self-discovery on a variety of emerging technologies External applicants will be required to perform a …
using Python and the AWS Cloud Development Kit. You'll use a variety of services such as SNS, SQS and Lambda for event streams, Amazon Redshift, S3 and AWS Glue for data storage & reporting, and Amazon SageMaker for our pioneering machine learning products. We are a growing … communication skills to work with our colleagues across ECS to help them use our services. Python and AWS services such as S3, Kinesis, Lambda, Redshift, DynamoDB, Glue and SageMaker. Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation). Data processing frameworks such as pandas, Spark …
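A Lambda function sitting behind SQS in an event-stream stack like the one described above typically follows this shape. The record structure below is a simplified assumption (the real SQS event carries additional metadata), and the `amount` field is a hypothetical example:

```python
import json

def handler(event, context):
    """Hypothetical AWS Lambda entry point for an SQS-triggered function.

    event["Records"] holds one entry per queued message; each message
    body is assumed here to be a JSON document with an "amount" field.
    """
    total = 0.0
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])
        total += body.get("amount", 0.0)
    # A real function would write results to Redshift or S3; returning
    # the aggregate keeps this sketch easy to invoke and test locally.
    return {"processed": len(records), "total": total}

# Local invocation with a fabricated event payload.
fake_event = {"Records": [{"body": json.dumps({"amount": 12.5})},
                          {"body": json.dumps({"amount": 7.5})}]}
print(handler(fake_event, None))  # {'processed': 2, 'total': 20.0}
```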
skills Containerisation experience (Docker, Kubernetes) Cloud computing experience (GCP/AWS/Azure) Strong preference for a Snowflake background, but open to BigQuery or Redshift. Unfortunately, sponsorship cannot be provided for this role. If you are interested, please apply here or reach out to morgan.beck@xcede.com for more information.
useful. Engineer with past experience with Java, Data, and Infrastructure (DevOps); Java is a key skill. Programming: Java, Python, PySpark. Storage mechanisms: MongoDB, Redshift, AWS S3. Cloud environments/infra: AWS (required), [AWS Lambda, Terraform] (nice to have). Data platforms: creating data pipelines within Databricks or equivalent such …
transform and load the datasets. Your Profile: Key skills/knowledge/experience: Good understanding of AWS cloud data platform services: EC2, EMR, RDS, Redshift, Glue. Ability to work with object-oriented scripting languages: Python, PySpark. Knowledge of data pipelines and workflow management and their tools, such as Airflow.
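The extract/transform/load workflow described above can be sketched in pure standard-library Python. In an AWS pipeline the extract step would read from S3 and the load step would target Redshift, but the three-stage shape is the same; the CSV data and table name here are fabricated for illustration:

```python
import csv
import io
import sqlite3

# Extract: parse CSV rows into dicts (stand-in for an S3 object read).
raw = io.StringIO("id,amount\n1,10.0\n2,not_a_number\n3,5.5\n")
rows = list(csv.DictReader(raw))

# Transform: coerce types and drop malformed records.
clean = []
for r in rows:
    try:
        clean.append((int(r["id"]), float(r["amount"])))
    except ValueError:
        continue  # a real pipeline would quarantine and log this row

# Load: write the cleaned rows into a warehouse-style table
# (sqlite3 standing in for Redshift).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?)", clean)
count, total = db.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(count, total)  # 2 15.5
```

In practice each stage would be a separate task in an orchestrator such as Airflow, so failures can be retried per stage rather than rerunning the whole pipeline.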
Data modelling principles & best practices; prior experience leading a data engineering team is also required. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - DBT (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL. This is a …