a Data Architect or similar role, with a focus on AWS cloud environments. Strong knowledge of AWS services relevant to data architecture, such as Amazon Redshift, Amazon Athena, Amazon S3, AWS Glue, and AWS Lambda. Experience designing and implementing data lakes, data warehouses, and analytics solutions more »
processes using AWS services such as AWS Glue, AWS Lambda, and AWS Data Pipeline. Data Storage Management : Manage and optimize data storage solutions, including Amazon S3, Amazon Redshift, and AWS RDS. Data Quality and Validation : Ensure data quality and integrity through validation and cleansing techniques. Collaboration : Work … focus on AWS and cloud technologies. Technical Skills : Proficiency in programming languages such as Python, Java, or Scala. Experience with AWS services including S3, Redshift, Glue, Lambda, and RDS. Strong SQL skills for data manipulation and querying. Familiarity with ETL tools and processes. Understanding of data modeling and database more »
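The data quality and validation duties named above can be sketched as a minimal cleansing step in plain Python. The record fields (`id`, `email`, `created_at`) and the rules are illustrative assumptions, not taken from any listing:

```python
from datetime import datetime
from typing import Optional

def cleanse_record(raw: dict) -> Optional[dict]:
    """Validate and normalise one raw record; return None when it fails a check.
    The field names and rules here are illustrative only."""
    required = ("id", "email", "created_at")
    if any(raw.get(k) in (None, "") for k in required):
        return None  # reject incomplete rows
    email = raw["email"].strip().lower()
    if "@" not in email:
        return None  # minimal format check, not full address validation
    try:
        created = datetime.fromisoformat(raw["created_at"])
    except ValueError:
        return None  # unparseable timestamp
    return {"id": int(raw["id"]), "email": email, "created_at": created.isoformat()}
```

In a real pipeline a step like this would typically run inside a Glue or Lambda job before the load into Redshift or RDS.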
platforms. We specialise in using the latest frameworks, reference architectures and technologies using AWS, Azure and GCP. Essential Skills and Experience: * AWS (e.g., Athena, Redshift, Glue, EMR) * Strong AWS Data Solution Architect Experience on Data Related Projects * Java, Scala, Python, Spark, SQL * Experience of developing enterprise grade ETL/ more »
building large scale DW/BI systems for B2B SAAS companies · Experience with open-source tools like Apache Flink and AWS tools like S3, Redshift, EMR and RDS. · Experience with AI/Machine Learning and Predictive Analytics · Experience in developing global products will be a big plus. · Understanding of more »
data integration. You should also have a good understanding of data modelling, ETL processes, and data governance. Technical Skills: Proficiency in AWS Glue, AWS Redshift, and Python. Experience with ETL processes, data integration, and data warehousing. Strong SQL skills. Experience with Big Data technologies such as Hadoop, Spark, and more »
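The ETL and data warehousing skills listed above can be illustrated with a tiny extract-transform-load pass. In this sketch `sqlite3` stands in for a warehouse such as Redshift, and the table and column names are made up:

```python
import sqlite3

def run_etl(rows, conn):
    """Minimal ETL pass: dedupe on id, normalise country codes,
    and load into a staging table. sqlite3 stands in for a real
    warehouse; table and column names are illustrative."""
    conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, country TEXT)")
    seen = set()
    for row in rows:
        if row["id"] in seen:
            continue  # transform: drop duplicate ids, first record wins
        seen.add(row["id"])
        conn.execute(
            "INSERT OR REPLACE INTO customers (id, country) VALUES (?, ?)",
            (row["id"], row["country"].strip().upper()),
        )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
```

The same shape (validate, dedupe, normalise, then a bulk load) carries over to Glue jobs writing to Redshift, with Spark DataFrames replacing the row loop.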
cloud hosting provider (AWS (preferred), Google Cloud, Azure or similar). Experience using modern build tools such as Maven, Jenkins, GitHub, etc. Experience with Amazon Web Services a strong plus - CloudFormation, EMR, S3, EC2, Athena etc. Experience with scheduling services such as Airflow, Oozie. Experience with Data ETL and more »
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Erin Associates
essential. Core skills for this Data Engineer role: Strong coding skills in Python & SQL. Excellent experience in the AWS Cloud platform, including AWS Glue, Amazon DynamoDB for NoSQL database management, Amazon Redshift for data warehousing, and AWS Lambda. Designing & developing ETL pipelines. Familiarity with big data technologies more »
experience with data orchestration tools: e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies: e.g. DBT, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL). Knowledge of event-driven architectures and streaming technologies: e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public more »
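The event-driven and streaming experience this listing asks for (Kafka Streams, Flink) centres on windowed aggregation over a keyed event stream. A toy pure-Python stand-in, not the Kafka Streams API, makes the idea concrete:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per key per tumbling window.
    Each event is a (timestamp_seconds, key) pair; this is a toy
    stand-in for a Kafka Streams / Flink windowed aggregation."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)
```

Real stream processors add what this sketch omits: out-of-order handling via watermarks, state stores, and exactly-once delivery.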
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Erin Associates
ETL and DataOps principles is essential. Core skills: Strong coding skills in Python & SQL. Excellent experience in the AWS Cloud platform, including AWS Glue, Amazon DynamoDB for NoSQL database management, Amazon Redshift for data warehousing, and AWS Lambda. Designing & developing ETL pipelines. Familiarity with big data technologies more »
Apache Flink, or Hadoop. > Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure. > Experience with data warehousing technologies such as Redshift, BigQuery, or Snowflake. > Knowledge of database systems such as MySQL, PostgreSQL, or MongoDB. > Excellent problem-solving skills and attention to detail. > Strong communication and more »
and improve existing data designs and data models, including relational table structures, using Unix Shell Scripting, Oracle, SQL Server, Azure SQL Data Warehouse, or Amazon Redshift. Support and learn the associated scheduling logic for data pipelines using a scheduling tool such as Autosys or Airflow. Read and translate logical more »
Strong experience of Python. Deep knowledge of NLP libraries such as Hugging Face and spaCy. AWS - ETL processes (Glue, Lambda etc.), ECS, S3, database solutions (Redshift, RDS). Vector databases a big plus. Familiar with graph data structures and algorithms. Benefits: Competitive salary and benefits package, including health insurance, life insurance more »
database structure systems, data mining, data analysis, and strong software engineering skills. Strong understanding of Data Engineering. Proficiency in AWS, data warehousing (Snowflake, Databricks, Redshift), big data frameworks (Spark, Kafka), container orchestration platforms (Kubernetes), and data integration/ETL tools. Strong written and verbal communication skills, with the ability more »
South East London, London, United Kingdom Hybrid / WFH Options
Stepstone UK
able to lend hands-on support to the team working on and using the following technologies: Data Warehousing, Data Modelling, Database Design, ETL, AWS Redshift, SQL Server, Power BI, Cloud/DevOps AWS, Docker, Terraform, Bitbucket, Bamboo, Team City, Octopus CI/CD deploy pipeline, Agile, pair programming, code more »
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
using Python and the AWS Cloud Development Kit. You'll use a variety of services such as SNS, SQS and Lambda for event streams, Amazon Redshift, S3 and AWS Glue for data storage & reporting and Amazon SageMaker for our pioneering machine learning products. We are a growing … the communication skills to work with our colleagues across ECS to help them use our services. Python and AWS services such as S3, Kinesis, Lambda, Redshift, DynamoDB, Glue and SageMaker. Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation). Data processing frameworks such as pandas, Spark more »
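The SQS-and-Lambda event-stream pattern mentioned above boils down to a handler that unpacks a batch of queued messages. A minimal sketch follows; the `order_id`/`amount` payload fields are invented for illustration, while the `Records`/`body` envelope matches the shape Lambda receives from an SQS trigger:

```python
import json

def handler(event, context=None):
    """Sketch of an AWS Lambda handler consuming an SQS event batch.
    The payload fields ('order_id', 'amount') are illustrative."""
    processed = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])  # each SQS record carries a JSON string body
        processed.append({"order_id": body["order_id"],
                          "amount_pence": round(body["amount"] * 100)})
    return {"statusCode": 200, "processed": len(processed), "items": processed}
```

Because the handler is a plain function, it can be unit-tested locally by passing a hand-built event dict, with no AWS account involved.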
skills Containerisation experience (Docker, Kubernetes) Cloud Computing experience (GCP/AWS/Azure) Strong preference for a Snowflake background but open to BigQuery or Redshift. Unfortunately sponsorship cannot be provided for this role. If you are interested please apply here or reach out to morgan.beck@xcede.com for more information more »
pipelines (e.g., Beam, Flink etc) Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies such as BigQuery, Azure Synapse, Redshift, Oracle, Teradata, and more. Familiarity with Agile methodologies, particularly Scrum, to support dynamic project management. Valuable contributions to open-source projects, reflecting your commitment more »
Python (no data science experience required). Experience working on production-level microservices (Docker/Kubernetes) and cloud infrastructure. Experience working with databases; Postgres, Redshift, or Neo4j would be a plus. Why You Should Join: The position will be at the intersection of data science and development operations. The candidate more »
Data modelling Principles & best practices and also needs prior experience leading a data engineering team. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - DBT (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL This is a more »
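Pipeline orchestration with Apache Airflow, listed above, amounts to resolving a DAG of task dependencies into a valid run order. The idea can be shown with the standard library's `graphlib` rather than Airflow itself; the task names are made up:

```python
from graphlib import TopologicalSorter

def execution_order(dependencies):
    """Resolve a run order for pipeline tasks from their upstream
    dependencies - the same idea an orchestrator like Airflow applies
    to a DAG. `dependencies` maps task -> set of tasks it depends on."""
    return list(TopologicalSorter(dependencies).static_order())
```

An orchestrator layers scheduling, retries, and backfills on top of this ordering, but the dependency resolution itself is exactly a topological sort.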
would also be useful. You will be an Engineer with past experience with Java, data, and infrastructure (DevOps). Java, Python, PySpark Mechanisms: MongoDB, Redshift, AWS S3 Environments/Infra: AWS (required), [AWS Lambda, Terraform] (nice to have) Platforms: Creating data pipelines within Databricks or equivalent such as Jupyter more »
Swansea, Wales, United Kingdom Hybrid / WFH Options
BJSS
involved in a variety of projects in the cloud (AWS, Azure, GCP), learning about and using data services such as Databricks, Data Factory, Synapse, Kafka, Redshift, Glue, Athena, BigQuery, S3, Cloud Data Fusion etc. About You You're an engineer at heart and enjoy the challenge of building reliable, efficient more »
Cardiff, Wales, United Kingdom Hybrid / WFH Options
BJSS
involved in a variety of projects in the cloud (AWS, Azure, GCP), learning about and using data services such as Databricks, Data Factory, Synapse, Kafka, Redshift, Glue, Athena, BigQuery, S3, Cloud Data Fusion etc. About You You're an engineer at heart and enjoy the challenge of building reliable, efficient more »
Nottingham, England, United Kingdom Hybrid / WFH Options
BJSS
involved in a variety of projects in the cloud (AWS, Azure, GCP), learning about and using data services such as Databricks, Data Factory, Synapse, Kafka, Redshift, Glue, Athena, BigQuery, S3, Cloud Data Fusion etc. About You You're an engineer at heart and enjoy the challenge of building reliable, efficient more »