and problem-solving, ideal for someone who wants to work in a fast-moving product environment with plenty of autonomy.

The Role
- Design, build and maintain AWS infrastructure (EC2, S3, Lambda, SQS, Glacier, Elastic IPs)
- Deploy and manage Docker/Kubernetes clusters for scalable applications
- Build queue-based pipelines using RabbitMQ and AWS SQS
- Implement IaC with Terraform and …
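Queue-based pipelines like the one described decouple producers from consumers: one side enqueues messages, the other drains and processes them. A minimal sketch of the pattern in Python, using an in-memory `queue.Queue` to stand in for SQS (no AWS calls are made, and the record shape is invented for illustration):

```python
import json
import queue

# In-memory FIFO queue standing in for an SQS queue (no AWS calls).
work_queue: "queue.Queue[str]" = queue.Queue()

def producer(records):
    """Enqueue each record as a JSON message, as a producer
    would with sqs.send_message in a real pipeline."""
    for record in records:
        work_queue.put(json.dumps(record))

def consumer():
    """Drain the queue and process each message; returning without
    re-queuing plays the role of deleting the message."""
    processed = []
    while not work_queue.empty():
        body = json.loads(work_queue.get())
        processed.append(body["id"])
        work_queue.task_done()
    return processed

producer([{"id": 1}, {"id": 2}, {"id": 3}])
result = consumer()
print(result)  # [1, 2, 3]
```

In a deployment against real SQS, the same structure maps onto boto3's `send_message` and `receive_message`/`delete_message` calls, with the consumer typically long-polling in a loop rather than checking `empty()`.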
to align migration processes with organisational goals and regulatory standards. Proficiency in AWS ETL technologies, including Glue, DataSync, DMS, Step Functions, Redshift, DynamoDB, Athena, Lambda, RDS, EC2, S3 data lakes, and CloudWatch, for building and optimising ETL pipelines and data migration workflows. Working knowledge of Azure data engineering tools, including ADF (Azure Data Factory), Azure DB, Azure Synapse, Azure …
NoSQL) to AWS cloud. Proven experience in on-prem to AWS data migration using AWS-native tools such as DMS, Glue, DataSync, Step Functions, Redshift, Athena, Lambda, RDS, EC2, S3 Data Lake, and CloudWatch. Strong knowledge of data extraction, transformation, and loading (ETL) processes, leveraging tools such as Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi, or scripting languages (Python, PySpark …
company seeking to hire a Data Engineering Manager to play a key role in their data operations and business intelligence initiatives.

Key Responsibilities:
- Design & maintain AWS BI infrastructure (Redshift, S3, EMR) using Terraform and IaC best practices.
- Develop CI/CD pipelines (Jenkins, GitHub Actions) to automate ETL and Power BI code deployments.
- Manage environments (Dev, QA, UAT) and … AWS services.
- Monitor performance & costs in AWS, driving optimisation and efficiency.
- Champion automation & innovation through new tools, frameworks, and cloud-native solutions.

Key Skills:
- AWS Cloud: Expert in Redshift, S3, Lambda, EMR, and IaC (Terraform/CloudFormation); strong understanding of big data architecture and performance optimisation.
- CI/CD & Automation: Skilled in Jenkins, GitHub Actions, and Python scripting for …
retail environment. Share our values of being: Wise, Focused, Genuine, Eager, Together.

- Proficient in SQL and Python, with experience in ETL workflows
- Experience with cloud-based data environments (AWS: S3, Lambda, ECS)
- Bachelor’s degree, equivalent qualification, or equivalent experience in Data Science, Computer Science, Statistics, or a related field
- Effective communication (written and oral), with attention to …
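The SQL-plus-Python ETL work these roles describe follows one basic shape: extract raw rows, transform them into clean typed records, load them into a queryable store. A minimal sketch in Python, with an in-memory SQLite table standing in for the warehouse (the rows, schema, and table name are invented for illustration; a real pipeline would extract from S3 rather than a literal list):

```python
import sqlite3

# Extract: raw source rows with messy strings and text-typed prices.
raw_rows = [("2024-01-02", "  Widget ", "19.99"),
            ("2024-01-03", "Gadget", "5.50")]

def transform(rows):
    """Normalise whitespace and cast the price column to float."""
    return [(date, name.strip(), float(price)) for date, name, price in rows]

# Load: insert the cleaned rows into a warehouse-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sale_date TEXT, product TEXT, price REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", transform(raw_rows))

total = conn.execute("SELECT ROUND(SUM(price), 2) FROM sales").fetchone()[0]
print(total)  # 25.49
```

Keeping the transform step a pure function, as here, makes it straightforward to unit-test independently of the extract and load sides.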
capacity. Proven experience designing and governing complex, multi-cloud or hybrid solutions. Deep technical expertise in cloud, data, integration, and security architecture. Strong understanding of enterprise data platforms (Databricks, S3, Redshift), integration patterns (API Gateway, AppFlow, Logic Apps), and cloud-native services. Demonstrable leadership in delivering large-scale transformation or digital programmes. Proficiency in DevOps tooling (Terraform, GitHub, CodePipeline …
coordinate system fixes, enhancements, and improvements.

Essential Skills & Experience:
- A minimum of 3 years in AWS-based BI/Data Engineering production support.
- AWS BI Stack: Redshift, Glue, Airflow, S3, Step Functions.
- Experience in data modelling, ETL pipelines, and reverse engineering.
- Proficiency in Power BI (preferred), Business Objects, or Qlik.
- Strong SQL scripting, optimisation, and troubleshooting capabilities.
- Previous experience …
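SQL optimisation and troubleshooting work of the kind listed above usually starts with reading query plans before and after a change. A small illustration using SQLite's `EXPLAIN QUERY PLAN` from Python (the table and index names are hypothetical, and the exact plan wording varies by engine and version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events (user_id, payload) VALUES (?, ?)",
                 [(i % 100, "x") for i in range(10_000)])

query = "SELECT * FROM events WHERE user_id = 42"

# Without an index, filtering on user_id forces a full-table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# The last column of each plan row is the human-readable detail string.
print(plan_before[-1][-1])  # e.g. "SCAN events"
print(plan_after[-1][-1])   # e.g. "SEARCH events USING INDEX idx_events_user (user_id=?)"
```

The same workflow applies on Redshift or other warehouses via their own `EXPLAIN` output, where the scan-versus-indexed-access distinction shows up as different join and scan operators.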