deliver scalable, cloud-native data platforms and pipelines. About the Role You’ll lead the design and implementation of cutting-edge data architectures using AWS technologies such as Redshift, S3, Lambda, Glue, Step Functions, and Matillion. Your role will include liaising with stakeholders to shape technical solutions … part in mentoring other engineers and contributing to best practices in data engineering and DevOps. What You’ll Bring Strong hands-on experience with AWS data services – especially Redshift, Glue, and S3 Strong consulting experience, including stakeholder management and experience leading large teams Heavy involvement in RFIs and RFPs … data processing tools (Spark, Hadoop, MapReduce) Public sector experience Experience building APIs to serve data Familiarity with other public cloud platforms and data lakes AWS certifications (e.g. Solutions Architect Associate, Big Data Specialty) Interest or experience in Machine Learning If you're ready to bring your data engineering expertise …
resolve data-related issues promptly to minimise disruption. Collaborate with various teams to align migration processes with organisational goals and regulatory standards. Proficiency in AWS ETL technologies, including Glue, DataSync, DMS, Step Functions, Redshift, DynamoDB, Athena, Lambda, RDS, EC2, S3 data lake, and CloudWatch for building and … ensuring accuracy during migration processes. Effective communication skills to convey technical concepts and updates to diverse audiences, including non-technical stakeholders. Cloud certifications such as AWS and Azure are preferred. Required Experience: Proven experience in data migration, data management, or ETL development. Experience working with ETL tools and database management …
and mentoring more junior engineers in the team. Our Tech We have Python, TypeScript, and JavaScript services running mostly on Lambda functions. We use Step Functions extensively to orchestrate our workflows. Our persistence layer is largely Aurora (Postgres), DynamoDB, MemoryDB (Redis), and Timestream. However, we understand the dynamic … design, delivery and deployment of large-scale, complex projects which are used heavily by thousands of people with high throughput. Using modern technology like AWS serverless infrastructure and event-driven microservice architecture. Mentoring more junior members of the team to help support their growth and development and help scale …
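As a rough, illustrative sketch (not this team's actual code), the Lambda-backed Step Functions orchestration pattern described above might look like the following; all ARNs, state names, and the payload are hypothetical placeholders.

```python
# A minimal sketch of a Lambda-backed Step Functions workflow: validate an event
# with one Lambda, persist it with another, and fail cleanly on errors.
# Every ARN and name below is a hypothetical placeholder.
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:validate-event",
            "Next": "Persist",
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "Failed"}],
        },
        "Persist": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:persist-to-aurora",
            "End": True,
        },
        "Failed": {"Type": "Fail", "Cause": "Validation or persistence error"},
    },
}

# Create the state machine once (in practice this would live in IaC) ...
state_machine = sfn.create_state_machine(
    name="order-processing-sketch",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-execution-role",
)

# ... then start one execution per incoming event.
sfn.start_execution(
    stateMachineArn=state_machine["stateMachineArn"],
    input=json.dumps({"orderId": "abc-123"}),
)
```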
Java). Clinical data is stored in Postgres (Aurora) databases with large, hierarchical genomic data indexed in OpenSearch. Asynchronous data ingestion is performed using Step Functions and Lambda, and the whole platform runs in AWS. We have a standard toolchain which includes Terraform for infrastructure-as-code … schema design and evolution DevOps experience (CI/CD, Infrastructure as Code, operational monitoring and alerting) Experience in at least one major public cloud (AWS preferred but not essential) Strong interpersonal skills with a temperament that builds trust and connection within and across squads through open, honest communication Comfortable …
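To make the ingestion pattern above concrete, here is a minimal sketch of an ingestion Lambda that indexes one document into OpenSearch. It assumes the opensearch-py client; the domain endpoint, index name, and document shape are all hypothetical, and retries are left to the surrounding Step Functions workflow.

```python
# Sketch of an ingestion Lambda: receive a record (e.g. from a Step Functions
# task) and index it into OpenSearch using SigV4-signed requests.
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

REGION = "eu-west-2"
HOST = "search-genomics-sketch.eu-west-2.es.amazonaws.com"  # placeholder endpoint

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, REGION)

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)

def handler(event, context):
    """Index one hierarchical genomic document; the caller handles retries."""
    document = event["record"]
    response = client.index(
        index="genomic-variants",       # placeholder index name
        id=document["variant_id"],      # placeholder document key
        body=document,
    )
    return {"result": response["result"], "id": response["_id"]}
```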
reliability, and automation of our ML infrastructure and workflows Developing systems for both real-time (low-latency) and batch inference use cases Working with AWS and other cloud services (e.g. Fargate, ECS, Lambda, S3, SageMaker, Step Functions) to deploy and monitor models in production Implementing and maintaining …/streaming contexts Proficiency working with distributed computing frameworks such as Apache Spark, Dask, or similar Experience with cloud-native ML deployment, particularly on AWS, using services like ECS, EKS, Fargate, Lambda, S3, and more Familiarity with orchestration and workflow scheduling tools such as Dagster, Airflow, or Prefect Knowledge …
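For the real-time (low-latency) inference path mentioned above, a minimal sketch of calling a deployed SageMaker endpoint from application code might look like this; the endpoint name and feature payload are hypothetical placeholders, not a description of any particular production system.

```python
# Sketch of low-latency inference against a SageMaker real-time endpoint.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def predict(features: dict) -> dict:
    """Send one JSON payload to the endpoint and return the parsed model output."""
    response = runtime.invoke_endpoint(
        EndpointName="risk-model-prod",      # placeholder endpoint name
        ContentType="application/json",
        Body=json.dumps(features),
    )
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    # Hypothetical feature payload purely for illustration.
    print(predict({"age": 42, "balance": 1250.0}))
```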
and impactful for our customers." Technical Expertise You will have 5+ years of experience in Data Engineering, with a focus on cloud platforms (Azure, AWS, GCP). You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog). You have extensive experience in ETL …/ELT development and data pipeline orchestration (Databricks Workflows, DLT, Airflow, ADF, Glue, Step Functions). You're proficient in SQL and Python, using them to transform and optimize data. You know your way around CI/CD pipelines and Infrastructure as Code (Terraform, CloudFormation, Bicep). You …
projects are the norm, our projects are fast-paced, typically 2 to 4 months long. Most are delivered using Apache Spark/Databricks on AWS/Azure and require you to directly manage the customer relationship alone or in collaboration with a Project Manager. Additionally, at this seniority level … it take to fit the bill? Technical Expertise You (ideally) have 5+ years of experience in Data Engineering, with a focus on cloud platforms (AWS, Azure, GCP); You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog); You have extensive experience in ETL/… ELT development and data pipeline orchestration (e.g., Databricks Workflows, DLT, Airflow, ADF, Glue, and Step Functions); You're proficient in SQL and Python, using them to transform and optimize data like a pro; You know your way around CI/CD pipelines and Infrastructure as Code (Terraform, CloudFormation …
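As a simplified illustration of the Databricks ELT work described in the two roles above, the sketch below reads raw files, applies a light transformation, and writes a Delta table. The paths, column names, and catalog/table name are hypothetical assumptions, not taken from any actual project.

```python
# Minimal PySpark ELT sketch for Databricks: raw JSON -> cleaned Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-elt-sketch").getOrCreate()

# Read raw landing-zone files (placeholder path).
raw = spark.read.format("json").load("s3://example-landing-zone/orders/2024/*.json")

# Light cleaning: typed columns, a partition date, and de-duplication.
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)

# Write a managed Delta table (placeholder catalog/schema/table name).
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_cleaned")
)
```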
Strong understanding of ServiceNow modules & processes underpinning: ITSM, PA, CSDM, CMDB, Employee Centre & Integration Hub (REST & SOAP web services & API integrations). Familiarity with AWS services related to service operations, such as AWS Lambda, CloudFormation, and Step Functions. Experience with DevOps practices, CI/CD pipelines, and …
coaching and developing engineers, and improving the overall maintainability of these systems. You will need to drive innovation and think big to deliver step-function improvements over the current status quo. A day in the life: You will need to engage with senior engineers to review key designs …
functionally to influence partner teams. Lead cross-functional projects across multiple geographies working with senior business leaders on major initiatives. Drive adoption of services & step-function improvement in Seller Experience. Drive marketing and communication strategy of the program, as well as track and monitor performance. BASIC QUALIFICATIONS Bachelor's …
this role is both strategic and hands-on. Consult and coach partners to grow and achieve improved results on Amazon. Drive adoption of services & step-function improvement in Seller Experience. Drive marketing and communication strategy of the program, as well as track and monitor performance. Qualifications - Bachelor's degree …
integrating with third-party services. Ability to read and review code, pair programming, and debugging code-related performance issues, SQL tuning, etc. Experience with AWS services such as S3, RDS, Aurora, NoSQL, MSK (Kafka). Experience with batch processing/ETL using Glue Jobs, AWS Lambda, and Step Functions …
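For the Glue-based batch ETL mentioned above, the standard job skeleton is sketched below; the Glue Data Catalog database, table, and output path are hypothetical placeholders used only to show the shape of a job.

```python
# Minimal AWS Glue (PySpark) job skeleton: read from the Data Catalog,
# de-duplicate, and write partitioned Parquet back to S3.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table from the Glue Data Catalog (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="transactions"
)
df = source.toDF().dropDuplicates(["transaction_id"])

# Write the cleaned output to S3 as partitioned Parquet (placeholder bucket).
(
    df.write.mode("overwrite")
    .partitionBy("transaction_date")
    .parquet("s3://example-curated-bucket/transactions/")
)

job.commit()
```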
coaching and developing engineers, and improving the overall maintainability of these systems. You will need to drive innovation and think big to deliver step-function improvements over the current status quo. A day in the life: You will need to engage with stakeholders and engineers from your team …
AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we’re the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our … team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You’ll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. … Computing? Amazon Web Services is looking for a highly motivated Data Scientist to help build scalable, predictive and prescriptive business analytics solutions that support the AWS Supply Chain and Procurement organization. You will be part of the Supply Chain Analytics team working with Global Stakeholders, Data Engineers, Business Intelligence Engineers …
Working with one of our consultancy clients to secure an AWS DevOps Engineer on an initial 6-month contract. This role sits with a government department, so it will require active SC clearance. Key points for this role: Commit to working on-site in Leeds at least two days a … week. Possess active Security Clearance. Demonstrate strong experience with AWS, including deploying services such as ECS, S3, Lambda, SQS, and Step Functions. Develop and sustain an Infrastructure as Code (IaC) codebase using tools like CloudFormation, Jenkins, and Groovy, while applying best practices and standards in Cloud and DevOps …
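Purely as an illustration of driving a CloudFormation deployment from code (in the role above this would more likely sit behind a Jenkins pipeline), a minimal sketch follows; the stack name, template file, and parameter are hypothetical.

```python
# Sketch: create a CloudFormation stack from a local template and wait for it.
import boto3

cfn = boto3.client("cloudformation")

with open("queue-stack.yaml") as template:   # placeholder template file
    template_body = template.read()

cfn.create_stack(
    StackName="ingest-queue-sketch",          # placeholder stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dev"}],
)

# Block until the stack reaches CREATE_COMPLETE (raises if creation fails).
cfn.get_waiter("stack_create_complete").wait(StackName="ingest-queue-sketch")
```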