Arlington, Virginia, United States Hybrid / WFH Options
Amazon
DESCRIPTION The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in … Engineering, related field, or equivalent experience - 3+ years of experience with data warehouse architecture, ETL/ELT tools, data engineering, and large-scale data manipulation using technologies like Spark, EMR, Hive, Kafka, and Redshift - Experience with relational databases, SQL, and performance tuning, as well as software engineering best practices for the development lifecycle, including coding standards, reviews, source control … a track record of working with large datasets and extracting value from them - Experience leading large-scale data engineering and analytics projects using AWS technologies like Redshift, S3, Glue, EMR, Kinesis, Firehose, and Lambda, as well as experience with non-relational databases and implementing data governance solutions Amazon is an equal opportunity employer and does not discriminate More ❯
complex business requirements and drive decision-making. Your skills and experience Proficiency with AWS Tools: Demonstrable experience using AWS Glue, AWS Lambda, Amazon Kinesis, Amazon EMR, Amazon Athena, Amazon DynamoDB, Amazon CloudWatch, Amazon SNS and AWS Step Functions. Programming Skills: Strong experience with modern programming languages such More ❯
London, England, United Kingdom Hybrid / WFH Options
RED Global
the following: Proven experience as an AWS Data SME or AWS Data Engineer, working extensively with AWS cloud services. Expertise in AWS Redshift, Glue, Lambda, Terraform, Kinesis, Athena, and EMR. Strong ETL/ELT development and data warehousing experience. Proficiency in Python, Java, or Scala for data processing and automation. In-depth knowledge of SQL, Apache Kafka, and … Amazon RDS. Experience in data security, governance, and compliance best practices. Familiarity with CI/CD pipelines, DevOps methodologies, and monitoring/logging best practices. Strong problem-solving skills, with the ability to work in a collaborative and fast-paced environment. Preferred Qualifications: AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect/ More ❯
Milton Keynes, England, United Kingdom Hybrid / WFH Options
Santander
to be successful in this role: Experience developing, testing, and deploying data pipelines, data lakes, data warehouses, and data marts, ideally using AWS services such as S3, Glue, Athena, EMR, Kinesis, and Lambda Understanding of the principles behind designing and implementing data lake, lakehouse and/or data mesh architecture Problem-solving skills with basic knowledge of the … with team members, stakeholders, and end users, conveying technical concepts in a comprehensible manner Skills across the following data competencies: SQL (AWS Athena/Hive/Snowflake) Hadoop/EMR/Spark/Scala Data structures (tables, views, stored procedures) Data Modelling - star/snowflake schemas, efficient storage, normalisation Data Transformation DevOps - data pipelines Controls - selection and build Reference More ❯
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
PA Consulting
You'll have experience working in teams to design, build, and maintain large-scale data solutions and applications using AWS data and analytics services (or open-source equivalents) such as EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. Your team members will look to you as a trusted expert and will expect you to define the end-to-end software development lifecycle More ❯
data pipelines using Apache Spark. Leverage Python to develop robust ETL workflows and data-processing scripts. Architect, deploy, and manage cloud-based data systems on AWS (e.g., S3, Redshift, EMR). Collaborate with product and analytics teams to ensure data availability, quality, and accessibility. Monitor and troubleshoot system performance, ensuring scalability and reliability. Qualifications Experience: 2-4 years as More ❯
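A minimal PySpark sketch of the kind of ETL step this role describes: reading raw data from S3, cleaning it, and writing curated output back to S3. The bucket paths and column names (event_id, event_type, event_ts) are hypothetical placeholders, not taken from the listing.

```python
# Minimal PySpark ETL sketch: read raw JSON events from S3, clean them,
# and write partitioned Parquet. Paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/2024/")  # hypothetical source

cleaned = (
    raw.dropDuplicates(["event_id"])                      # assumed unique key
       .filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))   # assumed timestamp column
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))  # hypothetical target
```

In practice a job like this would typically run on EMR or AWS Glue and be scheduled by an orchestrator, but the transformation logic stays the same.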
proficiency in SQL, ETL processes and database management systems (e.g., MySQL, PostgreSQL, MongoDB). Hands-on experience with AWS services for data processing, storage and analysis (e.g., S3, Redshift, EMR, Glue). Familiarity with programming languages such as Python or Java. Understanding of data warehousing concepts and data modeling techniques. Experience working with big data technologies (e.g., Hadoop, Spark More ❯
London, England, United Kingdom Hybrid / WFH Options
Luupli
data requirements, and define analytics goals. Develop and implement data analysis strategies using AWS analytics services, such as Amazon Redshift, Amazon Athena, Amazon EMR, and Amazon QuickSight. Design and build robust data pipelines and ETL processes to extract, transform, and load data from diverse sources into AWS for analysis. Apply advanced … statistical and machine learning techniques to perform predictive and prescriptive analyses, clustering, segmentation, and pattern recognition. Identify key metrics, develop meaningful KPIs, and build dashboards and visualisations using Amazon QuickSight to enable data-driven decision-making. Conduct exploratory data analysis to uncover trends, patterns, and insights that inform product enhancements, user behaviour, and engagement strategies. Collaborate with data … preferably in a cloud-based environment using AWS analytics services. 3. Strong proficiency in AWS analytics services, such as Amazon Redshift, Amazon Athena, Amazon EMR, and Amazon QuickSight. 4. Solid understanding of data modelling, ETL processes, and data warehousing concepts. 5. Proficiency in statistical analysis, data mining, and machine learning techniques. 6. Proficiency in programming More ❯
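For the Athena-based analysis work described above, a hedged boto3 sketch of running a query and reading the results; the database, table, query, and S3 output location are hypothetical placeholders.

```python
# Sketch of running an Amazon Athena query from Python with boto3.
# Database, table, and S3 output location are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

query = """
    SELECT event_date, COUNT(*) AS daily_events
    FROM analytics_db.user_events          -- assumed table
    GROUP BY event_date
    ORDER BY event_date
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```

Results from queries like this would typically feed QuickSight dashboards or downstream KPI calculations rather than being printed, but the execution flow is the same.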
Belfast, Northern Ireland, United Kingdom Hybrid / WFH Options
JR United Kingdom
a big data architecture. Work with technologies such as Python, Java, Scala, Spark, and SQL to extract, clean, transform, and integrate data. Build scalable solutions using AWS services like EMR, Glue, Redshift, Kinesis, Lambda, and DynamoDB. Process large volumes of structured and unstructured data, integrating multiple sources to create efficient data pipelines. Collaborate with engineering teams to integrate data More ❯
London, England, United Kingdom Hybrid / WFH Options
BMLL
Experience with proactive management and team ownership of cloud infrastructure Beneficial Experience: AWS certifications Familiarity with SIEM solutions and Security Incident Management Cybersecurity awareness or certification Data engineering familiarity (EMR, ETL) Coaching or mentoring experience Key Behaviours: Excellent problem-solving skills Flexibility to experiment and adapt quickly based on results Strong team collaboration and communication skills Proactive ownership of More ❯
Maryland Line, Maryland, United States Hybrid / WFH Options
eSimplicity Inc
including Active Directory, Okta, OAuth, SAML. Experience working with CMS, Medicare/Medicaid, or healthcare/insurance data environments. Familiarity with AWS data and analytics tools such as S3, EMR, IAM, QuickSight, SageMaker, Hive, Ranger/Knox, Airflow, Ambari, Jupyter, Zeppelin, and Lustre. Experience with Jenkins, GitHub Actions, IaC best practices, and cloud monitoring/alerting solutions. CMS and More ❯
Cambourne, England, United Kingdom Hybrid / WFH Options
Remotestar
Apache NiFi. Experience with containerization and orchestration tools like Docker and Kubernetes. Knowledge of time-series or analytics databases such as Elasticsearch. Experience with AWS services like S3, EC2, EMR, Redshift. Familiarity with data monitoring and visualization tools such as Prometheus and Grafana. Experience with version control tools like Git. Understanding of Data Warehouse and ETL concepts; familiarity with More ❯
London, England, United Kingdom Hybrid / WFH Options
Ziff Davis
to join our Engineering team in London. Our team designs, develops, and operates all data systems across the company, including ETL processes, data warehouses, real-time click streams, and EMR to support content personalization for our users. We are looking for someone to help us evolve our data architecture and technology stack. Our primary programming language is Python, although More ❯
London, England, United Kingdom Hybrid / WFH Options
amber labs
data integrity and security. Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions. Requirements: AWS: S3, Lambda, EMR, SMS, SQS, and additional services related to data infrastructure Terraform Databricks Data Lake, Warehouse, Lakehouse architecture and design Python/PySpark Data platforms and notebooks: Jupyter, Databricks, Azure GitLab More ❯
Chantilly, Virginia, United States Hybrid / WFH Options
The DarkStar Group
documentation for each project including ETL mappings, code use guide, code location and access instructions. Design and optimize data pipelines using tools such as Spark, Apache Iceberg, Trino, OpenSearch, EMR cloud services, NiFi and Kubernetes containers. Ensure the pedigree and provenance of the data is maintained such that access to the data is protected. Clean and preprocess data to More ❯
Herndon, Virginia, United States Hybrid / WFH Options
The DarkStar Group
documentation for each project including ETL mappings, code use guide, code location and access instructions. Design and optimize data pipelines using tools such as Spark, Apache Iceberg, Trino, OpenSearch, EMR cloud services, NiFi and Kubernetes containers. Ensure the pedigree and provenance of the data is maintained such that access to the data is protected. Clean and preprocess data to More ❯
ETL/ELT tooling including DBT and Airflow. Experience with CI/CD and infrastructure-as-code, within AWS cloud. Also desirable - familiarity with AWS' data tools such as EMR, MWAA, MSK. You are someone who loves collaboration - our teams are cross-functional and you'll be working with other engineers, team leads and product managers to deliver great More ❯
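As a hedged illustration of the DBT-plus-Airflow orchestration mentioned above, here is a minimal Airflow DAG that chains an extract step with dbt run and dbt test. The paths, schedule, and task commands are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later (older versions use `schedule_interval`).

```python
# Minimal Airflow DAG sketch: run an extract script, then dbt models, then dbt tests.
# Project paths, schedule, and commands are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # assumed cadence; Airflow 2.4+ syntax
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_source_data",
        bash_command="python /opt/pipelines/extract.py",  # hypothetical script
    )

    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run --profiles-dir .",  # hypothetical project dir
    )

    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test --profiles-dir .",
    )

    extract >> dbt_run >> dbt_test
```

On AWS this pattern is commonly hosted on MWAA, with the DAG file deployed to the environment's S3 bucket.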
Leeds, England, United Kingdom Hybrid / WFH Options
Skills for Care
and PySpark, AWS Glue jobs assembled into Step Functions, Pydeequ for data quality testing, Amazon Athena for querying data. Hosted on AWS, using S3, Glue, Step Functions, EMR, and Athena. Terraform for Infrastructure as Code. Our code is open source and we use Git and GitHub for source control. If you are interested in this role, please More ❯
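A hedged sketch of a Pydeequ data quality check of the kind described above, run from a PySpark session; the dataset path, column names, and constraints are hypothetical placeholders rather than the team's actual checks.

```python
# Sketch of a pydeequ verification run against a curated Parquet dataset.
# Dataset path, columns, and constraints are hypothetical placeholders.
import os
os.environ["SPARK_VERSION"] = "3.3"  # pydeequ uses this to select the matching Deequ jar

import pydeequ
from pyspark.sql import SparkSession
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationSuite, VerificationResult

spark = (SparkSession.builder
         .config("spark.jars.packages", pydeequ.deequ_maven_coord)
         .config("spark.jars.excludes", pydeequ.f2j_maven_coord)
         .getOrCreate())

df = spark.read.parquet("s3://example-curated-bucket/workforce/")  # hypothetical dataset

check = (Check(spark, CheckLevel.Error, "workforce data quality")
         .hasSize(lambda rows: rows > 0)      # dataset must not be empty
         .isComplete("establishment_id")      # assumed key column has no nulls
         .isUnique("establishment_id")        # assumed key column is unique
         .isNonNegative("staff_count"))       # assumed numeric column

result = (VerificationSuite(spark)
          .onData(df)
          .addCheck(check)
          .run())

VerificationResult.checkResultsAsDataFrame(spark, result).show(truncate=False)
```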
London, England, United Kingdom Hybrid / WFH Options
Hopecompass
role. Experience of deploying & managing cloud infrastructure for Big Data applications. Hands-on experience of working with AWS services, including but not limited to EC2, S3, RDS, Lambda, Glue, EMR, VPC, IAM, etc. Good experience in setting up reliable & scalable cloud networking that is highly secure. Extensive experience with Terraform, including writing, testing, and deploying Terraform scripts. Strong understanding More ❯
Reston, Virginia, United States Hybrid / WFH Options
CGI
ability to optimize data integration pipelines (ETL/ELT), implement Change Data Capture (CDC) for analytics projects, and drive robust data governance. Skilled in leveraging S3, Redshift, AWS Glue, EMR, Azure Data Lake, and Power BI to deliver secure, high-performance solutions and self-service BI ecosystems. Skilled in leveraging Apache Airflow, Apache Flink, and other data tools. Experienced More ❯
ETL development, infrastructure optimization, and deployment of scalable solutions for our programmatic advertising platform. What You'll Do Operate and evolve our big data stack on AWS (including MSK, EMR, Kinesis, Glue, and MWAA) Design, implement, and optimize ETL pipelines for large-scale data processing Collaborate with the Data Science team to prepare and manage datasets for model training … cloud Participate in R&D activities: evaluating new tools, improving system performance, and ensuring scalability What We're Looking For Knowledge of AWS data services: MSK (Kafka: message broker), EMR (Spark/Flink), Glue, Kinesis, MWAA (Airflow) Basic understanding of cloud environments (preferably AWS) Basic experience with infrastructure-as-code tools (e.g., Terraform, AWS CDK, or CloudFormation) Strong interest More ❯
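A hedged sketch of the MSK-to-EMR pattern described above: a Spark Structured Streaming job consuming a Kafka topic and landing parsed events in S3. The brokers, topic, schema, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster.

```python
# Sketch of a Spark Structured Streaming job on EMR consuming an MSK (Kafka) topic
# and writing parsed events to S3. Brokers, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("msk-ingest").getOrCreate()

event_schema = StructType([
    StructField("ad_id", StringType()),       # assumed fields for an ad-event payload
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "b-1.example-msk:9092")  # hypothetical brokers
            .option("subscribe", "ad-events")                           # hypothetical topic
            .option("startingOffsets", "latest")
            .load())

events = (raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
             .select("e.*"))

query = (events.writeStream
               .format("parquet")
               .option("path", "s3://example-data-lake/ad-events/")                    # hypothetical sink
               .option("checkpointLocation", "s3://example-data-lake/checkpoints/ad-events/")
               .trigger(processingTime="1 minute")
               .start())

query.awaitTermination()
```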