infrastructure security practices, including data privacy, model security, and compliance standards like GDPR and SOC 2. Familiarity with AWS big data tools (Redshift, Glue, EMR) for processing large datasets to support machine learning models. Preferred Qualifications: AWS Certified Machine Learning - Specialty or other relevant certifications. Experience with machine learning…
experience as a Data Engineer. Demonstrated experience with AWS cloud services, including long-term storage options, and cloud-based database services (e.g., Databricks or EMR). Proficiency in SQL database structures and mapping between SQL databases. Experience with large-scale data migration efforts. Experience with database architecture, performance design…
and frontend engineering. Hands-on experience with AWS cloud-based application development, including EC2, ECS, EKS, Lambda, SQS, SNS, RDS Aurora MySQL & Postgres, DynamoDB, EMR, and Kinesis. Solid engineering background in machine learning, deep learning, and neural networks. Experience with containerized stacks using Kubernetes or ECS for development, deployment…
data stack, including (but not limited to) Spark/Hadoop, Kafka, Aerospike/DynamoDB. Experience with the AWS tech stack, including but not limited to EMR, Athena, EKS. Expert knowledge of multi-threading, memory model, etc. Understanding of database fundamentals and MySQL knowledge. Experience with CI/CD tools such as Jenkins…
Reston, Virginia, United States Hybrid / WFH Options
ICF
and the Spark Dataset API. 3 years of experience in SQL development, SQL analytics and tuning skills. 2 years of experience with AWS services (EMR, Redshift, CodeBuild, Lambda, and ECS). 2 years of experience with Git, GitHub, and Confluence/Jira. Any amount of experience with Medicare and…
principles, methodologies, and tools, including GitLab CI/CD and Jenkins. Experience with distributed data and computing tools, including Spark, Databricks, Hadoop, Hive, AWS EMR, or Kafka. Experience leading a team of AI and ML engineers, researchers, and data scientists to develop and deploy advanced AI and ML technologies…
Arlington, Virginia, United States Hybrid / WFH Options
Amazon
DESCRIPTION The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements … experience - 3+ years of experience with data warehouse architecture, ETL/ELT tools, data engineering, and large-scale data manipulation using technologies like Spark, EMR, Hive, Kafka, and Redshift - Experience with relational databases, SQL, and performance tuning, as well as software engineering best practices for the development lifecycle, including … with large datasets and extracting value from them - Experience leading large-scale data engineering and analytics projects using AWS technologies like Redshift, S3, Glue, EMR, Kinesis, Firehose, and Lambda, as well as experience with non-relational databases and implementing data governance solutions. Amazon is an equal opportunity…
and Spark. -Knowledge of ETL processes utilizing Linux shell scripting, Perl, Python, and Apache Airflow. -Experience with AWS services such as CloudWatch, CloudTrail, ELB, EMR, KMS, SQS, SNS, and Systems Manager. -Experience in supporting, maintaining, and migrating JavaFX applications to modern cloud-native solutions. -Strong decision-making skills and…
Fleet, England, United Kingdom Hybrid / WFH Options
Experis Careers
implementation of complex data pipelines and ETL/ELT processes using cloud-native technologies (e.g., AWS Glue, AWS Lambda, AWS S3, AWS Redshift, AWS EMR). * Develop and maintain data quality checks, data validation rules, and data lineage documentation. * Collaborate with data analysts, d…
more layers of the technical stack (data, application, UI). Programming languages, particularly Python, Java, JavaScript, TypeScript, SQL. Cloud application and data deployment in AWS (EMR, EKS, ECR, RDS, etc.). DevOps tools and services (Kubernetes, Terraform, Docker, Packer, etc.). Integration with applications and data across a platform (e.g., APIs). Developing…
to automate quality control processes. Requirements: 3+ years of hands-on Python development experience for production systems. Proficient in cloud data computing technologies (EMR, Databricks) within leading cloud platforms (AWS, Google Cloud, or Azure). Skilled in designing and implementing data pipelines using distributed storage platforms. Demonstrated analytical skills…
JDK 1.8), and preferably some experience with Spring Boot. Good to have experience using AWS cloud services (e.g., EC2, S3, Lambda, MSK, ECS, EMR, RDS, Athena, etc.). Experience working with Maven, Jenkins, Git, etc. Understanding of database concepts and working knowledge of any of the vendors (preferably Oracle…
with DevOps. Experience with Kubernetes and Helm charts. Experience with Docker and/or Podman; containerization best practices. Experience with AWS services (EKS, Lambda, EMR, ECR, VPC, etc.). Experience with Terraform. Desired Skills: Terraform and Packer with AWS (AMIs, environment, configuration automation, etc.). Database ORM tools (Flyway, etc.). CI…
years of experience in scaling SaaS services and infrastructure. Goal-driven, positive attitude with strong communication skills. Proficient in AWS services, including EC2, RDS, EMR, VPC networking, S3, etc. Comfortable with one or more of: Python, Ruby, Java, Groovy, Bash. Working knowledge of one or more of: Terraform, Ansible…
solutions that truly matter. Key Responsibilities: Design, develop, and optimize scalable data pipelines using technologies such as Apache Spark, Apache Iceberg, Trino, OpenSearch, AWS EMR, NiFi, and Kubernetes containers. Ingest and move structured and unstructured data using approved methods into enterprise or local storage systems. Create robust ETL processes…
databases, and services. Cloud infrastructure is deployed using AWS C2S available services in the IDL VPC boundaries. Commonly used AWS tools include Managed OpenSearch, EMR, EC2, Lambda, etc. Microservices and Kubernetes container orchestration are used to deploy applications in a managed way. Working with the ISSM and various networking…
Architect, AWS Certified Cloud Practitioner, SysOps Administrator (Associate), Certified DevOps Engineer, or Big Data (Specialty). 5 to 8+ years' experience utilizing Amazon Web Services to create cloud-based applications such as (but not limited to) Lambda, SWF, SNS, SQS, Elasticsearch, CloudWatch, CloudFormation, S3, RDS, and Bedrock.
for customer data. Implement the least-access principle. Ensure reliability and security of microservices. Skills and Requirements: Proficiency in AWS services such as EC2, RDS, EMR, VPC networking, S3, etc. Experience with programming languages like Python, Ruby, Java, Groovy, and Bash. Familiarity with infrastructure-as-code tools like Terraform and…
are looking for someone who can demonstrate an aptitude or willingness to learn some or all of the following technologies: AWS (S3, IAM, RDS, EMR, EC2, etc.), Linux commands, Trino, Apache Spark, Node.js, JavaScript, Preact.js, Postgres, MySQL, HTML, CSS. Target salary range is $125k-$150k or more depending on…
Are you interested in a rapidly growing business? The Amazon ARTS team is responsible for creating core analytics tech capabilities and data engineering services for ROW (the rest of the world, excl. NA and EU) countries. It comprises platform development, research science, software development, and data engineering. ARTS … milestone planning and lead it to smooth launch with limited manager guidance. Help stakeholders and junior teammates ramp up with team-internal and Amazon tech products (e.g., Redshift Cluster, Datanet, Query Performance). Develop data pipelines (daily/intra-day/real-time) independently. Routine KTLO on data … Ruby 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions. Our inclusive culture empowers Amazonians…
Data Engineer II, Amazon Last Mile - Routing and Planning - DE As part of the Last Mile Science & Technology organization, you'll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and … job responsibilities Design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc. Extract huge volumes of structured and unstructured data from various sources (Relational/Non-relational/No-SQL … data scientist) with a track record of manipulating, processing, and extracting value from large datasets. Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions. Experience with non-relational databases/data stores (object storage, document or key-value stores, graph…
Engineer has the following responsibilities: Design, develop, and implement robust data pipelines and data warehousing solutions using AWS services (e.g., Redshift, Glue, S3, Lambda, EMR, Kinesis). Optimize data storage, retrieval, and processing for performance and scalability. Manage and maintain AWS data infrastructure, ensuring high availability and reliability. Develop … years of proven experience in data engineering, with a strong focus on AWS services. Proven expertise in AWS data services (Redshift, Glue, S3, Lambda, EMR, Kinesis, etc.). Strong proficiency in SQL and Python. Experience integrating data with Microsoft Power BI. Excellent communication and presentation skills, with the ability…