Hands-on experience with Big Data ecosystems (Hadoop, Spark, Kafka, Hive, HBase, etc.). Strong experience with cloud platforms (AWS/Azure/GCP) and services like: AWS: S3, Glue, EMR, Redshift, Lambda, Kinesis; Azure: Data Factory, Synapse, Databricks, ADLS; GCP: BigQuery, Dataflow, Pub/Sub. Experience with Data Warehouse/Data Lake/Lakehouse design and modeling (Kimball, OLAP …
SAS, Python, AWS cloud-native technologies, S3, Athena, Redshift. Experience in Snowflake is an added bonus. Familiarity with the following technologies: Hadoop, Kafka, Airflow, Hive, Presto, Athena, S3, Aurora, EMR, Spark. Ability to drive, contribute to, and communicate solutions to technical product challenges. Ability to roll up your sleeves, spot process or resource gaps, and fill them …
… solutions. Strong hands-on experience with at least one major cloud platform (AWS, Azure, or Google Cloud). Expertise in cloud-native data services such as AWS Glue, Lambda, S3, EMR, Redshift, Lake Formation, Azure Synapse, Data Factory, Databricks, or BigQuery. Advanced knowledge of SQL, Python, Spark, PySpark, and distributed data frameworks. Proven background in building ELT/ETL systems …
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
… business decisions. Responsibilities for the AWS Data Engineer: Design, build and maintain scalable data pipelines and architectures within the AWS ecosystem. Leverage services such as AWS Glue, Lambda, Redshift, EMR and S3 to support data ingestion, transformation and storage. Work closely with data analysts, architects and business stakeholders to translate requirements into robust technical solutions. Implement and optimise ETL …
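The listing above names Glue/EMR-style ingestion and transformation on S3; the following is a minimal, hedged sketch of such a batch job in PySpark. The bucket paths, column names and cleaning rules are hypothetical placeholders, not details from the advert.

```python
# Hedged sketch only: a minimal PySpark batch job of the kind that might run on
# AWS Glue or EMR - read raw CSV from S3, apply a simple clean-up, write
# partitioned Parquet back to S3. Bucket names, paths and columns are
# hypothetical placeholders, not details taken from the advert above.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")          # hypothetical source path
)

cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)                  # drop obviously bad rows
       .withColumn("order_date", F.to_date("order_ts"))
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")  # hypothetical target path
)
```

Packaged appropriately, the same logic could be submitted as a Glue job or an EMR step; only the deployment wrapper changes.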
… the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR), RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL), NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and code …
Greater Bristol Area, United Kingdom Hybrid/Remote Options
Anson McCade
… create measurable impact. Key Responsibilities: Design, build, and deploy production-grade data pipelines from ingestion through to consumption. Engineer large-scale data solutions using AWS cloud technologies such as EMR, Glue, Lambda, Redshift, DynamoDB, Kinesis, and related analytics services. Develop robust ETL/ELT processes using Python, Java, Scala, Spark, and SQL. Integrate, process, and transform complex structured and …
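As an illustration of the Lambda/Kinesis/DynamoDB combination mentioned above, here is a hedged sketch of a small ingestion handler. The table name, event schema and field names are assumptions made for the example, and error handling is deliberately minimal.

```python
# Hedged sketch: one way a small AWS Lambda handler might consume Kinesis
# records and land them in DynamoDB as part of an ingestion pipeline like the
# one described above. The table name, event schema and field names are
# assumptions made for illustration; error handling is deliberately minimal.
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-events")  # hypothetical table name


def handler(event, context):
    """Entry point for a Lambda function triggered by a Kinesis stream."""
    records = event.get("Records", [])
    for record in records:
        payload = base64.b64decode(record["kinesis"]["data"])  # Kinesis data is base64-encoded
        item = json.loads(payload)
        # Assumes each event carries an 'event_id' usable as the partition key.
        table.put_item(Item={
            "event_id": item["event_id"],
            "event_type": item.get("event_type", "unknown"),
            "raw": json.dumps(item),
        })
    return {"processed": len(records)}
```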
Ballwin, Missouri, United States Hybrid/Remote Options
Eye Care Partners Career
… the following: Snowflake development and support. Advanced SQL knowledge with strong query writing skills. Object-oriented/object function scripting languages: Python, Java, Scala, etc. AWS cloud services: EC2, EMR, RDS, DMS. Relational databases such as SQL Server and object-relational databases such as PostgreSQL. Data analysis, ETL, and workflow automation. Multiple ETL/ELT tools and cloud-based …
Belfast, City of Belfast, County Antrim, United Kingdom Hybrid/Remote Options
Aspire Personnel Ltd
… in AWS cloud technologies for ETL pipeline, data warehouse and data lake design/building and data movement. AWS data and analytics services (or open-source equivalents) such as EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. What you can expect: Work to agile best practices and cross-functionally with multiple teams and stakeholders. You’ll be using your technical skills …
… data pipelines. Collaborate with data scientists and analysts to ensure data quality, availability, and consistency for advanced modeling and reporting. Utilize AWS or other cloud services (e.g., S3, Glue, EMR, Snowflake) to architect and maintain cloud-based data ecosystems. Write and optimize complex SQL queries for data extraction, integrity checks, and performance tuning. Required Technical Skills: 5+ years of …
Manchester, Lancashire, England, United Kingdom Hybrid/Remote Options
Lorien
… years in a technical leadership or management role. Strong technical proficiency in data modelling, data warehousing, and distributed systems. Hands-on experience with cloud data services (AWS Redshift, Glue, EMR or equivalent). Solid programming skills in Python and SQL. Familiarity with DevOps practices (CI/CD, Infrastructure as Code, e.g. Terraform). Excellent communication skills with both technical and non …
… Skills: • Min of 2 years of experience in data engineering or a similar role. • Hands-on experience with core AWS data services (for example S3, Glue, Athena, Lambda, IAM, EMR). • Strong SQL skills (joins, window functions, optimization). • Solid Python for data processing. • Experience building production ETL/ELT pipelines. • Working knowledge of security and IAM (roles, policies …
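The "window functions" skill called out above is the sort of thing a short example makes concrete. The sketch below uses Python's built-in sqlite3 module (window functions need SQLite 3.25 or newer) to keep the latest row per customer; the table and column names are invented for illustration.

```python
# Self-contained illustration (Python's built-in sqlite3; window functions need
# SQLite 3.25+) of the kind of window-function query alluded to above: keep only
# the latest order per customer. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id TEXT, order_ts TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('c1', '2024-01-01', 10.0),
        ('c1', '2024-02-01', 25.0),
        ('c2', '2024-01-15', 40.0);
""")

latest_per_customer = conn.execute("""
    SELECT customer_id, order_ts, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY order_ts DESC
               ) AS rn
        FROM orders
    )
    WHERE rn = 1
    ORDER BY customer_id
""").fetchall()

print(latest_per_customer)  # [('c1', '2024-02-01', 25.0), ('c2', '2024-01-15', 40.0)]
```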
… /cloud architecture, operating systems (e.g., Linux), and storage systems (e.g., AWS, Databricks, Cloudera). Production experience in core data technologies (e.g. Spark, HDFS, Snowflake, Databricks, Redshift, and Amazon EMR). Development of APIs and web server applications (e.g. Flask, Django, Spring). Complete software development lifecycle experience, including design, documentation, implementation, testing, and deployment. Excellent communication and presentation skills; previous …
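For the "APIs and web server applications" point above, here is a hedged, minimal Flask sketch (Flask 2.x route shortcuts). The endpoint, in-memory data and response shape are hypothetical; a real service would add validation, authentication and proper error handling.

```python
# Hedged sketch of a tiny Flask API (Flask 2.x route shortcuts). The endpoint,
# in-memory data and response shape are hypothetical; a production service
# would add validation, authentication and proper error handling.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for a real metadata store.
DATASETS = {"orders": {"rows": 12345, "last_updated": "2024-01-01"}}


@app.get("/datasets/<name>")
def get_dataset(name: str):
    """Return basic metadata for a named dataset, or 404 if unknown."""
    meta = DATASETS.get(name)
    if meta is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"name": name, **meta})


if __name__ == "__main__":
    app.run(port=8000)
```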
… /ML workflows. Advanced SQL skills for complex data manipulation, optimization, and analytics. Knowledge of orchestration tools (e.g., Airflow, Dagster, Prefect). Experience developing sophisticated data pipelines using AWS EMR, Apache Spark, etc. Creative-minded individual who enjoys open-ended problems and challenging the status quo. Excellent written and spoken communication skills. Ability to conduct independent work and manage projects …
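To make the orchestration requirement above concrete, below is a minimal Airflow DAG sketch (recent Airflow 2.x API). The DAG id, schedule and task bodies are placeholders; a real EMR/Spark pipeline would usually submit work through dedicated operators rather than plain Python callables.

```python
# Minimal Airflow DAG sketch (recent Airflow 2.x API). DAG id, schedule and
# task bodies are placeholders; a real EMR/Spark pipeline would typically use
# dedicated operators (e.g. an EMR or Spark submit operator) rather than
# plain Python callables.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling raw data")        # placeholder for a real extract step


def transform():
    print("transforming raw data")   # placeholder for a real Spark/EMR job


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run transform only after extract succeeds
```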
… and delivering production-grade software and data systems. Proficiency in Python, Java, or Scala - comfortable writing robust, testable, and scalable code. Deep experience with AWS (Lambda, ECS/EKS, EMR, Step Functions, S3, IAM, etc.). Strong knowledge of distributed systems and streaming/data pipelines (Kafka, Spark, Delta, Airflow, etc.). Familiarity with infrastructure-as-code (Terraform, CloudFormation …
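As a sketch of the Kafka-plus-Spark streaming stack named above, the snippet below reads a Kafka topic with Spark Structured Streaming and appends Parquet output. Broker addresses, topic and S3 paths are hypothetical, and the Kafka source needs the spark-sql-kafka connector package on the cluster.

```python
# Hedged sketch: a minimal Spark Structured Streaming job reading a Kafka topic
# and appending Parquet output. Broker addresses, topic and S3 paths are
# hypothetical, and the Kafka source requires the spark-sql-kafka connector
# package to be available on the cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical brokers
    .option("subscribe", "events")                      # hypothetical topic
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),  # Kafka value arrives as bytes
        F.col("timestamp"),
    )
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/events/")             # hypothetical sink
    .option("checkpointLocation", "s3://example-bucket/chk/")  # required by Spark
    .outputMode("append")
    .start()
)

query.awaitTermination()
```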
… SQL, Scala, or Java. 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud). 5+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL). 4+ years of experience working on real-time data and streaming applications. 4+ years of experience with NoSQL implementations (Mongo, Cassandra). 4+ years of data …
… SQL, Scala, or Java. 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud). 4+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL). 4+ years of experience working on real-time data and streaming applications. 4+ years of experience with NoSQL implementations (Mongo, Cassandra). 4+ years of data …
Commercial experience in both backend and frontend engineering. Hands-on experience with AWS cloud-based application development, including EC2, ECS, EKS, Lambda, SQS, SNS, RDS (Aurora MySQL & Postgres), DynamoDB, EMR, and Kinesis. Strong engineering background in machine learning, deep learning, and neural networks. Experience with a containerized stack using Kubernetes or ECS for development, deployment, and configuration. Experience with Single …
Solid experience writing clean, concise, tested, maintainable code in Python. Experience building and deploying applications in AWS. API development experience. Experience utilizing Spark and SQL. Experience with Databricks or EMR. Experience with CI/CD and updating build pipelines and pipeline tasks. Hands-on production experience working with Big Data platforms, building and optimizing data pipelines that involve data …
… CloudWatch, and CloudTrail. Ensure designs follow AWS Well-Architected Framework principles (security, cost, performance, reliability). Data Engineering & Pipelines: Build and optimize data pipelines in AWS using Glue, Lambda, Step Functions, EMR, Athena, and S3. Implement DAG-based orchestration using Apache Airflow, AWS Managed Workflows (MWAA), or Glue Workflows. Ensure data quality, reliability, lineage, and observability across all pipelines. Machine Learning Pipeline … OLTP, OLAP, lakehouse) and architectural patterns. Experience with CI/CD pipelines and DevOps on AWS. Preferred Qualifications: AWS certifications (e.g., AWS Certified Data Analytics Specialty, Solutions Architect). Experience with EMR, Redshift, Kinesis, or Kafka. Knowledge of MLOps tools (SageMaker, MLflow, Feature Stores). Familiarity with IaC (Terraform, CloudFormation). Experience working in enterprise-scale, highly regulated environments. Soft Skills: Strong communication and …
Dallas, Texas, United States Hybrid/Remote Options
Aecom
… Dallas, TX. Key Responsibilities: Lead the end-to-end design, development, and optimization of scalable data pipelines and products on AWS, leveraging services such as S3, Glue, Redshift, Athena, EMR, and Lambda. Provide day-to-day technical leadership and mentorship to a team of data engineers: setting coding standards, reviewing pull requests, and fostering a culture of engineering excellence. … or education. 3+ years in a technical-lead or team-lead capacity delivering enterprise-grade solutions. Deep expertise in AWS data and analytics services, e.g. S3, Glue, Redshift, Athena, EMR/Spark, Lambda, IAM, and Lake Formation. Proficiency in Python/PySpark or Scala for data engineering, along with advanced SQL for warehousing and analytics workloads. Demonstrated success designing …
… optimization. Experience with ServiceNow for incident/change/problem management. Excellent analytical, troubleshooting, and communication skills. Nice to have: Exposure to cloud-based Big Data platforms (e.g., AWS EMR). Familiarity with containerization (Docker, Kubernetes) and infrastructure automation tools (Ansible, Terraform). Note: If you are interested, please share your updated resume and suggest the best number & time to connect …
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom Hybrid/Remote Options
Reed
… projects. Required Skills & Qualifications: Demonstrable experience in building data pipelines using Spark or pandas. Experience with major cloud providers (AWS, Azure, or Google Cloud). Familiarity with big data platforms (EMR, Databricks, or Dataproc). Knowledge of data platforms such as Data Lakes, Data Warehouses, or Data Meshes. Drive for self-improvement and eagerness to learn new programming languages. Ability …
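To ground the "pipelines using Spark or pandas" requirement above, here is a minimal pandas sketch: read, clean, aggregate, write. The file names and columns are invented for the example; a Spark version of a similar step is sketched earlier in this section.

```python
# Minimal pandas illustration of the kind of pipeline step referred to above:
# read a CSV, clean it, aggregate daily totals, write the result. File names
# and columns are invented for the example.
import pandas as pd

raw = pd.read_csv("raw_orders.csv")  # hypothetical input file

cleaned = (
    raw.dropna(subset=["customer_id", "amount"])
       .assign(order_date=lambda df: pd.to_datetime(df["order_date"]))
)

daily_totals = (
    cleaned.groupby(cleaned["order_date"].dt.date)["amount"]
           .sum()
           .reset_index(name="total_amount")
)

daily_totals.to_csv("daily_totals.csv", index=False)
```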