Have:
● Familiarity with data security, privacy, and compliance frameworks
● Exposure to machine learning pipelines, MLOps, or AI-driven data products
● Experience with big data platforms and technologies such as EMR, Databricks, Kafka, and Spark
● Exposure to AI/ML concepts and collaboration with data science or AI teams
● Experience integrating data solutions with AI/ML platforms or supporting AI …
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Aspire Personnel Ltd
…in AWS cloud technologies for ETL pipeline, data warehouse, and data lake design/build and data movement. AWS data and analytics services (or open-source equivalents) such as EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. What you can expect: work to agile best practices and cross-functionally with multiple teams and stakeholders. You'll be using your technical skills …
…the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices: code reviews, testing frameworks, CI/CD, and code …
…with data frameworks (e.g., Spark, Kafka, Airflow) and ETL workflows. Proficiency with SQL and NoSQL databases, including query optimization. Experience with cloud data services (e.g., AWS Redshift, S3, Glue, EMR) and CI/CD for data pipelines. Strong programming skills in Python, Java, or Scala. Excellent problem-solving and collaboration skills. Ability to thrive in a fast-paced, dynamic …
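Query-optimization requirements like the one above usually come down to knowing when an index turns a full table scan into an index seek. A minimal, self-contained sketch using Python's stdlib SQLite driver (the `events` table and its columns are illustrative, not from any listing):

```python
import sqlite3

# In-memory table standing in for a typical events dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

def plan_for(query: str) -> str:
    # EXPLAIN QUERY PLAN returns rows whose last column describes the access path.
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(row[-1] for row in rows)

query = "SELECT * FROM events WHERE user_id = 42"
scan_detail = plan_for(query)   # no index on user_id yet: full scan

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
seek_detail = plan_for(query)   # planner now seeks via idx_events_user

print(scan_detail)
print(seek_detail)
```

The same habit of checking the plan before and after adding an index carries over directly to `EXPLAIN` in Redshift or PostgreSQL.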
Manchester, England, United Kingdom Hybrid / WFH Options
Lorien
…years in a technical leadership or management role. Strong technical proficiency in data modelling, data warehousing, and distributed systems. Hands-on experience with cloud data services (AWS Redshift, Glue, EMR or equivalent). Solid programming skills in Python and SQL. Familiarity with DevOps practices (CI/CD, Infrastructure as Code, e.g., Terraform). Excellent communication skills with both technical and non-technical …
…and delivering production-grade software and data systems. Proficiency in Python, Java, or Scala - comfortable writing robust, testable, and scalable code. Deep experience with AWS (Lambda, ECS/EKS, EMR, Step Functions, S3, IAM, etc.). Strong knowledge of distributed systems and streaming/data pipelines (Kafka, Spark, Delta, Airflow, etc.). Familiarity with infrastructure-as-code (Terraform, CloudFormation) …
…site. Must be SC cleared, or eligible to obtain SC-level security clearance. Job description: We are looking for a developer with expertise in Python, AWS Glue, Step Functions, EMR clusters, and Redshift to join our AWS Development Team. You will serve as the functional and domain expert on the project team to ensure client expectations are met. …
AI Data Engineer - 2 Positions | Boston, MA | Client: Movate | Long-term contract | Onsite | $60 per hour W2
- A robust background in AWS services such as Lambda, Glue, S3, EMR, SNS, SQS, CloudWatch, Redshift, and Bedrock.
- Strong expertise in SQL and relational databases like Oracle, MySQL, and PostgreSQL.
- Familiarity with the Salesforce platform, including data models and objects.
- Proficiency in …
London, England, United Kingdom Hybrid / WFH Options
Harnham
…as Airflow, DBT, Databricks, and data catalogue/observability tools (e.g. Monte Carlo, Atlan, Datahub). Knowledge of cloud infrastructure (AWS or GCP) - including services such as S3, RDS, EMR, ECS, IAM. Experience with DevOps tooling, particularly Terraform and CI/CD pipelines (e.g. Jenkins). A proactive, growth-oriented mindset with a passion for modern data and platform …
…database optimization skills. Excellent problem-solving and analytical skills.
Good to Have: Knowledge of Python/PySpark for big data solutions. Experience with AWS cloud services (S3, Glue, Athena, EMR, Redshift).
Mandatory Skills: Ab Initio (link removed)
Experience: 5-8 Years (link removed)
The expected compensation for this role ranges from $60,000 to $135,000 (link removed). Final …
…migration, metadata, and master data management. Experienced delivering data analytics solutions and supporting AI initiatives. Hands-on experience with cloud data platforms, ideally AWS (S3, Kinesis, Glue, Redshift, Lambda, EMR) and/or Azure (ADF, Synapse, Fabric, Azure Functions). Familiarity with modern data lakehouse environments such as Databricks, Snowflake, or Microsoft Fabric (highly desirable). Experience with Palantir Foundry …
AWS Data Engineer - 1 Position | New York | Long-term contract | Onsite
- A robust background in AWS services such as Lambda, Glue, S3, EMR, SNS, SQS, CloudWatch, Redshift, and Bedrock.
- Strong expertise in SQL and relational databases like Oracle, MySQL, and PostgreSQL.
- Excellent in data architecture, data modeling, Snowflake.
- Familiarity with the Salesforce platform, including data models and objects.
- Proficiency …
…to shape it with us. Your role will involve: designing and developing scalable, testable data pipelines using Python and Apache Spark; orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3; applying modern software engineering practices: version control, CI/CD, modular design, and automated testing; contributing to the development of a lakehouse architecture using Apache …
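One common way to keep Spark pipelines "testable", as the role above asks for, is to isolate transformation rules in pure functions that can be unit-tested without a cluster. A minimal sketch in plain Python (the `Event` fields and validation rule are illustrative, not from the posting):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    user_id: str
    amount_pence: int

def valid(e: Event) -> bool:
    # Drop malformed records before aggregation.
    return bool(e.user_id) and e.amount_pence >= 0

def total_by_user(events: list[Event]) -> dict[str, int]:
    # Pure aggregation logic; in a production pipeline this would be
    # expressed as Spark DataFrame operations (filter/groupBy/sum), but
    # keeping the rule as a plain function makes it trivially testable.
    totals: dict[str, int] = {}
    for e in filter(valid, events):
        totals[e.user_id] = totals.get(e.user_id, 0) + e.amount_pence
    return totals

events = [Event("a", 100), Event("a", 250), Event("", 999), Event("b", 50)]
print(total_by_user(events))  # {'a': 350, 'b': 50}
```

The same functions can then be mapped onto Spark (or Glue/EMR Serverless jobs) as thin wrappers, so the business logic stays covered by fast local tests.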
…issues and deliver solutions for high availability and reliability.
What We're Looking For: 4+ years in a senior technical role, ideally within data engineering. AWS stack: S3, EMR, EC2, Kinesis, Firehose, Flink/MSF. Python & SQL. Security and risk management frameworks. Excellent communication and collaboration skills.
Desirable: Event streaming and event streaming analytics in real-world applications …
…similar role. Experience of deploying and managing cloud infrastructure for data solutions. Hands-on experience of working with AWS services, including but not limited to EC2, S3, RDS, Lambda, Glue, EMR, VPC, IAM, Redshift, etc. Good experience in setting up reliable, highly secure cloud networking. Experience of setting up standard cloud governance policies through IAM roles. Extensive experience …
London, England, United Kingdom Hybrid / WFH Options
Cint
…move fast, stay compliant and take end-to-end responsibility for their products. Major elements of our platform include AWS (we make significant use of S3, RDS, Kinesis, EC2, EMR, ElastiCache, ElasticSearch and EKS). Elements of the platform will start to expand into GCP (Compute Engine, Cloud Storage, Google Kubernetes Engine and BigQuery). Other significant tools of …
…and maintain robust, scalable data pipelines and infrastructure that power our analytics, machine learning, and business intelligence initiatives. You'll work with cutting-edge technologies like Python, PySpark, AWS EMR, and Snowflake, and collaborate across teams to ensure data is clean, reliable, and actionable.
Responsibilities:
- Build and maintain scalable ETL pipelines using Python and PySpark to support data ingestion, transformation, and integration
- Develop and optimize distributed data workflows on AWS EMR for high-performance processing of large datasets
- Design, implement, and tune Snowflake data warehouses to support analytical workloads and reporting needs
- Partner with data scientists, analysts, and product teams to deliver reliable, well-documented datasets
- Ensure data integrity, consistency, and accuracy across multiple sources and systems
- Automate … and knowledge-sharing
Mandatory Skills Description:
- 5+ years of experience in data engineering or software development
- Strong proficiency in Python and PySpark
- Hands-on experience with AWS services, especially EMR, S3, Lambda, and Glue
- Deep understanding of Snowflake architecture and performance tuning
- Solid grasp of data modeling, warehousing concepts, and SQL optimization
- Familiarity with CI/CD tools (e.g. …
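Ensuring integrity and consistency across multiple sources, as described above, often reduces to deterministic upsert logic: when the same key arrives from several systems, keep the newest version. A minimal last-write-wins sketch in plain Python (the `id`/`updated_at` field names are illustrative; in the stack described this would typically be a Snowflake `MERGE` statement or a PySpark dedup over a window):

```python
def merge_latest(existing: dict, incoming: list[dict],
                 key: str = "id", version: str = "updated_at") -> dict:
    """Upsert incoming rows into a keyed store, keeping the newest
    version of each record - the idea a SQL MERGE expresses declaratively."""
    merged = dict(existing)
    for row in incoming:
        k = row[key]
        # Insert new keys; replace existing ones only if the row is newer.
        if k not in merged or row[version] > merged[k][version]:
            merged[k] = row
    return merged

current = {1: {"id": 1, "name": "old", "updated_at": "2024-01-01"}}
batch = [
    {"id": 1, "name": "new", "updated_at": "2024-02-01"},
    {"id": 2, "name": "fresh", "updated_at": "2024-01-15"},
]
print(merge_latest(current, batch))
```

Because only strictly newer versions replace existing rows, re-applying the same batch is a no-op, which is what makes reruns of a failed pipeline safe.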