Have: ● Familiarity with data security, privacy, and compliance frameworks ● Exposure to machine learning pipelines, MLOps, or AI-driven data products ● Experience with big data platforms and technologies such as EMR, Databricks, Kafka, and Spark ● Exposure to AI/ML concepts and collaboration with data science or AI teams ● Experience integrating data solutions with AI/ML platforms or supporting AI …
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Aspire Personnel Ltd
…in AWS cloud technologies for ETL pipeline, data warehouse, and data lake design/build and data movement. AWS data and analytics services (or open-source equivalents) such as EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. What you can expect: Work to agile best practices and cross-functionally with multiple teams and stakeholders. You’ll be using your technical skills …
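For context on the skill set above, here is a minimal sketch of a Glue-style PySpark ETL job of the kind such a role involves: read raw JSON from an S3 landing zone, clean it, and write partitioned Parquet for Athena or Redshift Spectrum to query. It is illustrative only; the bucket paths, column names, and job parameters are hypothetical.

```python
# Illustrative AWS Glue ETL job (a sketch, not any employer's actual code).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events from a hypothetical S3 landing zone.
raw = glue_context.spark_session.read.json("s3://example-landing/events/")

# Drop malformed rows and derive a date column for partitioning.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
       .select("event_id", "event_date", "payload")
)

# Write partitioned Parquet for Athena / Redshift Spectrum to query.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated/events/"
)

job.commit()
```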
Main Skills Needed: Proven experience in BI and data platform testing, including ETL, data warehouse, and reporting validation. Strong hands-on knowledge of AWS data tools (Glue, Redshift, Athena, EMR, Lambda). Confident with Power BI, SQL, Python/PySpark, and QA automation tools. Solid grasp of data governance, GDPR, and data quality standards. Background in Agile delivery and …
Watford, Hertfordshire, East Anglia, United Kingdom
Addition+
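As an illustration of the validation work this posting describes (not the employer's actual test suite), here is a minimal pytest/PySpark sketch of two common data-warehouse checks: referential integrity and mandatory-field completeness. Paths, tables, and column names are hypothetical.

```python
# Illustrative data-quality checks (hypothetical tables and columns).
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    # Local session for the sketch; a real suite might target EMR or Glue.
    return SparkSession.builder.appName("dq-tests").getOrCreate()


def test_no_orphan_customer_keys(spark):
    facts = spark.read.parquet("s3://example-curated/fact_sales/")
    dims = spark.read.parquet("s3://example-curated/dim_customer/")
    # A left_anti join keeps fact rows with no matching dimension row.
    orphans = facts.join(dims, "customer_id", "left_anti")
    assert orphans.count() == 0, "fact rows reference missing customers"


def test_mandatory_fields_present(spark):
    facts = spark.read.parquet("s3://example-curated/fact_sales/")
    missing = facts.filter("sale_id IS NULL OR sale_ts IS NULL")
    assert missing.count() == 0, "mandatory fields contain NULLs"
```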
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Lorien
…years in a technical leadership or management role. Strong technical proficiency in data modelling, data warehousing, and distributed systems. Hands-on experience with cloud data services (AWS Redshift, Glue, EMR, or equivalent). Solid programming skills in Python and SQL. Familiarity with DevOps practices (CI/CD, Infrastructure as Code, e.g. Terraform). Excellent communication skills with both technical and non-technical …
…CI/CD workflows and YAML-based configuration. Confident using Git for version control. Understanding of AWS security, IAM, and access management principles. Desirable: Knowledge of Athena, SQS, CloudWatch, CloudTrail, or EMR. Exposure to Infrastructure as Code (Terraform, CloudFormation). Experience working in secure or consulting environments. …
…Troubleshoot issues and deliver solutions for high availability and reliability. What We're Looking For: Experience: 4+ years in a senior technical role, ideally within data engineering. AWS stack: S3, EMR, EC2, Kinesis, Firehose, Flink/MSF. Python & SQL. Security and risk management frameworks. Excellent communication and collaboration skills. Desirable: Event streaming and event streaming analytics in real-world applications. …
London (City of London), South East England, United Kingdom
Accelero
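To make the streaming stack in the listing above concrete, here is a hedged sketch of the producer side using boto3 and Kinesis. The stream name, region, and event shape are hypothetical placeholders.

```python
# Illustrative Kinesis producer (stream, region, and event are hypothetical).
import json

import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-2")


def publish_event(event: dict, stream: str = "example-events") -> None:
    # The partition key controls shard routing; a stable entity id keeps
    # events for the same entity ordered within a shard.
    kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["entity_id"]),
    )


if __name__ == "__main__":
    publish_event({"entity_id": 42, "type": "trade", "qty": 100})
```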
…and maintain robust, scalable data pipelines and infrastructure that power our analytics, machine learning, and business intelligence initiatives. You'll work with cutting-edge technologies like Python, PySpark, AWS EMR, and Snowflake, and collaborate across teams to ensure data is clean, reliable, and actionable. Responsibilities: - Build and maintain scalable ETL pipelines using Python and PySpark to support data ingestion, transformation, and integration - Develop and optimize distributed data workflows on AWS EMR for high-performance processing of large datasets - Design, implement, and tune Snowflake data warehouses to support analytical workloads and reporting needs - Partner with data scientists, analysts, and product teams to deliver reliable, well-documented datasets - Ensure data integrity, consistency, and accuracy across multiple sources and systems - Automate … and knowledge-sharing. Mandatory Skills Description: - 5+ years of experience in data engineering or software development - Strong proficiency in Python and PySpark - Hands-on experience with AWS services, especially EMR, S3, Lambda, and Glue - Deep understanding of Snowflake architecture and performance tuning - Solid grasp of data modeling, warehousing concepts, and SQL optimization - Familiarity with CI/CD tools (e.g. …
London (City of London), South East England, United Kingdom
Luxoft
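A sketch of the pipeline shape this posting describes, under the assumption that the Snowflake Spark connector is available on the EMR cluster's classpath: transform lake data with PySpark, then load an aggregate into Snowflake. All connection values, paths, and table names are placeholders, and a real job would pull credentials from a secrets manager rather than hard-coding them.

```python
# Illustrative EMR-to-Snowflake load (a sketch, not the team's actual pipeline).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("emr-to-snowflake").getOrCreate()

# Ingest raw data from the lake and build a daily revenue aggregate.
orders = spark.read.parquet("s3://example-lake/raw/orders/")
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count(F.lit(1)).alias("order_count"),
    )
)

# Snowflake connector options (placeholders only).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

# Write the aggregate into a Snowflake table via the Spark connector.
(daily.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_REVENUE")
    .mode("overwrite")
    .save())
```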
Track record of delivering data analytics and AI/ML-enabling solutions across complex environments. Hands-on experience with cloud data platforms, ideally AWS (S3, Kinesis, Glue, Redshift, Lambda, EMR). Experience with Azure technologies (ADF, Synapse, Fabric, Azure Functions) is also valued. Strong understanding of modern data lakehouse architectures, such as Databricks, Snowflake, or Microsoft Fabric (highly desirable) …
…DevOps tools and techniques. What You'll Bring: Proven experience in DevOps or a similar engineering role. Proven experience deploying and managing AWS infrastructure (EC2, S3, RDS, Lambda, Glue, EMR, VPC, IAM, Redshift, etc.). Strong background in Terraform for infrastructure as code. Expertise in CI/CD automation using GitHub Actions. Hands-on experience managing Linux environments and …
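To illustrate the deploy-and-verify loop this role covers, here is a small, hypothetical smoke test that a GitHub Actions step might run after a Terraform apply, using boto3 to confirm core resources exist and are reachable. The resource identifiers are placeholders.

```python
# Illustrative post-deploy smoke test (hypothetical resource identifiers).
import boto3
from botocore.exceptions import ClientError


def check_bucket(name: str) -> bool:
    # head_bucket succeeds only if the bucket exists and is accessible.
    try:
        boto3.client("s3").head_bucket(Bucket=name)
        return True
    except ClientError:
        return False


def check_rds(instance_id: str) -> bool:
    rds = boto3.client("rds")
    resp = rds.describe_db_instances(DBInstanceIdentifier=instance_id)
    return resp["DBInstances"][0]["DBInstanceStatus"] == "available"


if __name__ == "__main__":
    assert check_bucket("example-data-lake"), "data lake bucket missing"
    assert check_rds("example-analytics-db"), "RDS instance not available"
    print("smoke test passed")
```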