London, South East, England, United Kingdom (Hybrid/Remote Options)
Involved Solutions
… business decisions. Responsibilities for the AWS Data Engineer: design, build and maintain scalable data pipelines and architectures within the AWS ecosystem; leverage services such as AWS Glue, Lambda, Redshift, EMR and S3 to support data ingestion, transformation and storage; work closely with data analysts, architects and business stakeholders to translate requirements into robust technical solutions; implement and optimise ETL …
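For illustration, here is a minimal PySpark sketch of the ingest-transform-store pattern such a role describes; the bucket paths, column names and app name are hypothetical, and a real Glue or EMR job would add error handling and job bookmarking:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical bucket paths for illustration only.
RAW_PATH = "s3://example-raw-bucket/orders/"
CURATED_PATH = "s3://example-curated-bucket/orders/"

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: read raw JSON landed in S3.
raw = spark.read.json(RAW_PATH)

# Transform: basic deduplication, filtering and a derived partition column.
curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Store: partitioned Parquet, suitable for Redshift Spectrum or Athena downstream.
curated.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)
```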
Manchester, Lancashire, England, United Kingdom (Hybrid/Remote Options)
Lorien
… years in a technical leadership or management role; strong technical proficiency in data modelling, data warehousing and distributed systems; hands-on experience with cloud data services (AWS Redshift, Glue, EMR or equivalent); solid programming skills in Python and SQL; familiarity with DevOps practices (CI/CD, Infrastructure as Code, e.g. Terraform); excellent communication skills with both technical and non-technical …
… site. Must be SC cleared, or eligible to obtain SC-level security clearance. Job description: We are looking for a developer with expertise in Python, AWS Glue, Step Functions, EMR and Redshift to join our AWS Development Team. You will serve as the functional and domain expert in the project team, ensuring client expectations are met.
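As a rough sketch of how such a Glue/Step Functions pipeline might be triggered, the boto3 snippet below starts a Step Functions execution and a single Glue job run; the ARN, job name and arguments are made up for illustration:

```python
import json
import boto3

# Hypothetical ARN and job name for illustration only.
STATE_MACHINE_ARN = "arn:aws:states:eu-west-2:123456789012:stateMachine:nightly-etl"
GLUE_JOB_NAME = "curate-orders"

sfn = boto3.client("stepfunctions")
glue = boto3.client("glue")

# Kick off the orchestrating state machine for a given run date.
execution = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    input=json.dumps({"run_date": "2024-01-01"}),
)
print("Started:", execution["executionArn"])

# Alternatively, start one Glue job run directly.
run = glue.start_job_run(JobName=GLUE_JOB_NAME, Arguments={"--run_date": "2024-01-01"})
print("Glue run id:", run["JobRunId"])
```

In practice the state machine would own retries and sequencing, so triggering individual Glue jobs by hand would be the exception rather than the rule.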
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom (Hybrid/Remote Options)
Reed
… projects. Required Skills & Qualifications: Demonstrable experience in building data pipelines using Spark or Pandas. Experience with major cloud providers (AWS, Azure, or Google). Familiarity with big data platforms (EMR, Databricks, or Dataproc). Knowledge of data platforms such as Data Lakes, Data Warehouses, or Data Meshes. Drive for self-improvement and eagerness to learn new programming languages. Ability …
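As a small, self-contained example of the Pandas-style pipeline this listing asks for (file paths and column names are hypothetical; real inputs would typically be S3 URIs read via s3fs):

```python
import pandas as pd

# Hypothetical local paths for illustration; production code would read from S3.
events = pd.read_csv("raw/events.csv", parse_dates=["event_ts"])

# Clean, derive a date column, and aggregate to daily counts per event type.
daily = (
    events.dropna(subset=["user_id"])
          .assign(event_date=lambda df: df["event_ts"].dt.date)
          .groupby(["event_date", "event_type"], as_index=False)
          .size()
          .rename(columns={"size": "event_count"})
)

# Writing Parquet requires pyarrow (or fastparquet) to be installed.
daily.to_parquet("curated/daily_event_counts.parquet", index=False)
```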
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom (Hybrid/Remote Options)
Reed
What We’re Looking For: Experience building data pipelines using Spark or Pandas. Familiarity with major cloud platforms (AWS, Azure, or GCP). Understanding of big data tools (EMR, Databricks, Dataproc). Knowledge of data architectures (Data Lakes, Warehouses, Mesh). A proactive mindset with a passion for learning new technologies. Nice-to-Have Skills: Automated data quality …
… change) and driven to achieve a full end-to-end continuous deployment pipeline. Major elements of our platform include AWS (we make significant use of S3, RDS, Kinesis, EC2, EMR, ElastiCache, Elasticsearch and EKS). Elements of the platform will start to expand into GCP (Compute Engine, Cloud Storage, Google Kubernetes Engine and BigQuery). Other significant tools of …
… and maintain the data platform; building infrastructure and data architectures in CloudFormation and SAM; designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake; building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow; monitoring and reporting on the … have 6+ years of experience in a Data Engineering role; strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift; experience administering databases and data platforms; good coding discipline in terms of style, structure, versioning, documentation and unit tests; strong proficiency in CloudFormation, Python …
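To make the "pipelines as code" responsibility concrete, here is a minimal Apache Airflow 2.x DAG sketch; the DAG id, schedule and task bodies are placeholders, and real tasks would typically invoke Glue, EMR or Redshift operators instead of plain Python callables:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task bodies for illustration only.
def extract(**context):
    print("pull raw files for", context["ds"])

def transform(**context):
    print("clean and enrich for", context["ds"])

# The `schedule` argument requires Airflow 2.4+; older versions use `schedule_interval`.
with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    # Declare ordering: extract runs before transform.
    t_extract >> t_transform
```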
… Performance-related bonus, private medical care, and many more. Role and Responsibilities: Develop and maintain AWS-based data pipelines using Python, PySpark, Spark SQL, AWS Glue, Step Functions, Lambda, EMR, and Redshift. Design, implement, and optimise data architecture for scalability, performance, and security. Work closely with business and technical stakeholders to understand requirements and translate them into robust solutions. … client workshops, gather feedback, and provide technical guidance. Required Skills & Experience: Strong hands-on experience in Python, PySpark, and Spark SQL. Proven expertise in AWS Glue, Step Functions, Lambda, EMR, and Redshift. Solid understanding of cloud architecture, security, and scalability best practices. Experience designing and implementing CI/CD pipelines for data workflows. Proven ability to structure solutions, solve …
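As an illustrative sketch of the Spark SQL side of this work, the snippet below registers a curated dataset as a temporary view and runs a SQL aggregation of the kind that might feed a Redshift reporting table; the paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("order-summary").getOrCreate()

# Hypothetical curated dataset; path is illustrative only.
spark.read.parquet("s3://example-curated-bucket/orders/").createOrReplaceTempView("orders")

# Spark SQL aggregation over the registered view.
summary = spark.sql("""
    SELECT order_date,
           COUNT(*)         AS orders,
           SUM(order_total) AS revenue
    FROM orders
    GROUP BY order_date
""")

summary.write.mode("overwrite").parquet("s3://example-curated-bucket/order_summary/")
```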