experience in ETL development and data engineering. Hands-on experience with Snowflake including data modeling, performance tuning, and SQL scripting. Proficiency in AWS services such as S3, Lambda, Glue, Redshift, and CloudWatch. Strong programming skills in Python or Scala for data processing. Experience with orchestration tools like Apache Airflow or AWS Step Functions. Familiarity with version control systems (e.g.
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
strategic business decisions. Responsibilities for the AWS Data Engineer: Design, build and maintain scalable data pipelines and architectures within the AWS ecosystem Leverage services such as AWS Glue, Lambda, Redshift, EMR and S3 to support data ingestion, transformation and storage Work closely with data analysts, architects and business stakeholders to translate requirements into robust technical solutions Implement and optimise
anomaly detection, and GenAI-powered automation. Support GenAI initiatives through data readiness, synthetic data generation, and prompt engineering. Mandatory Skills: Cloud Platforms: Deep experience with AWS (S3, Lambda, Glue, Redshift) and/or Azure (Data Lake, Synapse). Programming & Scripting: Proficiency in Python, SQL, PySpark etc. ETL/ELT & Streaming: Expertise in technologies like Apache Airflow, Glue, Kafka, Informatica
big data processing frameworks (e.g., Spark, Flink). Strong experience with cloud platforms (AWS, Azure, GCP) and related data services. Hands-on experience with data warehousing tools (e.g., Snowflake, Redshift, BigQuery), Databricks running on multiple cloud platforms (AWS, Azure and GCP) and data lake technologies (e.g., S3, ADLS, HDFS). Expertise in containerization and orchestration tools like Docker and
Data Engineers to join a large on-prem to AWS migration programme, replicating data pipelines in readiness for a lift and shift to the cloud. Essential: AWS and Redshift. In full: Role purpose: Reporting to the Lead Data Engineer, the Data Engineer is responsible for designing and maintaining scalable data pipelines, ensuring data availability, quality, and performance to
South West London, London, United Kingdom Hybrid/Remote Options
ARC IT Recruitment Ltd
based architecture. You will need to have skills and experience in the following: Cloud Data Architecture: Strong knowledge of modern, cloud-based data architecture and tooling (e.g., S3, Glue, Redshift, Athena, Lake Formation, Iceberg/Delta). AWS Platform Build: Demonstrable experience designing and building modern data platforms in AWS. ETL/Orchestration Expertise: Expertise in ETL/ELT
with analysts and architects to ensure compliance with government security standards. Troubleshoot and resolve issues in complex cloud environments. Essential Skills Strong experience with AWS services (Glue, Lambda, S3, Redshift, IAM). Proficiency in Python and SQL for data engineering tasks. Knowledge of data modelling, ETL frameworks, and best practices. Familiarity with security and compliance in government or regulated
or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. Snowflake, BigQuery, Redshift). Strong understanding of data governance, security, and compliance (experience within financial services is a plus). Leadership experience, with the ability to mentor, influence, and set technical direction.
London, South East, England, United Kingdom Hybrid/Remote Options
Involved Solutions
persuasively Desirable Skills for the Senior Data Engineer: Experience with event sourcing, dbt, or related data transformation tools Familiarity with PostgreSQL and cloud-native data services (Azure Event Hub, Redshift, Kinesis, S3, Blob Storage, OneLake, or Microsoft Fabric) Understanding of machine learning model enablement and operationalisation within data architectures Experience working within Agile delivery environments If you are an
AWS Developer Location: Jersey City, NJ Requirement: Design and develop scalable ETL pipelines using AWS services such as: AWS Glue for serverless data integration AWS Lambda for lightweight transformations Amazon S3 for data lake storage Amazon Redshift or RDS for data warehousing Integrate data from diverse sources including APIs, databases, and flat files into AWS-based data … data transformation logic using PySpark, Python, or SQL within AWS Glue or Lambda. Monitor, schedule, and orchestrate ETL workflows using AWS Step Functions, Glue Workflows, or Apache Airflow on Amazon MWAA. Ensure data quality, consistency, and lineage using AWS Glue Data Catalog and AWS Lake Formation. Optimize ETL performance and cost-efficiency through partitioning, parallelism, and resource tuning. Implement
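The partitioning optimisation mentioned in this listing is commonly implemented with Hive-style key=value prefixes in S3, which Athena, Glue, and Redshift Spectrum can use to prune data at scan time. A minimal sketch (the helper name and the "orders" dataset are illustrative, not taken from the listing):

```python
from datetime import date

def hive_partition_key(dataset: str, d: date) -> str:
    """Build a Hive-style S3 prefix (year=/month=/day=) so query engines
    can skip partitions that fall outside a date filter."""
    return f"{dataset}/year={d.year:04d}/month={d.month:02d}/day={d.day:02d}/"

# Writing each day's data under its own prefix enables partition pruning.
print(hive_partition_key("orders", date(2024, 3, 7)))
# orders/year=2024/month=03/day=07/
```

With this layout, a query filtering on `year` and `month` only reads the matching prefixes instead of the whole dataset, which is where most of the cost and performance gain comes from.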
warehousing, data lakes, and ETL processes. Programming: Experience with programming languages like Python, Java, or Scala. Cloud Data Platform: Strong experience with Cloud Data Platforms like Databricks (preferred), Snowflake, Redshift, etc. Databricks Expertise: Deep understanding of Databricks architecture, including clusters, jobs, notebooks, and workspaces. Database Technologies: Knowledge of relational and NoSQL databases. Infrastructure as Code: Experience with tools like
seamless data processing and analytics. Hands-on experience in data integration, including designing and optimising data pipelines (batch and streaming) and integrating cloud-based platforms (e.g., Azure Synapse, AWS Redshift, Google BigQuery) with legacy systems, ensuring performance and scalability. Deep knowledge of ETL/ELT processes, leveraging tools like Apache Airflow, dbt, or Informatica, with a focus on ensuring
lakes. Ensure data quality, consistency, and integrity throughout the data lifecycle. Develop and maintain documentation for data models and ETL processes. Minimum Requirements: 5+ years' experience working within AWS technologies (S3, Redshift, RDS, etc.). 3+ years' experience working with dbt. 3+ years of experience with orchestration tooling (Airflow, Prefect, Dagster). Strong programming skills in languages such as Python or Java. Familiarity with big
Manchester, Lancashire, England, United Kingdom Hybrid/Remote Options
Lorien
with 2+ years in a technical leadership or management role Strong technical proficiency in data modelling, data warehousing, and distributed systems Hands-on experience with cloud data services (AWS Redshift, Glue, EMR or equivalent) Solid programming skills in Python and SQL Familiarity with DevOps practices (CI/CD, Infrastructure as Code - e.g., Terraform) Excellent communication skills with both technical
. Hands-on experience with data orchestration tools (Airflow, dbt, Dagster, or Prefect). Solid understanding of cloud data platforms (AWS, GCP, or Azure) and data warehousing (Snowflake, BigQuery, Redshift). Experience with streaming technologies (Kafka, Kinesis, or similar). Strong knowledge of data modelling, governance, and architecture best practices. Excellent leadership, communication, and stakeholder management skills. NICE
Excellent analytical skills, attention to detail, and ability to interpret SAP business data. Preferred Skills Experience working with SAP extractors, IDOCs, or BW data flows. Familiarity with Snowflake, BigQuery, AWS Redshift, or other cloud data platforms. Exposure to data testing frameworks like Great Expectations or PyTest. Understanding of BI tools such as Power BI, Tableau, or SAP Analytics Cloud. Knowledge
with CI/CD toolsets such as: CloudFormation, Bitbucket. Working knowledge of AWS serverless architecture. Working knowledge of relational database management systems and data integration tools. IDEAL CANDIDATE: AWS (Redshift, EMR, Athena) - 3+ years; Python/C++ or any programming language background; SQL/Oracle or SQL Server experience, plus knowledge of data analytics, data management, and data warehousing.
Gurobi, or MySQL) 4+ years' experience working on real-time data and streaming applications 4+ years of experience with NoSQL implementation (Mongo, Cassandra) 4+ years of data warehousing experience (Redshift or Snowflake) 4+ years of experience with UNIX/Linux including basic commands and shell scripting 2+ years of experience with Agile engineering practices Capital One will consider sponsoring
of experience with UNIX/Linux including basic commands and shell scripting 2+ years' experience working on real-time data and streaming applications 2+ years of data warehousing experience (Redshift or Snowflake) 2+ years of experience with Agile engineering practices At this time, Capital One will not sponsor a new applicant for employment authorization, or offer any immigration related