Belfast, City of Belfast, County Antrim, United Kingdom Hybrid / WFH Options
Aspire Personnel Ltd
in AWS cloud technologies for ETL pipeline, data warehouse and data lake design/building and data movement. AWS data and analytics services (or open-source equivalents) such as EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. What you can expect: work to agile best practices and cross-functionally with multiple teams and stakeholders. You’ll be using your technical skills …
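The ETL focus above can be illustrated with a minimal extract-transform-load step in plain Python. This is a local sketch only: the record shape and field names (`id`, `amount`) are hypothetical, and on AWS this logic would typically live inside a Glue job or a Lambda handler rather than a script.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalise types and drop incomplete records."""
    out = []
    for row in rows:
        if not row.get("amount"):
            continue  # skip records missing the (hypothetical) amount field
        out.append({"id": row["id"], "amount": float(row["amount"])})
    return out

def load(rows: list[dict]) -> str:
    """Load: serialise back to CSV, a local stand-in for an S3 write
    or a Redshift COPY in a real pipeline."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "amount"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

raw = "id,amount\n1,10.5\n2,\n3,4.0\n"
print(load(transform(extract(raw))))
```

The three stages are deliberately separate functions so each can be tested in isolation, the same decomposition a Glue or Airflow pipeline would encourage.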
Bristol, Avon, England, United Kingdom Hybrid / WFH Options
Aspire Personnel Ltd
in AWS cloud technologies for ETL pipeline, data warehouse and data lake design/building and data movement. AWS data and analytics services (or open-source equivalents) such as EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. What you can expect: work to agile best practices and cross-functionally with multiple teams and stakeholders. You’ll be using your technical skills …
the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL Server, Oracle, PostgreSQL, MySQL); NoSQL (e.g., MongoDB, Cassandra, DynamoDB, Neo4j). Solid understanding of software engineering best practices - code reviews, testing frameworks, CI/CD, and code …
Leeds, West Yorkshire, England, United Kingdom Hybrid / WFH Options
Robert Walters
Experience: Proven experience as a Senior Data Engineer in a large-scale or complex environment. Strong hands-on expertise with AWS data services (e.g., S3, Glue, Lambda, Redshift, Athena, EMR). Experience building data lakes and modern data platforms from the ground up. Proficiency with Python, SQL, and orchestration tools such as Airflow or dbt. Strong understanding of data …
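A hedged sketch of the kind of pipeline step this role describes: a SQL aggregation over staged data, the pattern an Airflow task or dbt model would typically own. Here `sqlite3` stands in for Redshift or Athena, and the table and column names (`staged_events`, `daily_summary`) are purely illustrative.

```python
import sqlite3

def build_daily_summary(conn: sqlite3.Connection) -> list[tuple]:
    """Aggregate staged events into a daily summary table,
    a transform step an orchestrator would schedule downstream
    of the raw-data load."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS daily_summary AS
        SELECT day, COUNT(*) AS events, SUM(value) AS total
        FROM staged_events
        GROUP BY day
        ORDER BY day
    """)
    return conn.execute("SELECT * FROM daily_summary").fetchall()

# Stage some sample events in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staged_events (day TEXT, value REAL)")
conn.executemany("INSERT INTO staged_events VALUES (?, ?)",
                 [("2024-01-01", 2.0), ("2024-01-01", 3.0), ("2024-01-02", 1.0)])
print(build_daily_summary(conn))
```

Keeping the transform as a single idempotent SQL statement (`CREATE TABLE IF NOT EXISTS ... AS SELECT`) means a scheduler can safely re-run the task, a property both Airflow and dbt rely on.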
Main Skills Needed: Proven experience in BI and data platform testing, including ETL, data warehouse, and reporting validation. Strong hands-on knowledge of AWS data tools (Glue, Redshift, Athena, EMR, Lambda). Confident with Power BI, SQL, Python/PySpark, and QA automation tools. Solid grasp of data governance, GDPR, and data quality standards. Background in Agile delivery and …
Watford, Hertfordshire, United Kingdom
Addition+
Main Skills Needed: Proven experience in BI and data platform testing, including ETL, data warehouse, and reporting validation. Strong hands-on knowledge of AWS data tools (Glue, Redshift, Athena, EMR, Lambda). Confident with Power BI, SQL, Python/PySpark, and QA automation tools. Solid grasp of data governance, GDPR, and data quality standards. Background in Agile delivery and …
requirements into technical architecture. Provide technical leadership and guidance to engineering teams. Required Skills & Experience: Core Technical Expertise: Strong hands-on skills in AWS Data Services (S3, Redshift, Glue, EMR, Kinesis, Lake Formation, DynamoDB). Expertise in Apache Kafka (event streaming) and Apache Spark (batch and streaming). Proficiency in Python for data engineering and automation. Strong knowledge of …
Scala, or Java. 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud). 4+ years of experience with distributed data/computing tools (Spark, MapReduce, Hadoop, Hive, EMR, Kafka, Gurobi, or MySQL). 4+ years of experience designing, building and optimizing data pipelines and ETL workflows at scale. 4+ years of experience with UNIX/Linux including basic …
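The distributed-computing tools named above (Spark, MapReduce, Hadoop) share one core idea, which can be sketched locally in pure Python: a map phase that emits partial results per input split, and a reduce phase that merges them. This is a single-process illustration of the programming model only, not how Spark or Hadoop would actually be invoked.

```python
from collections import Counter
from functools import reduce

def mapper(line: str) -> Counter:
    """Map: emit per-word partial counts for one input split."""
    return Counter(line.split())

def reducer(a: Counter, b: Counter) -> Counter:
    """Reduce: merge the partial counts from two splits."""
    return a + b

# Each string stands in for one split of a distributed input file.
lines = ["spark hadoop hive", "hive kafka", "spark spark"]
counts = reduce(reducer, map(mapper, lines), Counter())
print(counts)
```

Because `reducer` is associative, the merge can happen in any order across workers, which is exactly what lets a cluster parallelise the reduce phase.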
SQL, Scala, or Java. 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud). 5+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL). 4+ years of experience working on real-time data and streaming applications. 4+ years of experience with NoSQL implementation (Mongo, Cassandra). 4+ years of data …
SQL, Scala, or Java. 2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud). 3+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL). 2+ years of experience working on real-time data and streaming applications. 2+ years of experience with NoSQL implementation (Mongo, Cassandra). 2+ years of data …
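The real-time streaming experience these listings ask for usually centres on windowed aggregation. A minimal local sketch of a tumbling-window sum, the pattern behind Kinesis Analytics or Kafka Streams aggregations (event shape and window size are illustrative):

```python
from collections import defaultdict

def tumbling_window(events: list[tuple[int, float]], window_size: int) -> dict[int, float]:
    """Group (timestamp, value) events into fixed-size tumbling windows
    and sum each window's values. Keys are window start timestamps."""
    windows: dict[int, float] = defaultdict(float)
    for ts, value in events:
        # Floor the timestamp to its window start, e.g. ts=11, size=10 -> 10.
        windows[ts // window_size * window_size] += value
    return dict(sorted(windows.items()))

events = [(1, 1.0), (4, 2.0), (11, 3.0), (12, 1.0)]
print(tumbling_window(events, 10))
```

A production streaming engine adds the hard parts this sketch omits: out-of-order events, watermarks, and incremental state, which is why the listings pair this skill with Kafka or Kinesis specifically.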