Sunbury-On-Thames, London, United Kingdom Hybrid / WFH Options
BP Energy
PySpark for data processing and automation. Strong command of SQL for data querying, transformation, and performance tuning. Deep experience with cloud platforms, preferably AWS (e.g., S3, Glue, Redshift, Athena, EMR, Lambda). Experience with Azure or GCP is a plus. Experience building and managing data lakes and data warehouses. Strong understanding of distributed systems and big data processing. Experience …
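The SQL querying and transformation skills this listing asks for can be sketched minimally. The example below uses SQLite as a stand-in for a warehouse engine such as Redshift or Athena; the table, columns, and data are hypothetical:

```python
import sqlite3

# In-memory SQLite stands in for a cloud warehouse (Redshift, Athena, etc.).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id INTEGER, event_type TEXT, amount REAL);
    INSERT INTO raw_events VALUES
        (1, 'purchase', 20.0),
        (1, 'purchase', 5.0),
        (2, 'refund', -3.0),
        (2, 'purchase', 10.0);
    -- Indexing the filter/group column is a typical first performance-tuning step.
    CREATE INDEX idx_events_user ON raw_events (user_id);
""")

# Transformation: aggregate purchase totals per user.
rows = conn.execute("""
    SELECT user_id, SUM(amount) AS total_spent
    FROM raw_events
    WHERE event_type = 'purchase'
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
print(rows)  # [(1, 25.0), (2, 10.0)]
```

In a real pipeline the same statement would run against warehouse tables, with tuning done via the engine's query plan rather than a SQLite index.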
* Proven experience as a Data Architect or Lead Data Engineer in AWS environments
* Deep understanding of cloud-native data services: S3, Redshift, Glue, Athena, EMR, Kinesis, Lambda
* Strong hands-on expertise in data modelling, distributed systems, and pipeline orchestration (Airflow, Step Functions)
* Background in energy, trading, or financial markets is a strong plus
* Excellent knowledge of Python, SQL, and …
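The pipeline orchestration this role mentions (Airflow, Step Functions) boils down to running tasks in dependency order. A dependency-free sketch using the standard library; the task names and DAG are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: task -> set of upstream dependencies,
# mirroring how Airflow or Step Functions sequence data-pipeline stages.
dag = {
    "extract_s3": set(),
    "transform_emr": {"extract_s3"},
    "load_redshift": {"transform_emr"},
    "refresh_athena": {"transform_emr"},
}

# static_order() yields a valid execution order respecting all dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Real orchestrators add scheduling, retries, and state on top, but the ordering contract is the same.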
Manchester, Lancashire, England, United Kingdom Hybrid / WFH Options
Lorien
… junior engineers and sharing best practices. Staying ahead of the curve with emerging data technologies.

What You'll Bring:
* Solid hands-on experience with AWS (Glue, Lambda, S3, Redshift, EMR)
* Strong Python, SQL, and PySpark skills
* Deep understanding of data warehousing and lakehouse concepts
* Problem-solving mindset with a focus on performance and scalability
* Excellent communication skills across technical …
Overview

The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in …

* … and scalable AI solutions for business problems
* Interact with customers directly to understand the business problem; assist in the implementation of their ML ecosystem
* Leverage Foundation Models on Amazon Bedrock and Amazon SageMaker to meet performance needs
* Analyze large historical data to automate and optimize key processes
* Communicate clearly with attention to detail, translating rigorous mathematical …
* … compelling customer proposals and present to executives; proficient English communication in technical and business settings

Preferred Qualifications
* Experience with AWS services (Amazon SageMaker, Amazon Bedrock, EMR, S3, EC2); AWS Certification (Solutions Architect Associate, ML Engineer Associate) preferred
* Knowledge of AI/ML and generative AI; hands-on prompt engineering and experience deploying and hosting large foundation models
* Experience …
Milton Keynes, Buckinghamshire, South East, United Kingdom
Upbeat Ideas UK Ltd
… and ensure consistency across various data sources. Document test results and provide detailed reports on data quality findings.

Required Skills and Experience:
* Proven experience with AWS services such as EMR, Lambda, Redshift, Firehose, S3, Iceberg, Athena, and DynamoDB
* Strong understanding of data ingestion, parsing, aggregation, and schema validation processes
* Proficiency in SQL for data querying and validation
* Experience with …
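The schema validation work described here can be sketched in plain Python. Field names and rules below are hypothetical; in a real pipeline a check like this would run against records landing via Firehose or S3:

```python
from datetime import datetime

def _is_iso(value: str) -> bool:
    """True if the string parses as an ISO-8601 timestamp."""
    try:
        datetime.fromisoformat(value)
        return True
    except ValueError:
        return False

# Hypothetical expected schema: field name -> validator predicate.
SCHEMA = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "event_time": lambda v: isinstance(v, str) and _is_iso(v),
    "amount": lambda v: isinstance(v, (int, float)),
}

def validate(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = [f"missing field: {f}" for f in SCHEMA if f not in record]
    errors += [
        f"invalid value for {f}: {record[f]!r}"
        for f, check in SCHEMA.items()
        if f in record and not check(record[f])
    ]
    return errors

good = {"user_id": 1, "event_time": "2024-01-01T00:00:00", "amount": 9.99}
bad = {"user_id": -5, "amount": "free"}
print(validate(good))  # []
print(validate(bad))
```

Collecting all errors per record (rather than failing on the first) is what makes the "detailed reports on data quality findings" above possible.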
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
… will have strong Python development skills (must be able to design and write clean, maintainable, and testable code). Extensive AWS expertise, particularly across:
* Lambda
* Glue
* Glue Data Catalog
* EMR services
* API Gateway
Relational database experience with Aurora Postgres (including query performance tuning). Spark experience, including pipelines using Spark on data stored in Iceberg table format in S3 …
London, South East, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
… data platform, with responsibilities including:
* Designing and implementing scalable data pipelines using Python and Apache Spark
* Building and orchestrating workflows using AWS services such as Glue, Lambda, S3, and EMR Serverless
* Applying best practices in software engineering: CI/CD, version control, automated testing, and modular design
* Supporting the development of a lakehouse architecture using Apache Iceberg
* Collaborating with …

* … engineering fundamentals: ETL/ELT, schema evolution, batch processing
* Experience or strong interest in Apache Spark for distributed data processing
* Familiarity with AWS data tools (e.g., S3, Glue, Lambda, EMR)
* Strong communication skills and a collaborative mindset
* Comfortable working in Agile environments and engaging with stakeholders

Bonus Skills
* Experience with Apache Iceberg or similar table formats (e.g., Delta Lake) …
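The ETL/ELT fundamentals listed above follow an extract-transform-load shape. A toy, dependency-free sketch; in production this logic would run as a Spark job on Glue or EMR Serverless, and all names and data here are illustrative:

```python
import csv
import io

# Extract: in production this would be spark.read from S3; here, an in-memory CSV.
RAW = "city,temp_c\nLondon,14\nManchester,11\nLondon,16\n"

def extract(text: str) -> list[dict]:
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> dict[str, float]:
    """Aggregate mean temperature per city (a groupBy().avg() in Spark)."""
    totals: dict[str, list[float]] = {}
    for row in rows:
        totals.setdefault(row["city"], []).append(float(row["temp_c"]))
    return {city: sum(vals) / len(vals) for city, vals in totals.items()}

def load(result: dict[str, float]) -> str:
    """Serialize the aggregate; a real job would write Iceberg/Parquet to S3."""
    return "\n".join(f"{city},{avg:.1f}" for city, avg in sorted(result.items()))

result = transform(extract(RAW))
print(load(result))
```

Keeping extract, transform, and load as separate pure functions is what makes the "automated testing and modular design" practice above straightforward.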
London, South East, England, United Kingdom Hybrid / WFH Options
Robert Half
… and fine-tune query performance on Aurora Postgres and other relational databases. Architect and manage data solutions on AWS using serverless technologies such as Lambda, Glue, Glue Data Catalog, EMR Serverless, and API Gateway. Implement and manage large-scale data processing with Spark (Iceberg tables in S3, Gold layer in Aurora Postgres). Collaborate with data scientists, analysts, and …

* … extensible, and testable code
* Proven experience with relational databases (Aurora Postgres preferred), including performance optimisation
* Extensive AWS experience, particularly with serverless data engineering tools (Lambda, Glue, Glue Data Catalog, EMR Serverless, API Gateway, S3)
* Solid Spark experience with large-scale data pipelines and data lakehouse architectures (Iceberg format a plus)
* Hands-on experience with data modelling and …