in software development. 2+ years of experience with microservices, Kubernetes, or AWS EKS. 2+ years of experience with AWS services (e.g., S3, EMR, Glue, EC2, ECS, IAM, Lambda, CodeBuild, Athena, Redis, Elasticsearch, RDS, Aurora, and Airflow). Proficiency in CI/CD tools like Jenkins, Terraform, or similar automation tools. Strong SQL skills with hands-on experience using data …
flows using Apache Kafka, Apache NiFi, and MySQL/PostgreSQL. Develop within the components in the AWS cloud platform using services such as Redshift, SageMaker, API Gateway, QuickSight, and Athena. Communicate with data owners to set up and verify configuration parameters. Document SOPs related to streaming configuration, batch configuration, or API management depending on role requirements. Document details of … and problem-solving skills. Experience instituting data observability solutions using tools such as Grafana, Splunk, AWS CloudWatch, Kibana, etc. Experience with container technologies such as Docker, Kubernetes, and Amazon EKS. Qualifications: Ability to obtain an active Secret clearance or higher. Bachelor's Degree in Computer Science, Engineering, or another technical discipline required, OR a minimum of 8 years equivalent …
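As a rough sketch of the kind of Kafka-to-PostgreSQL flow this listing describes, assuming the kafka-python and psycopg2 client libraries; the topic, table, and connection details are invented for illustration:

```python
import json

import psycopg2
from kafka import KafkaConsumer

# Consume JSON events from a Kafka topic and land them in PostgreSQL.
# Topic, table, and connection settings below are placeholders.
consumer = KafkaConsumer(
    "sensor-events",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

conn = psycopg2.connect(
    dbname="analytics", user="etl", password="secret", host="localhost"
)
cur = conn.cursor()

for message in consumer:
    event = message.value
    cur.execute(
        "INSERT INTO sensor_events (device_id, reading, recorded_at) "
        "VALUES (%s, %s, %s)",
        (event["device_id"], event["reading"], event["recorded_at"]),
    )
    conn.commit()
```

In practice a flow like this would batch inserts and handle consumer offsets deliberately rather than committing per message; the per-message commit here keeps the sketch short.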
will be working on complex data problems in a challenging and fun environment, using some of the latest Big Data open-source technologies like Apache Spark, as well as Amazon Web Services technologies including Elastic MapReduce, Athena, and Lambda to develop scalable data solutions. Key Responsibilities: Adhering to Company Policies and Procedures with respect to Security, Quality, and …
and customise them for different use cases. Develop data models and Data Lake designs around stated use cases to capture KPIs and data transformations. Identify relevant AWS services (Amazon EMR, Redshift, Athena, Glue, Lambda) to design an architecture that can support client workloads/use cases; evaluate pros/cons among the identified options to arrive at …
Use Terraform to automate infrastructure provisioning, deployment, and configuration, ensuring efficiency and repeatability in cloud environments. Database Design & Optimisation: Design and optimise complex SQL queries and relational databases (e.g., Amazon Redshift, PostgreSQL, MySQL) to enable fast, efficient data retrieval and analytics. Data Transformation: Apply ETL/ELT processes to transform raw financial data into usable insights for business intelligence … understanding of data engineering concepts, including data modelling, ETL/ELT processes, and data warehousing. Proven experience with AWS services (e.g., S3, Redshift, Lambda, ECS, ECR, SNS, EventBridge, CloudWatch, Athena, etc.) for building and maintaining scalable data solutions in the cloud. Technical Skills (must have): Python: Proficient in Python for developing custom ETL solutions, data processing, and integration with …
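A minimal sketch of a custom Python ETL step along the lines this listing describes, using boto3 to run an Athena query and poll for completion; the database, query, and result bucket names are placeholders:

```python
import time

import boto3

# Kick off an Athena query and wait for a terminal state.
# All names below are placeholders.
athena = boto3.client("athena", region_name="eu-west-2")

response = athena.start_query_execution(
    QueryString=(
        "SELECT account_id, SUM(amount) AS total "
        "FROM transactions GROUP BY account_id"
    ),
    QueryExecutionContext={"Database": "finance_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query succeeds, fails, or is cancelled.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    # Skip the header row that Athena returns first.
    for row in results["ResultSet"]["Rows"][1:]:
        print([col.get("VarCharValue") for col in row["Data"]])
```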
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
Key Responsibilities Design, build, and maintain robust data pipelines using AWS services (Glue, Lambda, Step Functions, S3, etc.) Develop and optimize data lake and data warehouse solutions using Redshift, Athena, and related technologies Collaborate with data scientists, analysts, and business stakeholders to understand data requirements Ensure data quality, governance, and compliance with financial regulations Implement CI/CD pipelines … Proven experience as a Data Engineer working in cloud environments (AWS) Strong proficiency with Python and SQL Extensive hands-on experience with AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools. Familiarity with DevOps practices and infrastructure-as-code (e.g., Terraform, CloudFormation) Solid understanding of data modeling, ETL frameworks, and …
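A bare-bones sketch of the kind of AWS Glue PySpark job this role involves, assuming Glue's standard job boilerplate; the catalog database, table name, and output path are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job setup; JOB_NAME is supplied by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued table, drop incomplete rows, and write partitioned Parquet.
trades = glue_context.create_dynamic_frame.from_catalog(
    database="market_data", table_name="raw_trades"
)
cleaned = trades.toDF().dropna(subset=["trade_id", "executed_at"])

cleaned.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-curated-bucket/trades/"
)

job.commit()
```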
Collaborate with development teams to design and implement automated tests for microservices, emphasizing Spring Boot and Java-based architectures. Implement testing strategies for AWS data lakes (e.g., S3, Glue, Athena) with a focus on schema evolution, data quality rules, and performance benchmarks, prioritizing data lake testing over traditional SQL approaches. Automate data tests within CI/CD workflows to … maintain scalable test automation frameworks, with a focus on backend, API, and data systems using tools like Pytest and Postman. Expertise in Pandas, SQL, and AWS analytics services (Glue, Athena, Redshift) for data profiling, transformation, and validation within data lakes. Solid experience with AWS (S3, Lambda, EMR, ECS/EKS, CloudFormation/Terraform) and understanding of cloud-native architectures …
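An illustrative pytest-style data-quality suite of the kind this listing describes, using pandas against a local sample extract; the fixture path, columns, and rules are made up:

```python
import pandas as pd
import pytest


@pytest.fixture(scope="module")
def orders():
    # In a real suite this might be an Athena or Glue extract; a local
    # Parquet sample stands in here, and the path is a placeholder.
    return pd.read_parquet("tests/fixtures/orders_sample.parquet")


def test_primary_key_is_unique(orders):
    assert orders["order_id"].is_unique


def test_required_columns_not_null(orders):
    for column in ("order_id", "customer_id", "order_date"):
        assert orders[column].notna().all(), f"nulls found in {column}"


def test_amounts_within_expected_range(orders):
    assert orders["amount"].between(0, 1_000_000).all()
```

Wired into a CI/CD job, checks like these fail the pipeline before bad data reaches downstream consumers.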
case to adopt new technologies Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS Aurora/Redshift/Athena/S3) Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed Qualifications/Skills Required Advanced degree in computer science …
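By way of illustration only, a minimal FastAPI service in the spirit of the tooling described above; the endpoint and the risk figures it returns are invented, with a hard-coded dict standing in for the relational backend:

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="risk-tools")

# Placeholder for a query against the relational backend (e.g. Aurora/Redshift).
PORTFOLIO_VAR = {"macro": 1.2e6, "credit": 8.4e5}


@app.get("/portfolios/{name}/var")
def value_at_risk(name: str) -> dict:
    if name not in PORTFOLIO_VAR:
        raise HTTPException(status_code=404, detail="unknown portfolio")
    return {"portfolio": name, "value_at_risk": PORTFOLIO_VAR[name]}
```

Run locally with `uvicorn app:app` (assuming the file is saved as app.py).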
organizational levels. Analytical, organizational, and problem-solving skills. Experience with data observability tools like Grafana, Splunk, AWS CloudWatch, Kibana, etc. Knowledge of container technologies such as Docker, Kubernetes, and Amazon EKS. Education Requirements: Bachelor's Degree in Computer Science, Engineering, or related field, or at least 8 years of equivalent work experience. 8+ years of IT data/system …
structured queries Hands-on experience with DBT, building and maintaining modular, scalable data models that follow best practices Strong understanding of dimensional modelling Familiarity with AWS data services (S3, Athena, Glue) Experience with Airflow for scheduling and orchestrating workflows Experience working with data lakes or modern data warehouses (Snowflake, Redshift, BigQuery) A pragmatic problem solver who can balance technical …
data architecture principles and how these can be practically applied. Experience with Python or other scripting languages Good working knowledge of the AWS data management stack (RDS, S3, Redshift, Athena, Glue, QuickSight) or the Google data management stack (Cloud Storage, Airflow, BigQuery, Dataplex, Looker) About Our Process We can be flexible with the structure of our interview process if …
and building agent AI systems Our technology stack Python and associated ML/DS libraries (scikit-learn, NumPy, LightGBM, Pandas, TensorFlow, etc.) PySpark AWS cloud infrastructure: EMR, ECS, S3, Athena, etc. MLOps: Terraform, Docker, Airflow, MLflow, Jenkins More Information Enjoy fantastic perks like private healthcare & dental insurance, a generous work-from-abroad policy, 2-for-1 share purchase plans …
and data ingestion tools such as Airflow and Stitch, along with Python scripting for integrating diverse data sources. Large-scale data processing: Proficient with distributed query engines like AWS Athena or Spark SQL for working with datasets at the scale of billions of rows. Event streaming data: Experienced in working with live streamed event data, including transforming and modeling real …
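A skeletal Airflow DAG of the kind such ingestion work typically involves, assuming Airflow 2.4+; the task bodies, DAG id, and schedule are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder: pull a batch of streamed events from the source system.
    print("extracting events for", context["ds"])


def load_to_warehouse(**context):
    # Placeholder: write transformed rows to the warehouse.
    print("loading events for", context["ds"])


with DAG(
    dag_id="event_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_events", python_callable=extract_events
    )
    load = PythonOperator(
        task_id="load_to_warehouse", python_callable=load_to_warehouse
    )
    extract >> load
```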
experience and a passion for developing and operating data-oriented solutions using Python, Airflow/Composer, Kafka, Snowflake, BigQuery, and a mix of data platforms such as Spark, AWS Athena, Postgres, and Redis. Excellent SQL development, query optimization, and data pipeline development skills required. Strong experience using public cloud platforms including AWS and GCP is required; experience with Docker …
London, England, United Kingdom Hybrid / WFH Options
Zenobe Energy Ltd
Excellent professional communication, reporting, and presentation skills Excel proficiency Desirable but non-essential skills: Additional software engineering, cloud platform & data skills (e.g. Linux, SQL, AWS (S3, CDK, Lambda, Glue, Athena), PySpark) Master's degree, PhD, or other additional qualification WORKING AT ZENOBE We're passionate about sustainability and are proud to offer Team Zenobē a pioneering and collaborative working …
Amazon Last Mile - Routing and Planning DE Design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc. • Extract huge volumes of structured and unstructured data from various sources (relational/non-relational/NoSQL databases) and message streams … support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
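A rough PySpark sketch of the kind of parallel relational-source extraction described in this listing; the JDBC URL, credentials, partition bounds, and output location are all placeholders:

```python
from pyspark.sql import SparkSession

# Pull a large relational table in parallel and stage it as Parquet on S3.
# Connection details and names below are placeholders.
spark = SparkSession.builder.appName("extract-orders").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/orders_db")
    .option("dbtable", "public.orders")
    .option("user", "reader")
    .option("password", "secret")
    # Split the read across executors by a numeric key.
    .option("partitionColumn", "order_id")
    .option("lowerBound", "1")
    .option("upperBound", "100000000")
    .option("numPartitions", "32")
    .load()
)

orders.write.mode("overwrite").parquet("s3://example-raw-bucket/orders/")
```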
Atlanta, Georgia, United States Hybrid / WFH Options
Pyramid Consulting Inc
and regulations. Optimize data storage, processing, and retrieval mechanisms for performance and cost-efficiency. Automate deployment, monitoring, and maintenance tasks using AWS services such as AWS Glue, AWS Lambda, Amazon EMR, etc. Conduct performance tuning and optimization of data processing workflows to ensure high availability and reliability. Stay up to date with the latest AWS technologies and trends in … Proven experience as a Data Engineer, Data Architect, or similar role with a focus on AWS technologies. In-depth knowledge of AWS services such as S3, Glue, EMR, Redshift, Athena, Kinesis, etc. Strong programming skills in languages such as Python, Scala, or Java. Hands-on experience with big data processing frameworks like Apache Spark, Apache Hadoop, etc. Experience with …
and value. Role Overview The Principal Data Architect will design and implement solutions using a range of AWS infrastructure, including S3, Redshift, Lambda, Step Functions, DynamoDB, AWS Glue, RDS, Athena, Kinesis, and QuickSight. We also make wide use of other tech such as Snowflake, DBT, Databricks, Informatica, Matillion, Airflow, Tableau, Power BI, etc. Transformational Leadership: Lead and guide the development of cloud … ML) Designing & developing data models aligned to the functional and non-functional requirements Work closely with other members of an agile deployment team Selection & configuration of appropriate base technologies (e.g. Amazon Redshift, RDS) Selection & application of appropriate standards & principles Capture & implementation of functional & non-functional requirements Expert in data modelling and the latest data trends including Data Warehouses, Data Lakes, 3NF …
with data privacy regulations. Technical Competencies The role is a hands-on technical leadership role with advanced experience in most of the following technologies. Cloud Platforms: AWS (Amazon Web Services): knowledge of services like S3, EC2, Lambda, RDS, Redshift, EMR, SageMaker, Glue, and Kinesis. Azure: proficiency in services like Azure Blob Storage, Azure Data Lake, VMs, Azure … Lake Formation, Azure Purview. Data Security Tools: AWS Key Management Service (KMS), Azure Key Vault. Data Analytics & BI: Visualization Tools: Tableau, Power BI, Looker, and Grafana. Analytics Services: AWS Athena, Amazon QuickSight, Azure Stream Analytics. Development & Collaboration Tools: Version Control: Git (and platforms like GitHub, GitLab). CI/CD Tools: Jenkins, Travis CI, AWS CodePipeline, Azure DevOps.