BI tools (Tableau, Oracle DV, etc.) - The work involves on-premises and cloud platform environments, covering relational databases (PostgreSQL, Oracle) and file stores (AWS S3). - Which of your skills will be used? - Building data warehouse solutions (using Oracle data warehouse technology) - Developing and enhancing ETL/ELT and …
London 3 days a week; you must have a valid visa as we are not able to sponsor. Technical stack: Python, PostgreSQL, Azure Databricks, AWS (S3), Git, Azure DevOps CI/CD, Apache Airflow. Skills: years of experience in Python scripting and in developing applications in Python; exposure to Python-oriented algorithms, … Requests, etc.; SQL programming with PostgreSQL; knowledge of unit testing using PyTest; DevOps experience with CI/CD, Jenkins, Git; working with AWS (S3) and Azure Databricks; experience delivering projects with Agile and Scrum methodology; ability to coordinate with teams across multiple locations and time zones; interpersonal …
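For illustration, a minimal PyTest unit test of the kind this listing asks for; the add function and both tests are hypothetical, not taken from the ad:

```python
import pytest

def add(a: float, b: float) -> float:
    """Hypothetical function under test."""
    return a + b

def test_add_integers():
    assert add(2, 3) == 5

def test_add_mixing_str_and_int_raises():
    # str + int raises TypeError in Python, a useful negative case
    with pytest.raises(TypeError):
        add("2", 3)
```

Saved as test_example.py, this runs with a bare `pytest` invocation.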
key. Skills and experience we’re looking for: Good understanding and hands-on experience with cloud technologies in AWS: EMRFS, Glue Data Catalog, S3, SQS, Lambda (AWS preferable). Good experience in modern OLAP technologies like Snowflake and lakehouse implementations with AWS integrations. Good to have experience in …
codebase and refactor as necessary to improve maintainability and efficiency. Implement event-driven design principles, utilizing AWS services such as SQS, SNS topics, and S3 for object storage. Develop and optimize MySQL queries to manage database interactions effectively. Write automated unit tests using PHPUnit to ensure code reliability and …
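The listing's stack is PHP/PHPUnit, but the SNS-to-SQS fan-out pattern it describes is language-agnostic; a minimal boto3 sketch, assuming a topic and subscribed queue already exist (the ARN and queue URL below are placeholders):

```python
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Hypothetical identifiers -- substitute your own topic ARN and queue URL.
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:order-events"
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/order-worker"

# Producer: publish a domain event to the SNS topic.
sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"order_id": 42, "status": "paid"}))

# Consumer: long-poll the SQS queue subscribed to the topic.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    body = json.loads(msg["Body"])      # SNS wraps the payload in an envelope
    event = json.loads(body["Message"])  # the original published message
    print("processing", event)
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```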
on experience in shaping and designing data solutions (data analytics, data integration, data platform) using cloud-based hyperscalers, with emphasis on AWS services including S3, Redshift, Lambda, Step Functions, DynamoDB, AWS Glue, RDS, Athena, Kinesis, QuickSight. Hands-on experience designing solutions using Snowflake and dbt. Familiarity with Databricks …
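For context, a minimal sketch of querying Snowflake from Python with the snowflake-connector-python package; the account, credentials, and warehouse names are placeholders, not from the listing:

```python
import snowflake.connector

# Placeholder connection details for illustration only.
conn = snowflake.connector.connect(
    user="ANALYTICS_USER",
    password="***",
    account="myorg-myaccount",
    warehouse="ANALYTICS_WH",
    database="RAW",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT current_version()")
    print(cur.fetchone())
finally:
    conn.close()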
as AWS IAM SME; AWS RBAC management and implementation; Azure Entra ID/IAM/RBAC management and implementation; Use of cloud storage technologies - S3, blob storage; AWS and Azure cross-platform logging and monitoring, syslog; Use of Microsoft Sentinel and use of Microsoft Defender; Use of established CI … Experience in AWS RBAC management and implementation; Experience in Azure Entra ID/IAM/RBAC management and implementation; Experience in cloud storage technologies - S3, blob storage; Experience in AWS and Azure cross-platform logging and monitoring, syslog; Experience in Microsoft Sentinel and Microsoft Defender; Experience in use of …
Nottingham, Nottinghamshire, East Midlands, United Kingdom
Experian Ltd
AWS Cloud Development Kit. You'll use a variety of services such as SNS, SQS and Lambda for event streams, Amazon Redshift, S3 and AWS Glue for data storage & reporting and Amazon SageMaker for our pioneering machine learning products. We are a growing team, and … our products and the communication skills to work with our colleagues across ECS to help them use our services. Python and AWS services such as S3, Kinesis, Lambda, Redshift, DynamoDB, Glue and SageMaker; Infrastructure-as-Code tools and approaches (we use the AWS CDK with CloudFormation); Data processing frameworks such …
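A minimal AWS CDK (v2, Python bindings) stack of the kind the ad describes, wiring an SNS topic to an SQS queue for an event stream; the stack and construct names are illustrative:

```python
from aws_cdk import App, Stack, aws_sns as sns, aws_sqs as sqs
from aws_cdk import aws_sns_subscriptions as subs
from constructs import Construct

class EventStreamStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        topic = sns.Topic(self, "EventsTopic")
        queue = sqs.Queue(self, "EventsQueue")
        # Subscribe the queue so every published event lands in SQS.
        topic.add_subscription(subs.SqsSubscription(queue))

app = App()
EventStreamStack(app, "EventStreamStack")
app.synth()  # emits the corresponding CloudFormation template
```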
cloud environments. Strong knowledge of AWS services relevant to data architecture, such as Amazon Redshift, Amazon Athena, Amazon S3, AWS Glue, and AWS Lambda. Experience designing and implementing data lakes, data warehouses, and analytics solutions on AWS. Proficiency in data modeling, SQL, and …
Linux system administration skills Experience with OpenTelemetry Experience of managing Kubernetes clusters and containerisation AWS and IaC: Terraform, CloudFormation, VPC, IAM, EC2, EKS, Lambda, RDS, S3, CloudWatch, Puppet, Docker Experience building and running monitoring infrastructure at a large scale, for example Elasticsearch clusters, Prometheus, Kibana, Grafana, etc. Web applications and …
Science, Information Technology, or a related field (or equivalent experience). Proven experience working with AWS services and technologies, including EC2, ECS, EKS, ECR, S3, Lambda, EFS, DynamoDB, RDS, KMS, ELB, Cognito, CodeDeploy, and VPC. Proficiency in Python programming. Knowledge of OIDC (OpenID Connect) for authentication and authorization. Experience …
regulations Infrastructure as Code, Ansible, Terraform and Containerisation, Docker, Kubernetes Experience with AWS and Azure cloud components and services integration (RDS/Azure DB, S3/Azure Blob) Knowledge of authentication and biometric system design, implementation and standards e.g. FIDO, NIST, ITL, security token engineering (JSON Web Tokens, JWT …
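A minimal JWT issue-and-verify sketch using the PyJWT library; the secret and claims below are placeholders for illustration only:

```python
import jwt  # pip install pyjwt

SECRET = "change-me"  # placeholder signing key

# Issue a signed token carrying hypothetical claims.
token = jwt.encode({"sub": "user-123", "role": "analyst"}, SECRET, algorithm="HS256")

# Verify: raises jwt.InvalidSignatureError if the token was tampered with.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])
```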
large scale, robust production software in a fast-changing environment with rapid release cycles. Knowledge of Java, Cucumber, DynamoDB, Redis and Redshift Cloud: AWS (S3, EC2, Lambda, AWS Glue/Spark, IAM, CloudWatch, MSK, Managed Airflow, Athena, Kinesis) Experience of writing and taking responsibility for technical documentation. Knowledge and …
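Illustrative PySpark of the sort run on AWS Glue or EMR: read Parquet from S3, aggregate, write back. The bucket paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregates").getOrCreate()

# Hypothetical input: raw event data partitioned under an S3 prefix.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Count events per calendar day.
daily = (
    events.groupBy(F.to_date("event_ts").alias("day"))
    .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_counts/")
spark.stop()
```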
Stanmore, England, United Kingdom Hybrid / WFH Options
Sky
and improving existing processes. Experience with monitoring and logging tools, such as Prometheus and Grafana. Desirable experience with AWS, including services such as EKS, S3, RDS, Lambda, etc. to support our ongoing cloud migration. Bonus if you have experience and enjoy backend development, as we support contributions in this …
others through mentoring and coaching. Additional Information Our Tech Stack - (experience with all of the below is not required) AWS: Lambda, ECS, DynamoDB, S3, EventBridge, SQS, SNS, API Gateway, VPC, Security Hub, Control Tower and CloudTrail Extra: CDK as Infrastructure as Code, Snowflake, LaunchDarkly, PagerDuty Programming …
backend systems for data processing at scale within AWS. In-depth knowledge and hands-on experience with foundational AWS services, such as ALB, ECS, S3, ElastiCache, IAM and CloudWatch. Demonstrated expertise in implementing and maintaining Kafka-based event-driven services. Proficiency in both relational and NoSQL databases, with an understanding …
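A sketch of a Kafka event consumer using the kafka-python package; the topic name, brokers, and group id are placeholders:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "order-events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="order-workers",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    # Each record is one event. Processing should be idempotent, since
    # at-least-once delivery can replay messages after a rebalance.
    print(record.topic, record.partition, record.offset, record.value)
```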
technologies, such as Dremio, Snowflake, Iceberg, (Py)Spark/Glue/EMR, dbt and Airflow/Dagster. Experience with cloud providers. Familiarity with AWS S3, ECS and EC2/Fargate would be considered particularly beneficial. Extensive experience with database technologies (SQL/NoSQL). Proven ability to work collaboratively …
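A minimal Apache Airflow DAG sketch; the dag_id, schedule, and task callables are illustrative, not from the listing:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull data from source")

def load() -> None:
    print("write data to warehouse")

# Airflow 2.4+ uses `schedule`; older 2.x releases use `schedule_interval`.
with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # run extract before load
```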
Spark, Apache Spark, Apache Druid, BigQuery and Redis. Familiarity with cloud technologies, ideally AWS and technologies such as EC2, ECS, EMR, AWS Lambda, DynamoDB, S3, Kinesis, SQS, SES, CloudWatch … Knowledge or experience with big data technologies such as Spark, Hadoop, Redshift, Snowflake, Kafka, Flink, Druid, ClickHouse … is highly desirable. …
backend systems for data processing at scale within AWS. In-depth knowledge and hands-on experience with foundational AWS services, such as ALB, ECS, S3, EFS, ElastiCache, IAM and CloudWatch. Strong skills in Infrastructure as Code (IaC) tools, such as Terraform or AWS CloudFormation. Hands-on experience with Jenkins and …
with enterprise direction. Proven work experience as a Data Solution Architect or similar role in building data architecture, data models, and pipelines using AWS services (S3, Lambda, Glue, Athena, QuickSight, etc.) or similar native Azure or GCP services. Strong experience with Snowflake or Databricks (including PySpark) along with tools like …
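An illustrative boto3 call running an Athena query over data in S3; the database, query, and output bucket are placeholders:

```python
import time
import boto3

athena = boto3.client("athena")

# Kick off the query; Athena writes results to the given S3 location.
resp = athena.start_query_execution(
    QueryString="SELECT day, COUNT(*) FROM events GROUP BY day",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state, then fetch the rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```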
trading, fixed income, market risk. Good knowledge of database design and data structures. Proficiency with SQL-related technologies. Knowledge of AWS such as ECS, S3, DynamoDB. Evidence of exceptional analytical and problem-solving skills. High attention to detail and ability to work on multiple fast-paced projects simultaneously. Proactive …
s Cross office) at least once a month. Work using managed, serverless services in AWS (Lambda, IoT/MQTT, API Gateway, MongoDB Atlas, DynamoDB, S3). Code using a Node.js stack - mainly TypeScript, with some JavaScript and React, in an automated test-first environment. Work closely with product managers …
validate datasets from a variety of global sources using Python, C, or C++ and store the resultant datasets both on-prem and in the cloud (S3) Engage with stakeholders across Optiver globally to build data solutions that meet the business needs Ensure that end-to-end data solutions are deployed …
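For illustration, pushing a locally validated dataset to S3 with boto3; the file, bucket, and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="/data/out/prices_2024-01-01.parquet",  # local, on-prem copy
    Bucket="example-research-data",                  # placeholder bucket
    Key="curated/prices/2024-01-01.parquet",
)
```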
identifying the necessary edge cases that need to be tested in order to fully understand the data; programming in Python or Ruby, utilizing AWS S3, MongoDB, PostgreSQL, AWS Redshift or similar database technologies; using Jupyter notebooks and one or more statistical visualization or graphing toolkits such as Excel, Qlik …
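A small pandas sketch of the edge-case checks such a role implies; the file and column names are hypothetical:

```python
import pandas as pd

df = pd.read_csv("trades.csv", parse_dates=["trade_ts"])

# Edge cases worth asserting on before the data is trusted downstream:
assert df["trade_id"].is_unique, "duplicate trade ids"
assert df["price"].gt(0).all(), "non-positive prices"
assert df["trade_ts"].is_monotonic_increasing, "out-of-order timestamps"

print(df.isna().sum())  # per-column null counts, to spot missing data
```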