Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
KO2 Embedded Recruitment Solutions LTD
work with: • Linux distributions: Debian, Ubuntu, Red Hat Enterprise Linux • Web stacks: Apache, Nginx, MySQL, PostgreSQL, PHP, Python • Networking: Static/dynamic routing, DNS, VPNs, and firewalls • Containers & automation: Docker, Kubernetes, and CI/CD pipelines • Cloud platforms: AWS, Azure, and Google Cloud • Infrastructure: High-availability clusters, Pacemaker, filesystem replication, hybrid cloud environments, remote desktops • Internal tools: Request Tracker (RT … Linux system administration and automation • Strong background in web and database technologies in production environments • Solid understanding of networking concepts and security best practices • Practical experience with container technologies (Docker, Kubernetes) • Familiarity with one or more cloud platforms (AWS, Azure, GCP) • Confidence in diagnosing and solving infrastructure problems • Ability to communicate technical solutions clearly to both technical and non-technical audiences …
London (City of London), South East England, United Kingdom
HCLTech
and lead the development of scalable data pipelines and platforms on AWS. The ideal candidate will have deep expertise in PySpark, Glue, Athena, AWS Lake Formation, data modelling, DBT, Airflow, and Docker, and will be responsible for driving best practices in data engineering, governance, and DevOps. Key Responsibilities: • Lead the design and implementation of scalable, secure, and high-performance data pipelines using … 10+ years of experience in data engineering. • Strong hands-on experience with AWS services: S3, Glue, Lake Formation, Athena, Redshift, Lambda, IAM, CloudWatch. • Proficiency in PySpark, Python, DBT, Airflow, Docker, and SQL. • Deep understanding of data modelling techniques and best practices. • Experience with CI/CD tools and version control systems such as Git. • Familiarity with data governance, security, and compliance …
as well as RHEL and SLES operating systems. • Knowledge of Microsoft SQL Server 2008 R2 and later. • Knowledge of container technologies including but not limited to Docker, Kubernetes, and Rancher • Knowledge of IaaS in the cloud using MS Azure, AWS, or GCP • Experience with hardening and improving the security of Windows operating systems and hardware platforms, local area networks … VMware, Nutanix, or similar certification (e.g., VCP, NCP) • Strong background in server virtualization (e.g., VMware, Nutanix, OpenShift, Harvester, Proxmox, MS Hyper-V, Citrix, Azure, AWS) • Experience with Docker and Kubernetes • Experience with Terraform or Ansible • Experience with Dell PowerFlex • Experience working in a large enterprise environment • Knowledge of storage area networks, including network operating systems and SAN protocols …
Java Backend Development - Skills with functional Java (version 8+) and Spring (ideally Spring Boot) - Agile ways of working such as Scrum or Kanban in cross-functional teams - Some knowledge of Docker, EKS, AWS (public cloud) or Kafka Energy and Experience: A growth mindset that is curious and passionate about technologies and enjoys challenging projects on a global scale Challenge the Status … beyond traditional solutions Builder: Experience building and deploying modern services and web applications with quality and scalability Learner: Constant drive to learn new technologies such as Angular, React, Kubernetes, Docker, etc. Partnership: Experience collaborating with Product, Test, DevOps, and Agile/Scrum teams Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race …
A degree in Science, Technology, Engineering, or Mathematics (STEM) and a minimum of 2 years of Software Engineering experience. Experience with Java, Git, and Bash shell scripting. Experience with Docker, Helm, and Kubernetes for development and management of containerized applications. Experience with AWS Cloud Technologies. An active and transferable TS/SCI security clearance is required prior to the start … and managing scripts that interact with container systems. Experience with writing Bash shell scripts for automating tasks related to containerization. Experience with container technology, including creating, managing, and optimizing Docker containers. Experience with container orchestration, including deploying, scaling, and managing containers in a Kubernetes cluster. Experience with EC2, EKS, S3, and other cloud services to support containerized environments. Strong troubleshooting …
1. Manage the end-to-end development, design, and maintenance of robust, scalable, and high-availability architecture and software solutions using NodeJS, NestJS, PostgreSQL, AWS Lambda, Docker, Redis, and other related technologies. 2. Optimize system architecture and website performance by collecting and interpreting data, implementing A/B tests, troubleshooting issues, and staying abreast of industry trends to contribute innovative … key decisions on front-end architecture and tooling. b. Building out new UI flows using React and TypeScript. c. Implementing front-end tests using Jest and React. d. AWS, Docker, Redis, AWS Lambda, and JavaScript. e. Collaborating with the engineering team on prototype development. f. Performing quality control and resolving structural design issues throughout the implementation stages. g. Collecting and …
knowledge of machine learning algorithms (supervised, unsupervised, deep learning), including model evaluation, explainability, and selection for business-critical use cases Strong hands-on experience with cloud infrastructure (AWS), containerization (Docker), and orchestration (Jenkins, Airflow) Proven capability in MLOps, including CI/CD pipelines, model monitoring, versioning, and automated retraining Experience deploying and serving models through APIs (e.g., Flask, FastAPI) in … can also use privately; our super-friendly Employee Tech team will ensure that your tech needs are always taken care of Modern, cloud-native tooling and infrastructure (including AWS, Docker, and GitHub Actions) that enable fast, scalable experimentation and deployment JobRad and Urban Sports Club Regular virtual and in-person team and company events to have fun, share, and celebrate …
IT Depot has an opportunity for a Cloud Solutions Architect - Mid who wants to make a difference in the defense of the nation by applying their talents and experience in a fast-paced, dynamic environment in support of the Federal …
Washington, Washington DC, United States Hybrid / WFH Options
Davis Laine, LLC
through the Authority to Operate (ATO) process on classified networks. Requirements 6-10 years of experience. Experience and knowledge of web application development using Python and Django. Experience using Docker/Docker Compose. Knowledge and experience of classified government networks and Amazon Web Services (AWS), including RDS Aurora, ECS/EKS, VPC, and ALB. Linux/UNIX. Familiarity with Government …
Fairfax, Virginia, United States Hybrid / WFH Options
Synertex LLC
through the Authority to Operate (ATO) process on classified networks. REQUIREMENTS: 6-10 years of experience. Experience and knowledge of web application development using Python and Django. Experience using Docker/Docker Compose. Knowledge and experience of classified government networks and Amazon Web Services (AWS), including RDS Aurora, ECS/EKS, VPC, and ALB. Linux/UNIX. Familiarity with Government …
must be performed 100% onsite; however, flexible work schedules are permitted. 5 years' experience and a related BS degree, or 3 years and an MS. Desired Skills and Qualifications: • Angular 2+ • Kubernetes • Docker • Continuous Integration and Deployment tools: Git, Jenkins, Puppet, Docker • Agile software development methodologies • SAFe processes • JEE/J2EE • RESTful and/or SOAP web services • JSON, XML Pay Rate: In …
developer with a deep understanding of secure AI/ML pipelines and modern DevSecOps practices. Key Responsibilities: Design, develop, and deploy RAG-based AI applications Implement containerized services using Docker, Podman, or containerd Deploy and maintain services under Kubernetes or Docker Compose Work with LLMs, embedding models, and knowledge retrieval frameworks Develop in Python and Golang Manage CI/CD …
assist with the maintenance of existing legacy systems and support the transition to a cloud-native architecture. This role requires expertise in CI/CD automation, container orchestration using Docker and Kubernetes, Linux system administration, and infrastructure-as-code in cloud-agnostic environments. The position will require an additional security scrub prior to onboarding. Tasks Performed: • Automate build, test, and … deployment processes using CI/CD tools. • Deploy and monitor applications using Docker and Kubernetes. • Maintain legacy systems and assist in migrating to cloud-native architecture. • Perform system updates, scripting, and performance tuning on Linux servers. • Provision and manage infrastructure using infrastructure-as-code tools in cloud-agnostic environments. • Collaborate in teams and manage code with Git. • Apply security best practices … Education, Skills and Qualifications: • Demonstrated 5+ years of DevOps experience. • Demonstrated 3+ years of Continuous Integration/Continuous Delivery (CI/CD) experience. • Demonstrated 3+ years of Kubernetes and Docker experience. • Demonstrated experience in supporting cloud-agnostic environments. • Demonstrated experience in integrating automatic test tools into the CI/CD pipeline. • Demonstrated experience with Linux Operating Systems. • Experience developing, implementing …
or maintaining GitLab CI/CD pipeline code Beyond AWS Lambda, hands-on experience with AWS serverless architectures (e.g. API Gateway, load balancing, autoscaling) Configuring, troubleshooting, and/or deploying Docker containers Python/scripting experience, especially with boto3 (AWS). Desired experience: Deployment of microservices via a DevSecOps pipeline within an architecture in which a single instance of a software … application serves multiple customers Develop, update, and maintain GitLab CI/CD pipeline code Configure, troubleshoot, and deploy Docker containers Review code within Git and extract/create engineering documentation as needed Support Developer deployments Provide technical guidance for all capabilities/microservices Active Top Secret with the ability to obtain a TS/SCI poly clearance Linux DevOps background …
an instantly recognisable consultancy who require a C# .Net Developer with hands-on experience with Apache Kafka and Confluent Kafka, and a strong background in cloud-native development using Docker and Azure. Key Requirements: Proven experience as a C# .Net Developer Strong experience in building and managing Kafka clusters and topics. Recent experience developing microservice-based solutions Strong knowledge of version … similar) Experience working with structured data formats and APIs (JSON Payloads). Knowledge of schema evolution and validation practices (JSON Schema Registry). Ability to containerize applications and manage Docker images. Experience deploying and managing services in Azure (e.g., AKS, Azure Functions, Event Hubs). Nice to have: Immediate availability Familiarity with Confluent Platform tools including Schema Registry, Kafka Connect …
Tech stack AWS (Core services - EC2, S3, IAM, etc.) Kubernetes (building and managing production clusters) Terraform (for full IaC provisioning) Python (scripting, automation) GitHub Actions (CI/CD pipelines) Docker & Helm (for containerised app deployments) What They’re Looking For Strong experience in AWS cloud infrastructure (ideally in a regulated or high-traffic environment) Hands-on Kubernetes know-how, specifically … with EKS. Solid IaC experience with Terraform or CloudFormation. Comfortable scripting in Python at a decent level. Experience with containerisation (Docker, Helm) and CI/CD (GitHub Actions or similar) A good communicator who enjoys working collaboratively across product and engineering The client is willing to take someone who doesn't have all the skills and provide an environment to …
applications, with plenty of opportunity to influence technical direction and drive improvement. Key Responsibilities Build and manage robust infrastructure using AWS services Deploy, maintain, and scale containerised applications using Docker and Kubernetes Automate infrastructure provisioning and updates using Terraform Manage message queuing systems with RabbitMQ and AWS SQS Implement system monitoring, logging, and alerting to ensure reliability and performance Support … across cloud platforms Continuously evaluate and optimise infrastructure for performance and cost Key Skills & Experience Proven experience in a DevOps or Cloud Engineering role Hands-on expertise with AWS, Docker, Kubernetes, and Terraform Strong understanding of CI/CD tools and modern development workflows Experience implementing monitoring and observability tools (e.g., Prometheus, Grafana) Solid grasp of cloud security, IAM, and …
the integration of large language models (LLMs) into data workflows. • Deploy and maintain applications within the AWS cloud environment, leveraging native development tools and services. • Implement containerized solutions using Docker and Kubernetes for consistent development and deployment. • Utilize Elasticsearch for efficient data indexing and retrieval across complex datasets. • Collaborate with data scientists, analysts, and other developers to ensure seamless integration … on JavaScript, Angular, and API integration. • Proficiency in developing and deploying applications in the AWS cloud environment using native development tools. • Hands-on experience with containerization technologies such as Docker and Kubernetes. • Strong working knowledge of Elasticsearch for data indexing, querying, and integration. • Demonstrated experience in AI/ML, including AI modeling with Large Language Models (LLMs) for advanced data …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
RecruitmentRevolution.com
with Azure's native AI stack (including AI Foundry, Search, Cosmos DB, and AKS), strong Python and C# skills, VS Code (ideally with AI tools like Copilot), familiarity with Docker, Kubernetes, and scalable cloud-native architecture, and a track record of working in agile teams using Azure DevOps for CI/CD and automated testing. Who we are: At Neologik … we're building the foundation for smarter, faster, and more adaptable enterprises. Key Responsibilities: • Build and deploy AI-first features using Azure AI Foundry, Azure AI Search, Cosmos DB, Docker, and AKS • Design and implement scalable architecture with cloud-native tools alongside a collaborative, fast-moving team • Code primarily in Python and C# - with bonus points if you can jump … focus on writing clean, maintainable, and well-structured code • Proficiency in Visual Studio Code, ideally with tools like GitHub Copilot or other AI-assisted development extensions • Solid understanding of Docker, Kubernetes, and modern cloud-native architectural patterns • Hands-on experience working in agile teams, with deep familiarity in CI/CD workflows, branching strategies, pull requests, and test automation using Azure DevOps