Proficiency in at least one programming language and familiarity with multiple programming paradigms (e.g., object-oriented, functional, procedural). Good understanding of cloud deployment. Proficiency with Git. Intermediate Python, including relevant libraries for data engineering and backend API development (e.g., FastAPI, Pydantic). Intermediate Docker, including a basic understanding of Docker storage and volumes and of Linux permissions. Effective communication skills through written …
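The FastAPI/Pydantic style of API development named above centres on typed request models. As a minimal, standard-library-only sketch of that validation pattern (the model and handler names here are hypothetical, with a dataclass standing in for a Pydantic schema):

```python
from dataclasses import dataclass

# Hypothetical request model; Pydantic would derive this validation
# automatically from the type hints.
@dataclass
class CreateUserRequest:
    username: str
    age: int

    def __post_init__(self):
        if not isinstance(self.username, str) or not self.username:
            raise ValueError("username must be a non-empty string")
        if not isinstance(self.age, int) or self.age < 0:
            raise ValueError("age must be a non-negative integer")

def handle_create_user(payload: dict) -> dict:
    # A FastAPI endpoint would receive the parsed model directly as a parameter;
    # here we construct it by hand from a raw dict.
    req = CreateUserRequest(**payload)
    return {"username": req.username, "age": req.age}
```

In FastAPI itself, declaring the model as an endpoint parameter is enough: the framework parses and validates the request body before the handler runs.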
architects, and DevOps teams to deliver robust streaming solutions. Required: • Hands-on experience with Apache Kafka (any distribution: open-source, Confluent, Cloudera, AWS MSK, etc.) • Strong proficiency in Java, Python, or Scala • Solid understanding of event-driven architecture and data streaming patterns • Experience deploying Kafka on cloud platforms such as AWS, GCP, or Azure • Familiarity with Docker, Kubernetes, and CI …
outcomes. About You / Requirements: Minimum of 5 years' experience in DevOps and Site Reliability Engineering within enterprise-scale environments. Proficiency in full-stack development using technologies such as Java, Python, JavaScript/TypeScript, and frameworks like React, Angular, or Node.js. Deep hands-on experience with cloud platforms (AWS, Azure, or GCP), infrastructure-as-code tools (Terraform, CloudFormation), and container orchestration …
concepts in own job family/job discipline. Strong understanding of developing cloud-native modern applications and software development practices, with exposure to any language (Java, C#, Node.js, Scala, Python, or Go preferred). Maintains in-depth knowledge of one or more cloud hyperscalers (i.e., AWS, Azure, GCP). Knowledge of building containerised applications and systems on modern container orchestration platforms …
cloud-based data platforms (e.g., AWS, Azure, GCP). Experience using Git for version control. Strong understanding of statistical analysis, machine learning, and predictive modelling techniques. Proficiency in Python and SQL programming languages. Proficiency with at least one cloud-based ML platform, such as SageMaker, Vertex AI, or Azure Machine Learning Studio. Good knowledge of DevOps practices and tools …
Experience with big data technologies such as Hadoop, Spark, or Hive. Familiarity with data warehousing and ETL tools such as Amazon Redshift, Google BigQuery, or Apache Airflow. Proficiency in Python and at least one other programming language such as Java or Scala. Willingness to mentor more junior members of the team. Strong analytical and problem-solving skills with the ability …
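ETL tooling such as Airflow orchestrates exactly the extract–transform–load steps this listing names. As a pure-Python sketch of one such pipeline stage (the function names and in-memory source/sink are hypothetical stand-ins for real connectors):

```python
def extract(rows):
    # Extract: pull raw records (a list stands in for a source table or API).
    return list(rows)

def transform(rows):
    # Transform: normalise names, coerce amounts, and drop incomplete records.
    return [
        {"name": r["name"].strip().title(), "amount": float(r["amount"])}
        for r in rows
        if r.get("name") and r.get("amount") is not None
    ]

def load(rows, sink):
    # Load: append the cleaned records to a destination
    # (a list stands in for a warehouse table).
    sink.extend(rows)
    return len(rows)

warehouse = []
raw = [{"name": "  alice ", "amount": "10.5"}, {"name": "", "amount": "3"}]
loaded = load(transform(extract(raw)), warehouse)
```

In Airflow each stage would become a task, with the scheduler handling ordering and retries rather than a direct function call chain.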
motivated teams, collaborating effectively and taking pride in your work. Strong problem-solving skills, viewing technology as a means to solve challenges. Proficiency in a programming language (e.g., Scala, Python, Java, C#) with an understanding of domain modelling and application development. Knowledge of data management platforms (SQL, NoSQL, Spark/Databricks). Experience with modern engineering tools (Git, CI/CD …)
security, and Engineering forums - Lead stand-ups and scrums - Identify process and system improvements. Essential Knowledge & Experience: - 10+ years in cloud infrastructure roles (hybrid/public) - Advanced knowledge of Python, Bash, Kubernetes, Docker - Proven leadership in incident resolution and performance - Experience of successfully leading a team. Desirable Knowledge & Experience: - Experience in financial services or regulated environments is a plus. We …
delivering impact through data engineering, software development, or analytics. Demonstrated success in launching and scaling technical products or platforms. Strong programming skills in at least two of the following: Python, SQL, Java. Commercial experience in client-facing projects is a plus, especially within multi-disciplinary teams. Deep knowledge of database technologies: distributed systems (e.g., Spark, Hadoop, EMR); RDBMS (e.g., SQL …)
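The Python-plus-SQL pairing this listing asks for can be exercised end to end with the standard library's `sqlite3` module. A small sketch (the table and column names are hypothetical):

```python
import sqlite3

# An in-memory database stands in for an RDBMS such as Postgres or SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, value INTEGER)")
conn.executemany(
    "INSERT INTO events (user, value) VALUES (?, ?)",
    [("a", 10), ("a", 5), ("b", 7)],
)

# Aggregate per user in SQL, then consume the result set in Python.
rows = conn.execute(
    "SELECT user, SUM(value) FROM events GROUP BY user ORDER BY user"
).fetchall()
conn.close()
```

The same parameterised-query pattern (`?` placeholders rather than string formatting) carries over directly to production database drivers.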
is: Do you "use/consume" these technologies, or are you the one that "provides" them to the rest of the organization? What You'll Bring TECHNOLOGIES: Programming Languages: Python. Experience with additional programming languages is a plus. Additional info: BCG offers a comprehensive benefits program, including medical, dental and vision coverage, telemedicine services, life, accident and disability insurance, parental …
job! MSc or PhD, or equivalent experience, in a quantitative field. Deep theoretical knowledge of statistical methods and ML algorithms and their practical applications. Strong proficiency in SQL and Python, especially with core ML libraries (e.g., scikit-learn, XGBoost, SciPy, PyTorch). Extensive hands-on experience taking advanced statistical/ML solutions from prototype to production and delivering high-impact …
Terraform (Expert). Containerisation: Kubernetes, Docker. CI/CD: GitHub Actions. Observability: Grafana, Prometheus, AWS CloudWatch, OpenTelemetry/distributed tracing. Scripting: Strong proficiency in at least one scripting language (e.g., Python, Go, Bash). Familiarity with JavaScript/TypeScript is a plus, as it's used across our stack. Data Services: Operational knowledge of managing databases like RDS (Postgres/MySQL …)
a related security engineering role. Deep expertise in secure software development, secure coding practices, and the OWASP Top 10/CWE Top 25. Strong technical proficiency in modern programming languages (e.g., Python, Java, JavaScript, Go, or C#). Experience with cloud-native security (AWS, Azure, GCP) and securing containerized environments (Docker, Kubernetes). Proficiency in security testing tools such as Burp Suite …
monitoring, logging, and cost management. Knowledge of data security, compliance, and governance in Azure, including Azure Active Directory (AAD), RBAC, and encryption. Experience working with big data technologies (Spark, Python, Scala, SQL). Strong problem-solving and troubleshooting skills. Excellent communication skills with the ability to collaborate with cross-functional teams to understand requirements, data solutions, data models and mapping …
level, and we are looking for data engineers with a variety of skills, including some of the following: Strong proficiency in at least one programming language (Python, Java, or Scala). Extensive experience with cloud platforms (AWS, GCP, or Azure). Experience with: data warehousing and lake architectures; ETL/ELT pipeline development; SQL and NoSQL databases; distributed computing …
techniques to communicate insights and solution designs effectively. Knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and their AI/ML services. Basic understanding of programming languages like Python and R is beneficial. Education and Experience: Bachelor's or Master's degree, or equivalent, in Computer Science, Data Science, Business Analytics, or a related field. Experience in …
business problems. Ability to derive meaningful insights from structured and unstructured datasets of varying sizes. Track record of implementing impactful models that drive sustained business results. Proficiency in the Python data science tech stack (pandas, scikit-learn, NumPy, and visualisation libraries). Experience working in a Linux-based cloud environment (e.g., GCP, Azure, AWS). Experience using Git version control. Communication …
in Computer Science, Software Engineering, or a related field. Experience: Proven experience as a Software Engineer, with a strong portfolio of successful projects. Proficiency in programming languages such as Python, Java, C++, or similar. Experience with web development frameworks (e.g., React, Angular, Django, Flask). Knowledge of cloud platforms (AWS, Azure, Google Cloud) is a plus. Problem-Solving: Strong analytical …
Strong hands-on experience with GitLab, Kubernetes, Docker/containers, Ansible, Packer, Terraform, Linux variants, and the command line. Programming Skills: Proven ability in at least one language (e.g., JavaScript, Python, Java). Desirable: Experience with Grafana, Prometheus, the Loki stack, Kubernetes certifications, web technologies, or AWS certifications. Benefits: Competitive salary, generous pension, private medical, flexible working, and professional development opportunities. Rates …
Hands-on experience with Infrastructure as Code (IaC) tools (Terraform, CloudFormation, ARM templates). Proficiency with container technologies like Docker and orchestration (Kubernetes, ECS, AKS, etc.). Strong scripting skills in Python, Bash, or PowerShell. Experience with monitoring and logging tools (CloudWatch, Datadog, Prometheus, ELK stack, etc.). Familiarity with CI/CD tools (GitLab CI, Jenkins, GitHub Actions, etc.). The successful candidate …
and reproducibility. Supporting the development of real-time and batch inference pipelines. Contributing to the scalability, efficiency, and reliability of ML infrastructure. KEY SKILLS AND REQUIREMENTS Strong experience in Python, Java, or Scala for backend development. Solid understanding of data processing and engineering workflows. Experience building APIs or services to support data or ML applications. Familiarity with ML model lifecycle …
Docker, Kubernetes). Experience with cloud platforms (AWS, GCP, or Azure). Solid understanding of Linux systems administration. Experience with Infrastructure as Code tools (Terraform, CloudFormation, etc.). Proficiency in scripting languages (Python, Bash, Go). Strong understanding of version control systems (Git). Experience with monitoring tools (Prometheus, Grafana, DataDog, or similar). Our cash compensation range for this role is $180,000 - $230,000.
skills in Infrastructure-as-Code (Terraform) - building, maintaining, and versioning reusable modules. - Experience with AWS and Azure services and their monitoring/integration strategies. - Scripting and automation skills using Python, PowerShell, Bash, etc. - Knowledge of REST APIs, automation pipelines, and CI/CD integration. - Proficiency in TypeScript (and JavaScript) to build extensions or automation tooling that integrates with engineering workflows.