Cambridge, Cambridgeshire, United Kingdom (Hybrid / WFH options)
Arm Limited
We're solving for reliability, compliance, performance, and speed, all at once. You'll be key to making it work. Required Skills: Knowledge of one or more programming languages (Java/Scala, TypeScript, Python). Proven experience operating distributed systems at scale in production. Cloud: AWS (primary), Kubernetes (future), Docker (current), Terraform. Excellent debugging skills across the network, systems, and data stack. Observability …
security). Working knowledge of AWS core services, including S3, EC2/EMR, IAM, Athena, Glue, or Redshift. Hands-on experience with Databricks Spark on large datasets, using PySpark, Scala, or SQL. Familiarity with Delta Lake, Unity Catalog, or similar data lakehouse technologies. Proficiency in Linux environments, including experience with shell scripting, basic system operations, and navigating file systems. Deep …
on Unix/Linux systems. Familiarity with procedural languages (e.g., C, C++, C#, Java) and scripting languages (e.g., Python). Interest or experience in functional programming (e.g., OCaml, Haskell, F#, Scala, ML). Understanding of software engineering best practices, including automated testing, code review, and CI/CD. A thoughtful approach to building scalable, maintainable, and correct systems. Preferred qualifications: Bachelor's …
Skills: Proficiency in object-oriented programming languages (e.g., C++, C#, Java) and scripting languages (e.g., Python). Additional Skills: Interest or experience in functional programming (e.g., OCaml, Haskell, F#, Scala, ML). Software Engineering Best Practices: Understanding of software engineering best practices, including automated testing, code review, and CI/CD. Approach: A thoughtful approach …
Key job responsibilities
• Design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue/Lake Formation, EMR/Spark/Scala, Athena, etc.
• Extract huge volumes of structured and unstructured data from various sources (relational/non-relational/NoSQL databases) and message streams, and construct complex analyses.
• Develop and … experience
- Experience with data modeling, warehousing, and building ETL pipelines
- 4+ years of SQL experience
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large …