you: 5+ years of experience in software development, with a strong focus on backend technologies and building distributed services. Proficiency in one or more programming languages including Java, Python, Scala, or Golang. Experience with columnar, analytical cloud data warehouses (e.g., BigQuery, Snowflake, Redshift) and data processing frameworks like Apache Spark is essential. Experience with cloud platforms like AWS, Azure, or …
3+ years of experience in framework development and building integration layers to solve complex business use cases. Technical Skills: Strong coding skills in one or more programming languages (Python, Scala, Spark, or Java). Experience working with petabyte-scale data sets and developing integration-layer solutions in Databricks, Snowflake, or similar large platforms. Experience with cloud-based data warehousing, transformation …
such as Hadoop, Spark, and Kafka, with a strong emphasis on Java development. Proficiency in data modeling, ETL processes, and data warehousing concepts. Experience with data processing languages like Scala, Python, or SQL. Familiarity with containerization technologies (Docker) and orchestration tools (Kubernetes). Strong knowledge of software development principles, including object-oriented design, design patterns, and clean code practices. Excellent …
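To ground the Spark/ETL requirements above, here is a minimal Scala sketch of the kind of batch pipeline such roles involve. It is only an illustration under assumed inputs: the paths, column names, and aggregation are invented, not taken from any posting.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, count, to_date}

// Minimal ETL sketch: read raw events, derive a date column, aggregate,
// and write the result back out. All paths and columns are hypothetical.
object DailyCountsJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("daily-counts").getOrCreate()

    val events = spark.read.parquet("s3://example-bucket/raw/events") // assumed input location
    val daily = events
      .withColumn("day", to_date(col("event_ts")))                   // assumed timestamp column
      .groupBy("day")
      .agg(count("*").as("event_count"))

    daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_event_counts")
    spark.stop()
  }
}

Submitted with spark-submit, this covers the basics these listings keep repeating: a distributed read, a transformation step, and an idempotent (overwrite) write.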
We'd love to hear from you if you: Thrive in a diverse, open, and collaborative environment where impact is as valuable as technical skill. Have proficient knowledge of Scala and the JVM ecosystem. Are familiar with functional programming paradigms and willing to adopt other languages (not only JVM languages). Have a consistent background in software development in high …
data processing and REST services to distribute data across PWM. Technologies used include:
Data Technologies: Kafka, Spark, Hadoop, Presto, Alloy (a data management and governance platform)
Programming Languages: Java, Scala, scripting
Microservice Technologies: REST, Spring Boot, Jersey
Build and CI/CD Technologies: Gradle, Jenkins, GitLab, SVN
Cloud Technologies: AWS CDK, Docker, Kubernetes (a plus)
HOW YOU WILL FULFILL YOUR POTENTIAL … with new technologies. SKILLS AND EXPERIENCE WE ARE LOOKING FOR: Bachelor's degree in Computer Science, Mathematics, Engineering, or a related field. 3+ years of hands-on experience with Java, Scala, scripting, REST, Spring Boot, and Jersey. Goldman Sachs is committed to fostering diversity and inclusion, offering comprehensive benefits including health, wellness, retirement, and family support programs. We are an equal opportunity …
Azure Event Hubs). Solid understanding of SQL, data modelling, and lakehouse architecture. Experience deploying via CI/CD tools (e.g., Azure DevOps, GitHub Actions). Nice to have: Knowledge of Scala/Java. Understanding of GDPR and handling sensitive data. This is a contract role (UK-based) offering the chance to work on high-impact projects shaping the future of finance …
services to distribute data to different consumers across PWM. Technologies used include:
Data Technologies: Kafka, Spark, Hadoop, Presto, Alloy (a data management and data governance platform)
Programming Languages: Java, Scala, scripting
Database Technologies: MongoDB, Elasticsearch, Cassandra, MemSQL, Sybase IQ/ASE
Microservice Technologies: REST, Spring Boot, Jersey
Build and CI/CD Technologies: Gradle, Jenkins, GitLab, SVN
Cloud Technologies: … in code review and collaborate on experimenting with new tech stacks. SKILLS AND EXPERIENCE WE ARE LOOKING FOR: A bachelor's degree in Computer Science, Mathematics, Engineering, or another related field. Java, Scala, scripting, REST, Spring Boot, Jersey. Kafka, Spark, Hadoop, MongoDB, Elasticsearch, MemSQL, Sybase IQ/ASE. 3+ years of hands-on experience with the relevant technologies. ABOUT GOLDMAN SACHS At Goldman Sachs …
Proven track record in data engineering and supporting the business to gain true insight from data. Experience in data integration and modelling, including ELT pipelines. Strong SQL and Python/Scala/R; experience with ELT pipelines and Power BI. Solid knowledge of data lakehouse architecture and Azure services. Insurance or MGA data experience preferred. Strong communication, stakeholder engagement, and problem …
experience - Experience with data modeling, warehousing, and building ETL pipelines - Experience with SQL - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) - Knowledge of AWS infrastructure - Knowledge of software engineering best practices across the development life cycle, including agile methodologies, coding standards …
and working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, Python, etc. Major Responsibilities: - Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business - Design, develop, and …
control, task tracking). · Demonstrable experience writing ETL scripts and code to make sure the ETL processes perform optimally. · Experience in other programming languages for data manipulation (e.g., Python, Scala). · Extensive experience of data engineering and the development of data ingest and transformation routines and services using modern, cloud-based approaches and technologies. · Understanding of the principles of data …
education) in a STEM discipline. Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate. Hands-on experience in Java, Spark, Scala (or Java). Production-scale, hands-on experience writing data pipelines using Spark or another distributed real-time/batch processing framework. Strong skill set in SQL/databases. Strong …
project deadlines. Strong collaborative spirit, working seamlessly with team members and external clients. Proficiency in R or Python. Solid understanding of SQL; experience working with Spark (Java, Python, or Scala variants) and cloud platforms like Databricks is a plus. Strong statistical knowledge, including hypothesis testing, confidence intervals, and A/B testing. Ability to understand and communicate the commercial impact …
practice. We are looking for experience in the following skills: Relevant work experience in data science, machine learning, and business analytics. Practical experience in coding languages, e.g., Python, R, Scala (Python preferred). Proficiency in database technologies, e.g., SQL, ETL, NoSQL, DW, and big data technologies, e.g., PySpark, Hive. Experience working with structured and unstructured data, e.g. …
the System Delivery Life Cycle. Experience using agile delivery tools such as JIRA, Pivotal, Collab, and Confluence. Experience of engineering based on the likes of SQL, SSIS, Python, Java, Scala, XML/FpML, and Power BI. Data architecture, data lineage, and all aspects of AI including, but not limited to, NLP, ML, deep learning, and generative AI. Testing/quality …
their favourite shows is a challenge that we relish. This means scaling and reliability are our primary focus in everything we build. The User Services Teams: We are functional Scala enthusiasts (Cats/Scalaz/ZIO/shapeless, etc.) who care about following best practice. We're responsible for things like registration, login and authentication, profiles and personalisation, and compliance … experienced advocates of functional programming, so you can expect to join a team that is applying principles from FP, reactive programming, and distributed computing to build these services, using Scala, Akka, Kafka, Play, and Cats, as well as a wide range of cloud-native technologies including AWS (Kinesis, DynamoDB, Lambda), Docker, and Serverless. We have a mature DevOps culture in place … where the team is responsible for the infrastructure and deployment of those applications: "You build it, you run it." What you will do: You will be using Scala, Akka, Kafka, Kinesis, and Dynamo to build and innovate our software that is distributed, reactive, and scalable. You will: Lead a product engineering team, ultimately responsible for the delivery of that product …
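For flavour, a minimal Cats Effect sketch of the effects-as-values style this team describes. The service and identifiers are invented for illustration; this is a sketch of the paradigm, not the team's actual code.

import cats.effect.{IO, IOApp}

// Hypothetical profile lookup: the effect is a plain value, composed with a
// for-comprehension, and only executed by the runtime via the run entry point.
object ProfileServiceSketch extends IOApp.Simple {

  def fetchProfile(userId: String): IO[String] =
    IO.pure(s"profile-for-$userId") // placeholder for a real store or Kafka-backed lookup

  val run: IO[Unit] =
    for {
      profile <- fetchProfile("user-123")
      _       <- IO.println(profile)
    } yield ()
}

The point of the style is that fetchProfile describes work without performing it, which is what makes such services easy to compose, retry, and test.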
build, and maintain scalable, fault-tolerant systems that support petabyte-scale workloads in production • Lead development of RESTful APIs using Java or similar high-performance languages (e.g., Go, Kotlin, Scala) • Contribute to the evolution of our internal platform by improving core infrastructure components and abstractions • Design and optimize data pipelines and backend systems leveraging distributed technologies like Apache Kafka and … Proven track record of designing and operating petabyte-scale systems in production environments • Deep experience with Java and working knowledge of other programming languages such as Python, Go, or Scala • Strong understanding and hands-on experience with RESTful service design and implementation • Solid background in distributed systems and experience with technologies like Kafka, Cassandra, or equivalents • Proficiency with Kubernetes (K8s …
Crimson - Burton-on-Trent, Staffordshire, England, United Kingdom (Hybrid/WFH options)
using Azure. Stay current with Azure technologies and identify areas for enhancement. Databricks (Unity Catalog, DLT), Data Factory, Synapse, Data Lake, Stream Analytics, Event Hubs. Strong knowledge of Python, Scala, C#, .NET. Experience with advanced SQL, T-SQL, and relational databases. Azure DevOps, Terraform, BICEP, ARM templates. Distributed computing, cloud-native design patterns. Data modelling, metadata management, data quality, data as …
of data engineering experience - Experience with data modeling, warehousing, and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) - Knowledge of AWS infrastructure - Knowledge of writing and optimizing SQL queries in a business environment with large-scale, complex datasets …
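To make the query-language requirement concrete, a small SparkSQL example in Scala; the table and columns are assumptions chosen for illustration.

import org.apache.spark.sql.SparkSession

// Register a hypothetical dataset as a temp view and query it with SparkSQL,
// one of the query languages the listings above mention.
object SparkSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("sparksql-sketch").getOrCreate()

    spark.read.parquet("s3://example-bucket/curated/orders").createOrReplaceTempView("orders")

    spark.sql(
      """SELECT customer_id, SUM(amount) AS total_spend
        |FROM orders
        |GROUP BY customer_id
        |ORDER BY total_spend DESC""".stripMargin
    ).show(10)

    spark.stop()
  }
}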
Excellent problem-solving skills and a collaborative mindset. Agile development experience in a team setting. Bonus skills (nice to have): Experience with big data tools like Hadoop, Spark, or Scala. Exposure to fraud, payments, or financial services platforms. Understanding of cloud-native development and container orchestration. Knowledge of test-driven development and modern code quality practices. What's on offer …
data engineering experience - Experience with SQL - Experience with data modeling, warehousing, and building ETL pipelines - Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS - Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large …
on products and services within their specialty domain, including expertise that extends beyond the Rackspace portfolio. Knowledge of modern software development practices, with exposure to any language; Java, C#, Node.js, Scala, Python, or Go preferred. Knowledge of building containerised applications and systems on modern container orchestration platforms. Knowledge of CI/CD pipelines with respect to serverless and container applications preferred. …
years of software development, with a minimum of 2 years of Ops or DevOps experience in trading environments and expertise in large-scale distributed systems; recent work in Java, Go, or Scala is preferred. Hands-on experience with container orchestration (Kubernetes, Docker, etc.) and cloud infrastructure, especially AWS; familiarity with Infrastructure-as-Code tools like Terraform or CloudFormation. Strong CI/CD …