data pipelines to serve the easyJet analyst and data science community. Strong hands-on experience with relevant data engineering technologies, such as Databricks, Spark, the Spark API, Python, SQL Server and Scala. Work with data scientists, machine learning engineers and DevOps engineers to develop and deploy machine learning … development experience with Terraform or CloudFormation. Understanding of the ML development workflow and knowledge of when and how to use dedicated hardware. Significant experience with Apache Spark or another distributed data programming framework (e.g. Flink, Hadoop, Beam). Familiarity with Databricks as a data and AI platform or the … data privacy and handling of sensitive data (e.g. GDPR). Experience in event-driven architecture, ingesting data in real time in a commercial production environment with Spark Streaming, Kafka, DLT or Beam. Understanding of the challenges faced in the design and development of a streaming data pipeline and the different options …
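To ground the streaming requirement above, here is a minimal sketch of a Spark Structured Streaming job reading from Kafka and landing events in a Delta table. The broker address, topic name, schema and paths are all hypothetical, and Delta output assumes the delta-spark package is available.

```python
# Minimal Structured Streaming sketch: Kafka -> parsed JSON -> Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("streaming-ingest").getOrCreate()

# Schema for the incoming JSON events (illustrative only).
schema = StructType([
    StructField("event_id", StringType()),
    StructField("route", StringType()),
    StructField("fare", DoubleType()),
])

# Read from Kafka as an unbounded stream.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "bookings")                   # hypothetical topic
       .load())

# Kafka values arrive as bytes; parse the JSON payload into columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Checkpointing gives the pipeline restart/recovery semantics.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/bookings")
         .outputMode("append")
         .start("/tmp/tables/bookings"))

query.awaitTermination()
```

The checkpoint location is what lets the job resume from its last committed offsets after a failure, one of the classic streaming-pipeline design challenges the listing alludes to.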
Databricks. Must have hands-on experience with at least 2 hyperscalers (GCP/AWS/Azure platforms), specifically with big data processing services (Apache Spark, Beam or equivalent). In-depth knowledge of key technologies like BigQuery/Redshift/Synapse/Pub/Sub/Kinesis … years' experience in a similar role. Ability to lead and mentor architects. Mandatory skills [at least 2 hyperscalers]: GCP, AWS, Azure, big data, Apache Spark/Beam on BigQuery/Redshift/Synapse, Pub/Sub/Kinesis/MQ/Event Hubs, Kafka, Dataflow/Airflow/ADF …
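Since Beam is named as the portable processing layer across hyperscalers, a minimal runner-agnostic pipeline sketch may help; the bucket paths are hypothetical, and the same code runs on the local DirectRunner or on Dataflow depending on the options passed in.

```python
# Minimal Apache Beam sketch: count non-empty lines across input files.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# In production you would pass e.g. --runner=DataflowRunner --project=...
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
     | "Strip" >> beam.Map(lambda line: line.strip())
     | "NonEmpty" >> beam.Filter(bool)
     | "CountAll" >> beam.combiners.Count.Globally()
     | "Write" >> beam.io.WriteToText("gs://example-bucket/out/count"))
```

The point of Beam here is exactly the portability the listing asks for: one pipeline definition, multiple execution backends.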
communication skills and demonstrated ability to engage with business stakeholders and product teams. Experience in data modeling, data warehousing (e.g., Snowflake, AWS Glue, EMR, Apache Spark), and working with data pipelines. Leadership experience, whether technical mentorship, team leadership, or managing critical projects. Familiarity with Infrastructure as Code …
/ELT tools, APIs, and integration platforms. Deep knowledge of data modelling, warehousing, and real-time analytics. Familiarity with big data technology principles (e.g., Spark, Hadoop) and BI tools (e.g., Power BI, Tableau). Strong programming skills (e.g., SQL, Python, Java, or similar languages). Ability to exercise a …
Skills: 5+ years' experience with Python programming for data engineering tasks. Strong proficiency in SQL and database management. Hands-on experience with Databricks and Apache Spark. Familiarity with the Azure cloud platform and related services. Knowledge of data security best practices and compliance standards. Excellent problem-solving and communication …
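As an illustration of the Python/SQL/Databricks/Spark combination this listing asks for, here is a minimal batch sketch. It assumes a Databricks-style environment where `spark` is already provided and a hypothetical `sales.orders` table exists.

```python
# Minimal PySpark sketch: same aggregation via DataFrame API and SQL.
from pyspark.sql import functions as F

orders = spark.table("sales.orders")  # hypothetical source table

# DataFrame API version of a daily revenue rollup.
daily = (orders
         .groupBy(F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("customers")))

# Equivalent SQL version; both compile to the same execution plan.
orders.createOrReplaceTempView("orders")
daily_sql = spark.sql("""
    SELECT to_date(order_ts)            AS order_date,
           SUM(amount)                  AS revenue,
           COUNT(DISTINCT customer_id)  AS customers
    FROM orders
    GROUP BY to_date(order_ts)
""")

daily.write.mode("overwrite").saveAsTable("sales.daily_revenue")
```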
ingest millions of data points daily, and develop highly available data processing and REST services for different PWM consumers. Technologies include:
Data Technologies: Kafka, Spark, Hadoop, Presto, Alloy
Programming Languages: Java, Scala, Scripting
Microservice Technologies: REST, Spring Boot, Jersey
Build & CI/CD: Gradle, Jenkins, GitLab, SVN
Cloud Technologies …
or SVN. Capable of presenting technical issues and successes to team members and Product Owners. Nice to have: experience with any of Python, Spark, Kafka, Kinesis, Kinesis Analytics, BigQuery, Dataflow, BigTable, and SQL. Enthusiastic about learning and applying new technologies (growth mindset). Ability to build new solutions …
build, operate, maintain, and support cloud infrastructure and data services. Skills to automate and optimize data engineering pipelines. Experience with big data technologies (Databricks, Spark). Development of custom security applications, APIs, AI/ML models, and advanced analytics technologies. Experience with threat detection in Azure Sentinel, Databricks, MPP …
Product Owners. Experience of people management, or a desire to manage individuals on the team. Nice to have: experience with some of Python, Spark, Kafka, Kinesis, Kinesis Analytics, BigQuery, Dataflow, BigTable, and SQL. Enthusiastic about learning and applying new technologies (growth mindset). Ability to build new solutions …
of different platforms. The data will be stored and transported securely while remaining efficiently queryable. Technologies used include:
Data Technologies: Kafka, Spark, Debezium, GraphQL
Programming Languages: Java, Scripting
Database Technologies: MongoDB, ElasticSearch, MemSQL, Sybase IQ/ASE
Microservice Technologies: REST, Spring Boot, Jersey
Build and …
data architecture, including data modeling, warehousing, real-time and batch processing, and big data frameworks. Proficiency with modern data tools and technologies such as Spark, Databricks, Kafka, or Snowflake (bonus). Knowledge of cloud security, networking, and cost optimization as it relates to data platforms. Experience in total cost …
Birmingham, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
software practices (SCRUM/Agile, microservices, containerization such as Docker/Kubernetes). We'd also encourage you to apply if you possess: experience with Spark/Databricks; experience deploying ML models via APIs (e.g., Flask, Keras); startup experience or familiarity with geospatial and financial data. The Interview Process …
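To make the "deploying ML models via APIs" requirement concrete, here is a minimal Flask serving sketch. The pickled model file, route, and payload shape are all hypothetical.

```python
# Minimal model-serving sketch: a pickled model behind one REST endpoint.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumes a scikit-learn-style model saved at model.pkl (hypothetical).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}.
    payload = request.get_json(force=True)
    preds = model.predict(payload["features"])
    return jsonify({"predictions": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

In production this would typically sit behind a WSGI server such as gunicorn inside a Docker container, which ties back to the containerization skills listed above.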
algorithms, model optimization, and deploying scalable solutions in cloud environments. Experience with version control (Git), Linux, Docker, and data engineering tools such as Hadoop, Spark, and Elasticsearch. Strong problem-solving, team collaboration, and communication skills. Business Development (either in a commercial environment or through 'selling your ideas') is a …
Birmingham, England, United Kingdom Hybrid / WFH Options
Ripjar
libraries such as PyTorch, scikit-learn, numpy and scipy. Good communication and interpersonal skills. Experience working with large-scale data processing systems such as Spark and Hadoop. Experience in software development in agile environments and an understanding of the software development lifecycle. Experience using or implementing ML Operations approaches …
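For a sense of the day-to-day modelling work behind the library list above, here is a minimal scikit-learn sketch; the synthetic numpy data stands in for real features.

```python
# Minimal train/evaluate sketch with scikit-learn and numpy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy target, illustrative only

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```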
software engineering skills, understanding of MLOps practices and experience in managing data science projects. Proficiency in handling large datasets and using tools such as Apache Spark, TensorFlow, PyTorch, and Azure. Experience with generative AI technologies such as GPT-3 and other LLMs, and MLOps tools such as MLflow. Advanced …
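Since MLflow is named as the MLOps tool, a minimal tracking sketch may be useful; it assumes MLflow 2.x and a hypothetical experiment name.

```python
# Minimal MLflow sketch: log params, a metric, and the model for a run.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name
with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 200}
    model = LogisticRegression(**params).fit(X, y)

    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")
```

Each run is then comparable in the MLflow UI, which is the reproducibility discipline "MLOps practices" usually refers to.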
track record in networks, telecom, or customer experience domains (preferred). Proficiency in cloud platforms like GCP, AWS, or Azure, plus tools like Kafka, Spark, Snowflake, Databricks. Skilled collaborator with excellent stakeholder and vendor management capabilities. Confident communicator with the ability to bridge technical and business audiences. …
Birmingham, England, United Kingdom Hybrid / WFH Options
JR United Kingdom
career progression opportunities across the Group, including several high-profile household names. What you'll bring: Experience with cloud and big data technologies (e.g., Spark, Databricks, Delta Lake, BigQuery). Familiarity with eventing technologies (e.g., Event Hubs, Kafka) and file formats such as Parquet, Delta, or Iceberg. Interested in …
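To contrast the file formats mentioned above, here is a minimal sketch writing the same DataFrame as plain Parquet and as a Delta table; paths are hypothetical and Delta support assumes the delta-spark package is on the classpath.

```python
# Minimal sketch: Parquet files vs. a Delta table (Parquet + transaction log).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("formats-demo")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

df = spark.createDataFrame([(1, "LHR"), (2, "CDG")], ["id", "airport"])

# Plain Parquet: efficient columnar files, but no transactional guarantees.
df.write.mode("overwrite").parquet("/tmp/out/airports_parquet")

# Delta: the same Parquet underneath, plus a log enabling ACID writes
# and time travel across table versions.
df.write.format("delta").mode("overwrite").save("/tmp/out/airports_delta")
```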
technical issues to non-technical team members. Experience in analytics at app-based/consumer tech companies. Familiarity with big data frameworks like Snowflake, Spark and AWS services. Experience creating automated data tables using dbt and Airflow. Knowledge of the Atlassian suite, including Bitbucket, Confluence, and Jira. Experience resolving …
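As a sketch of the "automated data tables using dbt and Airflow" pattern, here is a minimal DAG that builds and then tests a dbt project. It assumes Airflow 2.4+ and a hypothetical project path of /opt/dbt/analytics.

```python
# Minimal Airflow DAG sketch: run dbt models daily, then run dbt tests.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test",
    )
    # Tests only execute after the models build successfully.
    dbt_run >> dbt_test
```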
and create highly available data processing and REST services to distribute data to different consumers across PWM. Technologies used include:
Data Technologies: Kafka, Spark, Hadoop, Presto, Alloy (a data management and data governance platform)
Programming Languages: Java, Scala, Scripting
Database Technologies: MongoDB, ElasticSearch, Cassandra, MemSQL, Sybase IQ/…
EXPERIENCE WE ARE LOOKING FOR
Computer Science, Mathematics, Engineering or another related degree at bachelor's level
Java, Scala, Scripting, REST, Spring Boot, Jersey
Kafka, Spark, Hadoop, MongoDB, ElasticSearch, MemSQL, Sybase IQ/ASE
3+ years of hands-on experience with relevant technologies
ABOUT GOLDMAN SACHS
At Goldman Sachs, we …
to join its innovative team. This role requires hands-on experience with machine learning techniques and proficiency in data manipulation libraries such as Pandas, Spark, and SQL. As a Data Scientist at PwC, you will work on cutting-edge projects, using data to drive strategic insights and business decisions. … e.g. sklearn) and deep learning frameworks such as PyTorch and TensorFlow. Understanding of machine learning techniques. Experience with data manipulation libraries (e.g. Pandas, Spark, SQL). Git for version control. Cloud experience (we use Azure/GCP/AWS). Skills we'd also like to hear about …
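To illustrate the data manipulation skills this listing emphasises, a minimal Pandas sketch follows; the toy transactions frame is illustrative only.

```python
# Minimal Pandas sketch: per-customer features from raw transactions.
import pandas as pd

tx = pd.DataFrame({
    "customer": ["a", "a", "b", "c", "c"],
    "amount": [10.0, 25.0, 5.0, 40.0, 15.0],
    "ts": pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-02",
                          "2024-01-05", "2024-01-06"]),
})

# Spend, recency and order count: the kind of features a model would consume.
features = (tx.groupby("customer")
              .agg(total_spend=("amount", "sum"),
                   last_seen=("ts", "max"),
                   n_orders=("amount", "size"))
              .reset_index())
print(features)
```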