London, England, United Kingdom Hybrid / WFH Options
Endava Limited
delivering high-quality solutions aligned with business objectives.

Key Responsibilities
Architect, implement, and maintain real-time and batch data pipelines to handle large datasets efficiently. Employ frameworks such as Apache Spark, Databricks, Snowflake, or Airflow to automate ingestion, transformation, and delivery (a minimal sketch of such a pipeline follows this listing).

Data Integration & Transformation
Work with Data Analysts to understand source-to-target mappings and quality requirements. Build ETL … security measures (RBAC, encryption) and ensure regulatory compliance (GDPR). Document data lineage and recommend improvements for data ownership and stewardship.

Qualifications
Programming: Python, SQL, Scala, Java.
Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc.
Data Modelling: Designing dimensional, relational, and hierarchical data models.
Scalability & Performance: Building fault-tolerant, highly available data architectures.
Security & Compliance: Enforcing role-based access …
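To make the pipeline-automation responsibility above concrete, here is a minimal sketch of a daily batch DAG in Apache Airflow (assuming Airflow 2.4+ for the `schedule` argument). The DAG id, task names, and function bodies are hypothetical placeholders, not the employer's actual pipeline.

```python
# A minimal sketch, not a real production pipeline: a daily Airflow DAG with
# hypothetical ingest -> transform -> deliver tasks. Function bodies are
# placeholders; the DAG id and task ids are invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest(**context):
    # Placeholder: pull the day's raw records from a source system.
    print("ingesting batch for", context["ds"])


def transform(**context):
    # Placeholder: clean and reshape the raw records.
    print("transforming batch for", context["ds"])


def deliver(**context):
    # Placeholder: load the transformed records into the warehouse.
    print("delivering batch for", context["ds"])


with DAG(
    dag_id="daily_batch_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    deliver_task = PythonOperator(task_id="deliver", python_callable=deliver)

    # Dependencies encode the ingestion -> transformation -> delivery flow.
    ingest_task >> transform_task >> deliver_task
```

The `>>` operator declares task dependencies, so the scheduler runs ingestion, transformation, and delivery in order for each daily run.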
learning systems at scale
You have experience architecting data pipelines and are self-sufficient in getting the data you need to build and evaluate models, using tools like Dataflow, Apache Beam, or Spark (see the sketch after this listing)
You care about agile software processes, data-driven development, reliability, and disciplined experimentation
You have experience and passion for fostering collaborative teams
Experience with TensorFlow, PyTorch … and/or other scalable machine learning frameworks
Experience with building data pipelines and getting the data you need to build and evaluate your models, using tools like Apache Beam/Spark

Where You'll Be
We offer you the flexibility to work where you work best! For this role, you can be within the EMEA region as long …
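Since the role above centres on building data pipelines with Apache Beam or Spark, here is a minimal Beam sketch in Python: a handful of in-memory events filtered and counted per key. The event schema and step labels are invented; a production pipeline would read from a real source and run on a managed runner such as Dataflow.

```python
# A minimal sketch of an Apache Beam pipeline: filter events, then count
# plays per user. Runs locally on the DirectRunner by default.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateEvents" >> beam.Create([
            {"user": "a", "action": "play"},
            {"user": "a", "action": "skip"},
            {"user": "b", "action": "play"},
        ])
        | "KeepPlays" >> beam.Filter(lambda e: e["action"] == "play")
        | "KeyByUser" >> beam.Map(lambda e: (e["user"], 1))
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```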
relational and NoSQL databases. Experience with data modelling. General understanding of data architectures and event-driven architectures. Proficient in SQL. Familiarity with one scripting language, preferably Python. Experience with Apache Airflow & Apache Spark. Solid understanding of cloud data services: AWS services such as S3, Athena, EC2, Redshift, EMR (Elastic MapReduce), EKS, RDS (Relational Database Service), and Lambda. Nice to have …
of financial products. Hard-working, intellectually curious, and team-oriented. Strong communication skills. Experience with options trading or options data is a strong plus. Experience with technologies like KDB, Apache Iceberg, and Lake Formation will be a meaningful differentiator.
code
Experience working on distributed systems
Strong knowledge of Kubernetes and Kafka
Experience with Git and deployment pipelines
Having worked with at least one of the following stacks: Hadoop, Apache Spark, Presto; AWS Redshift, Azure Synapse, or Google BigQuery
Experience profiling performance issues in database systems
Ability to learn and/or adapt quickly to complex issues
Happy to …
City of London, England, United Kingdom Hybrid / WFH Options
McGregor Boyall
term changes, with a key focus on public cloud onboarding. The platform is a greenfield build using modern technologies such as Java, Spring Boot, Kubernetes, Kafka, MongoDB, RabbitMQ, Solace, and Apache Ignite. The platform runs in a hybrid mode, both on-premises and in AWS, utilising technologies such as EKS, S3, and FSx.

Objectives
Steering platform onboarding into AWS and Google Cloud …
ability to explain complex data concepts to non-technical stakeholders.

Preferred Skills:
Experience with insurance platforms such as Guidewire, Duck Creek, or legacy PAS systems.
Knowledge of Delta Lake, Apache Spark, and data pipeline orchestration tools.
Exposure to Agile delivery methodologies and tools like JIRA, Confluence, or Azure DevOps.
Understanding of regulatory data requirements such as Solvency II, Core …
Skilled Data Engineer for Cloud Data Lake activities. The candidate should have industry experience (preferably in Financial Services) in navigating enterprise Cloud applications using distributed computing frameworks such as Apache Spark, Hadoop, and Hive. Working knowledge of optimizing database performance and scalability, and of ensuring data security and compliance.

Education & Preferred Qualifications
Bachelor's/Master's Degree in Computer Science, Engineering, or Math …
Salisbury, Wiltshire, South West, United Kingdom Hybrid / WFH Options
Anson Mccade
modern data lake/lakehouse architectures
Strong grasp of cloud data platforms (AWS, Azure, GCP, Snowflake)
Understanding of Data Mesh, Data Fabric, and data product-centric approaches
Familiarity with Apache Spark, Python, and ETL/ELT pipelines
Strong knowledge of data governance, lifecycle management, and compliance (e.g. GDPR)
Consulting experience delivering custom data solutions across sectors
Excellent leadership, communication …
SQL, and Python.

Requirements:
3+ years of data engineering experience
Snowflake experience
Proficiency across an AWS tech stack
DBT expertise
Terraform experience
Expert SQL and Python
Data Modelling
Data Vault
Apache Airflow

My client has very limited interview slots and they are looking to fill this vacancy ASAP. I have limited slots for 1st stage interviews next week, so if …
opportunities to optimise data workflows, adopt emerging technologies, and enhance analytics capabilities.

Requirements:
Technical Proficiency: Hands-on experience building ETL/ELT pipelines with Python, SQL, or tools like Apache Airflow, and expertise in visualisation tools (Power BI, Tableau, or Looker).
Cloud Expertise: Familiarity with cloud platforms like Snowflake, Databricks, or AWS/GCP/Azure for scalable …
experience in Data Engineering or a related field
Strong proficiency in Python (PySpark, Pandas) and SQL (see the sketch after this listing)
Experience with cloud platforms (AWS, GCP, or Azure)
Familiarity with big data technologies (Apache Spark, Hadoop, Kafka) is a plus

How to Apply: Fill out the application form here: https://docs.google.com/forms/d/e/1FAIpQLSdnd7LZnaxgJf438qrP7O_8pWAptGF8nYUqpA8L-vI0NiEsKg
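As a rough illustration of the PySpark and Pandas proficiency this listing asks for, here is a minimal, self-contained sketch: aggregate in Spark, where the data may be too large for one machine, then convert the small result to Pandas for local analysis. Column names and values are invented.

```python
# A minimal sketch of combining PySpark and Pandas: aggregate a DataFrame
# in Spark, then hand the small result to Pandas locally.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sketch").getOrCreate()

orders = spark.createDataFrame(
    [("uk", 120.0), ("uk", 80.0), ("de", 200.0)],
    ["country", "amount"],
)

# Aggregate in Spark, where the full dataset may be distributed...
totals = orders.groupBy("country").agg(F.sum("amount").alias("total"))

# ...then convert the small aggregated result to Pandas for local work.
totals_pd = totals.toPandas()
print(totals_pd)
```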
containerization and CI/CD tools (e.g., Docker, GitHub Actions).
Knowledge of networking and cloud infrastructure (e.g., AWS, Azure).
Experience with modern data processing frameworks (e.g., dbt, Apache Airflow, Spark, or similar).

Requirements
A strong focus on system observability and data quality.
Emphasis on rapid scalability of solutions (consider market ramp-up when entering a new …
Newcastle upon Tyne, England, United Kingdom Hybrid / WFH Options
Noir
They're Looking For:
Experience in a data-focused role, with a strong passion for working with data and delivering value to stakeholders.
Strong proficiency in SQL, Python, and Apache Spark, with hands-on experience using these technologies in a production environment.
Experience with Databricks and Microsoft Azure is highly desirable.
Financial Services experience is a plus but not …
understanding of Python and the machine learning ecosystem in Python (NumPy, Pandas, scikit-learn, LightGBM, PyTorch)
Knowledge of SQL and experience with relational databases
Agile, action-oriented

Nice to have
Apache Spark
Experience working in cloud platforms (AWS, GCP, Microsoft Azure)
Relevant knowledge or experience in the gaming industry
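As a closing illustration of the Python ML stack named in this listing (NumPy for data, scikit-learn for modelling), here is a minimal train-and-evaluate sketch on synthetic data. The data, model choice, and parameters are invented for illustration; scikit-learn's gradient-boosted classifier stands in for the LightGBM-style models mentioned.

```python
# A minimal sketch of a supervised learning workflow: synthetic data,
# train/test split, fit a boosted-tree classifier, report accuracy.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0)  # stands in for LightGBM
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```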