City of London, London, United Kingdom Hybrid / WFH Options
CBSbutler Holdings Limited
operational dashboards.
- Advanced proficiency with Microsoft BI Stack: SSIS, SSRS
- Strong SQL Server skills and SQL querying experience
- Hands-on experience with Google Cloud Platform tools including: BigQuery; Composer; Apache Airflow; Stream; Informatica; Vertex AI
- Tableau dashboard development and reporting
- Python programming for data analysis
- Data modelling (warehouse, lakehouse, medallion architecture)
- Understanding of financial and insurance data models
- Insurance More ❯
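The BigQuery and Python-for-analysis requirements above can be pictured with a small sketch (not part of the advert): the project, dataset, and column names below are invented, and the snippet assumes the google-cloud-bigquery and pandas packages are installed.

```python
# Minimal sketch: pull a small aggregate from BigQuery into pandas for analysis.
# "my-project", "analytics", and "policies" are hypothetical names, not from the posting.
import pandas as pd
from google.cloud import bigquery

def monthly_premiums() -> pd.DataFrame:
    client = bigquery.Client(project="my-project")  # uses application-default credentials
    sql = """
        SELECT DATE_TRUNC(policy_date, MONTH) AS month,
               SUM(premium) AS total_premium
        FROM `my-project.analytics.policies`
        GROUP BY month
        ORDER BY month
    """
    return client.query(sql).to_dataframe()  # requires the pandas/db-dtypes extras

if __name__ == "__main__":
    print(monthly_premiums().head())
```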
City of London, London, United Kingdom Hybrid / WFH Options
SR2 | Socially Responsible Recruitment | Certified B Corporation™
skills and attention to detail
Excellent communication and teamwork
Must-Haves
Mobile development experience (React Native/Cordova/Expo)
Familiarity with AI frameworks (OpenAI, Claude, or AWS Bedrock)
Apache, Nginx, Linux experience
UX/UI design knowledge
Product management experience
Bachelor’s degree in Computer Science (Master’s a plus)
Qualities We Value
Customer obsession – putting user experience More ❯
London (City of London), South East England, United Kingdom
Movement8
assessments and predictive models. Optimize models for performance, scalability, and accuracy.
Qualifications:
Deep knowledge of neural networks (CNNs, RNNs, LSTMs, Transformers).
Strong experience with data tools (Pandas, NumPy, Apache Spark).
Solid understanding of NLP algorithms.
Experience integrating ML models via RESTful APIs.
Familiarity with CI/CD pipelines and deployment automation.
Strategic thinking around architecture and trade More ❯
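As a rough illustration of the "integrating ML models via RESTful APIs" point above (a sketch, not the employer's stack), the snippet below wraps a pre-trained model in a FastAPI endpoint; the model file, feature shape, and route are hypothetical.

```python
# Minimal sketch: expose a trained model behind a REST endpoint with FastAPI.
# The model file name, feature list, and route are hypothetical examples.
import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:  # assumes a pre-trained, pickled scikit-learn-style model
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]  # flat feature vector for a single prediction

@app.post("/predict")
def predict(features: Features) -> dict:
    score = model.predict([features.values])[0]  # sklearn-style models expect a 2D array
    return {"prediction": float(score)}

# Run locally (assuming this file is saved as service.py): uvicorn service:app --reload
```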
London (City of London), South East England, United Kingdom
Vallum Associates
and Python programming languages.
· Strong understanding of graph databases (e.g., RDF, Neo4j, GraphDB).
· Experience with data modeling and schema design.
· Knowledge of data pipeline tools and frameworks (e.g., Apache Airflow, Luigi).
· Excellent problem-solving and analytical skills.
· Ability to work independently and as part of a team.
Clinical knowledge More ❯
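To illustrate the data-pipeline tooling mentioned above, here is a minimal, hypothetical Apache Airflow DAG with two dependent tasks; the dag_id, schedule, and task bodies are invented placeholders.

```python
# Minimal sketch of an Airflow DAG with two dependent tasks.
# The dag_id, schedule, and task bodies are hypothetical, not from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull records from a source system")  # placeholder for real extraction logic

def load_graph() -> None:
    print("write nodes and relationships to a graph store")  # placeholder for e.g. Neo4j writes

with DAG(
    dag_id="example_graph_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load_graph", python_callable=load_graph)
    extract_task >> load_task
```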
London (City of London), South East England, United Kingdom
Signify Technology
What you'll do
Lead the architecture and evolution of large-scale streaming and backend systems, ensuring they are resilient, performant, and future-ready
Apply deep knowledge of technologies such as Apache Flink, Kafka, and Spark to design reliable data pipelines and event-driven systems
Leverage your experience with modern backend languages such as Java, Scala, or Go to build and More ❯
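The role centres on Flink, Kafka, and Spark with Java, Scala, or Go; purely as an illustrative sketch (kept in Python to match the other examples here), the snippet below consumes events from a Kafka topic with kafka-python. The topic, broker address, and payload handling are hypothetical.

```python
# Minimal sketch of an event-driven consumer; topic, broker, and payload shape are invented.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                            # hypothetical topic name
    bootstrap_servers="localhost:9092",  # hypothetical broker
    group_id="example-consumer",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Placeholder processing step: a real pipeline might enrich the event and
    # forward it to a downstream topic or data store.
    print(message.topic, message.offset, event)
```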
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
What You'll Be Doing
You'll be a key contributor to the development of a next-generation data platform, with responsibilities including:
Designing and implementing scalable data pipelines using Python and Apache Spark
Building and orchestrating workflows using AWS services such as Glue, Lambda, S3, and EMR Serverless
Applying best practices in software engineering: CI/CD, version control, automated testing … and modular design
Supporting the development of a lakehouse architecture using Apache Iceberg
Collaborating with product and business teams to deliver data-driven solutions
Embedding observability and quality checks into data workflows
Participating in code reviews, pair programming, and architectural discussions
Gaining domain knowledge in financial data and sharing insights with the team
What They're Looking For
Core … for experience with type hints, linters, and testing frameworks like pytest)
Solid understanding of data engineering fundamentals: ETL/ELT, schema evolution, batch processing
Experience or strong interest in Apache Spark for distributed data processing
Familiarity with AWS data tools (e.g., S3, Glue, Lambda, EMR)
Strong communication skills and a collaborative mindset
Comfortable working in Agile environments and engaging More ❯
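As a loose sketch of the kind of pipeline work described above (not the client's actual codebase), the snippet below shows a small, type-hinted PySpark transformation written as a pure function so it could be unit-tested with pytest; the paths and column names are invented, and writing to an Apache Iceberg table would additionally require the Iceberg runtime and catalog configuration, which is omitted here.

```python
# Sketch: a small, type-hinted PySpark transformation kept as a pure function
# so it can be unit-tested with pytest. Paths and column names are invented.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def daily_totals(trades: DataFrame) -> DataFrame:
    """Aggregate raw trade rows into one total per instrument per day."""
    return (
        trades
        .withColumn("trade_date", F.to_date("executed_at"))
        .groupBy("trade_date", "instrument_id")
        .agg(F.sum("notional").alias("total_notional"))
    )

if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-totals").getOrCreate()
    raw = spark.read.parquet("s3://example-bucket/raw/trades/")  # hypothetical input path
    daily_totals(raw).write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")
```

In a pytest suite, the same function could be exercised against a tiny DataFrame built from a local SparkSession, which is one common way to keep such transformations under automated test.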
London (City of London), South East England, United Kingdom
Capgemini
Spark/Scala Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining large-scale data processing pipelines using Apache Spark and Scala. You will work closely with data scientists, analysts, and engineers to build efficient data solutions and enable data-driven decision-making.
Key Responsibilities:
Develop, optimize, and … maintain data pipelines and ETL processes using Apache Spark and Scala.
Design scalable and robust data processing solutions for batch and real-time data.
Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
Perform data ingestion, transformation, and cleansing from various structured and unstructured sources.
Monitor and troubleshoot Spark jobs, ensuring high performance and More ❯