Preferred:
• Experience in front-office roles or collaboration with trading desks
• Familiarity with financial instruments across asset classes (equities, FX, fixed income, derivatives)
• Experience with distributed computing frameworks (e.g., Spark, Dask) and cloud-native ML pipelines
• Exposure to LLMs, graph learning, or other advanced AI methods
• Strong publication record or open-source contributions in ML or quantitative finance
Please …
New Broadcasting House, Portland Place, London, England
BBC Public Service
networking opportunities to help you take your next step - whether that’s at the BBC or elsewhere in the industry.
Apprenticeship Standard: Data analyst (level 4)
Training Provider: Cambridge Spark Limited
Working Week: 35 hours per week; days and shifts TBC
Expected Duration: 1 year 6 months (18 months)
Positions Available: 1
Closing Date: Thursday, 13th November 2025
Start …
equivalent UCAS points (please ensure A-Level grades are included on your CV).
• Basic scripting knowledge in Python or Bash
• Excellent customer-facing skills
• A sales spark - while this isn't a focused sales role, some selling is required given the nature of the position
• A motivated self-starter with a problem-solving attitude
• Strong …
be required in the role; we are happy to support your learning on the job, but prior experience is a plus:
• Experience with large-scale data processing frameworks (e.g., Spark, Flink)
• Experience with time series analysis, anomaly detection, or graph analytics in a security context (see the sketch below)
• Proficiency in data visualization tools and techniques to effectively communicate complex findings
• A …
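To make the anomaly-detection bullet concrete, here is a minimal hypothetical sketch of the technique in a security setting: flagging spikes in an event-count series against a rolling baseline. The column layout, window, and threshold are illustrative assumptions, not taken from the listing.

```python
import pandas as pd

# Flag spikes in an event-count time series (e.g., failed logins per
# minute) whose z-score against a rolling baseline exceeds a threshold.
# Window and threshold values are illustrative assumptions.
def flag_anomalies(counts: pd.Series, window: int = 60, threshold: float = 3.0) -> pd.Series:
    baseline_mean = counts.rolling(window).mean().shift(1)  # exclude the current point
    baseline_std = counts.rolling(window).std().shift(1)
    z = (counts - baseline_mean) / baseline_std
    return z.abs() > threshold

events = pd.Series(
    [5, 6, 4, 5, 7, 120, 5, 6],
    index=pd.date_range("2024-01-01", periods=8, freq="min"),
)
print(events[flag_anomalies(events, window=3, threshold=2.0)])  # flags the 120 spike
```

Shifting the rolling statistics by one step keeps the current observation out of its own baseline, so a large spike cannot mask itself.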
on building solutions for the business. In addition, you'll be responsible for the following:
• Designing, developing, and optimizing end-to-end data pipelines (batch and streaming) using Azure Databricks, Spark, and Delta Lake (see the sketch below)
• Implementing Medallion Architecture and building scalable ETL/ELT processes with Azure Data Factory and PySpark
• Partnering with the data architecture function to support data governance … across pipelines
• Working collaboratively with analysts to validate and refine datasets for reporting
• Applying DevOps and CI/CD best practices (Git, Azure DevOps) for automated testing and deployment
• Optimizing Spark jobs, Delta Lake tables, and SQL queries for performance and cost efficiency
• Troubleshooting and resolving data pipeline issues proactively
• Partnering with Data Architects, Analysts, and Business Teams to deliver …
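As a minimal illustration of one Medallion hop of the kind this role describes, the sketch below reads a raw Bronze Delta table, cleans it, and writes a Silver table. Paths, column names, and the dedup key are hypothetical assumptions; it presumes a Spark session with Delta Lake configured.

```python
from pyspark.sql import SparkSession, functions as F

# Bronze -> Silver: deduplicate raw events, enforce types, drop bad rows.
# Paths and column names are illustrative assumptions.
spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

silver = (
    bronze
    .dropDuplicates(["event_id"])                        # remove replayed events
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # enforce a timestamp type
    .filter(F.col("event_id").isNotNull())               # drop malformed rows
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/events")
```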
London (City of London), South East England, United Kingdom
Movement8
assessments and predictive models. Optimize models for performance, scalability, and accuracy.
Qualifications:
• Deep knowledge of neural networks (CNNs, RNNs, LSTMs, Transformers)
• Strong experience with data tools (Pandas, NumPy, Apache Spark)
• Solid understanding of NLP algorithms
• Experience integrating ML models via RESTful APIs (see the sketch below)
• Familiarity with CI/CD pipelines and deployment automation
• Strategic thinking around architecture and …
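For the RESTful-API bullet, here is a minimal hypothetical sketch of exposing a trained model behind an HTTP endpoint with Flask. The DummyModel stand-in, route, and payload shape are assumptions; a real deployment would load a serialized network instead.

```python
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

class DummyModel:
    """Stand-in for a trained network; a real artifact would be loaded from disk."""
    def predict(self, features: np.ndarray) -> np.ndarray:
        return features.sum(axis=1)

model = DummyModel()

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [0.1, 0.2, 0.3]}; the shape is an assumption.
    payload = request.get_json()
    features = np.asarray(payload["features"], dtype=float).reshape(1, -1)
    return jsonify({"prediction": float(model.predict(features)[0])})

if __name__ == "__main__":
    app.run(port=8000)
```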
London (City of London), South East England, United Kingdom
Cpl Life Sciences
job responsibilities
• Design, develop, and oversee the reporting pipelines that onboard metrics provided by internal and external data sources into the reporting and measurement tool, using appropriate technologies (e.g., SQL, Python, Spark, AWS Lambda; see the sketch below)
• Design, develop, and optimize data visualization for the dashboards used across functional teams within XCM
• Collaborate with stakeholders (marketing, advertising, engineering, and product management) to understand …
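A minimal hypothetical sketch of the kind of reporting aggregation described: rolling raw campaign events up into daily dashboard metrics with PySpark. The table and column names are assumptions, not taken from the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("reporting-rollup").getOrCreate()

# Raw event-level data; table and column names are illustrative assumptions.
events = spark.table("raw.campaign_events")

daily_metrics = (
    events
    .groupBy("campaign_id", F.to_date("event_ts").alias("event_date"))
    .agg(
        F.count("*").alias("impressions"),
        F.sum(F.col("clicked").cast("int")).alias("clicks"),
    )
    .withColumn("ctr", F.col("clicks") / F.col("impressions"))  # click-through rate
)

# Publish a summary table the dashboards can read directly.
daily_metrics.write.mode("overwrite").saveAsTable("reporting.daily_campaign_metrics")
```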
London, South East England, United Kingdom (Hybrid / WFH Options)
Lorien
models applied to the context of MMM modelling
• Solid experience with probabilistic programming and Bayesian methods (see the sketch below)
• Expertise in mining large and very complex data sets using SQL and Spark
• In-depth understanding of statistical modelling techniques and their mathematical foundations
• Good working knowledge of PyMC and cloud-based data science frameworks and toolkits
• Working knowledge …
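To ground the PyMC bullet, here is a minimal hypothetical sketch of the Bayesian flavour of media mix modelling: regressing sales on channel spend with positivity-constrained coefficients. The synthetic data, priors, and channel names are all assumptions for illustration.

```python
import arviz as az
import numpy as np
import pymc as pm

# Synthetic weekly data for two media channels; purely illustrative.
rng = np.random.default_rng(0)
tv_spend = rng.gamma(2.0, 1.0, size=104)
search_spend = rng.gamma(2.0, 1.0, size=104)
sales = 3.0 + 1.5 * tv_spend + 0.8 * search_spend + rng.normal(0, 0.5, size=104)

with pm.Model() as mmm:
    intercept = pm.Normal("intercept", mu=0, sigma=5)
    beta_tv = pm.HalfNormal("beta_tv", sigma=2)       # media effect assumed non-negative
    beta_search = pm.HalfNormal("beta_search", sigma=2)
    noise = pm.HalfNormal("noise", sigma=1)
    mu = intercept + beta_tv * tv_spend + beta_search * search_spend
    pm.Normal("obs", mu=mu, sigma=noise, observed=sales)
    trace = pm.sample(1000, tune=1000, chains=2)

print(az.summary(trace, var_names=["beta_tv", "beta_search"]))
```

A full MMM would add adstock and saturation transforms on spend; the half-normal priors simply encode that media cannot reduce sales.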
the latest tech, serious brain power, and deep knowledge of just about every industry. We believe a mix of data, analytics, automation, and responsible AI can do almost anything: spark digital metamorphoses, widen the range of what humans can do, and breathe life into smart products and services. Want to join our crew of sharp analytical minds? You'll …
London (City of London), South East England, United Kingdom
Mondrian Alpha
technologists, and analysts to enhance the quality, timeliness, and accessibility of data.
• Contribute to the evolution of modern cloud-based data infrastructure, working with tools such as Airflow, Kafka, Spark, and AWS (see the sketch below)
• Monitor and troubleshoot data workflows, ensuring continuous delivery of high-quality, analysis-ready datasets
• Play a visible role in enhancing the firm’s broader data strategy …
• Strong ability in Python (including libraries such as pandas and NumPy) and proficiency with SQL
• Confident working with ETL frameworks, data modelling principles, and modern data tools (Airflow, Kafka, Spark, AWS)
• Experience working with large, complex datasets from structured, high-quality environments, e.g. consulting, finance, or enterprise tech
• STEM degree in Mathematics, Physics, Computer Science, Engineering, or a …
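As a minimal hypothetical sketch of the orchestration named in the first bullet, the Airflow DAG below extracts a raw file, applies a basic quality gate, and publishes an analysis-ready dataset. Paths, schedule, and the validation rule are assumptions; it presumes Airflow 2.x with the TaskFlow API.

```python
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def market_data_pipeline():
    @task
    def extract() -> str:
        # Stand-in for pulling a vendor extract; the path is an assumption.
        raw_path = "/tmp/raw_prices.csv"
        pd.DataFrame({"ticker": ["ABC"], "close": [101.2]}).to_csv(raw_path, index=False)
        return raw_path

    @task
    def validate_and_publish(raw_path: str) -> None:
        df = pd.read_csv(raw_path)
        assert df["close"].notna().all(), "missing close prices"  # basic quality gate
        df.to_parquet("/tmp/clean_prices.parquet", index=False)   # analysis-ready output

    validate_and_publish(extract())

market_data_pipeline()
```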
team leadership and upskilling responsibilities.
Key Responsibilities
• Build and maintain Databricks Delta Live Tables (DLT) pipelines across Bronze → Silver → Gold layers, ensuring quality, scalability, and reliability (see the sketch below)
• Develop and optimise Spark (PySpark) jobs for large-scale distributed processing
• Design and implement streaming data pipelines with Kafka/MSK, applying best practices for late event handling and throughput
• Use Terraform and …
(… role) Mentor and upskill engineers, define coding standards, and embed engineering excellence across the team.
What's Expected
• Proven experience delivering end-to-end data pipelines in Databricks and Spark environments
• Strong understanding of data modelling, schema evolution, and data contract management
• Hands-on experience with Kafka, streaming architectures, and real-time processing principles
• Proficiency with Docker, Terraform, and …
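A minimal hypothetical sketch of a DLT hop across the first two layers this role names: land raw Kafka records as Bronze, then parse and quality-check them into Silver. The broker, topic, and payload fields are assumptions, and the code runs inside a Databricks DLT pipeline (where dlt and spark are provided), not as a standalone script.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(name="bronze_orders", comment="Raw order records landed from Kafka")
def bronze_orders():
    # Broker address and topic are illustrative assumptions.
    return (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "orders")
        .load()
    )

@dlt.table(name="silver_orders", comment="Parsed, quality-checked orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop malformed rows
def silver_orders():
    return (
        dlt.read_stream("bronze_orders")
        .select(F.col("value").cast("string").alias("payload"))
        .withColumn("order_id", F.get_json_object("payload", "$.order_id"))
    )
```

The Gold layer would follow the same pattern, aggregating Silver into business-level tables.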
City of London, London, United Kingdom (Hybrid / WFH Options)
Attis
on engineering role with a strong focus on data, customer collaboration, and real-world outcomes.
Essential Requirements
• Proficiency in Python and/or TypeScript
• Experience with distributed systems (e.g. Spark, Kafka, Hadoop; see the sketch below)
• Background in AI/ML, ideally with exposure to GenAI or LLMs
• SC Clearance
• Strong communication skills and ability to gather requirements from non-technical stakeholders
• Bachelor …
SEO Keywords for Search: Forward Deployed Engineer, FDE, Software Engineer, Data Engineer, Machine Learning Engineer, AI Engineer, GenAI, LLM, Python Developer, TypeScript Developer, Full Stack Engineer, Distributed Systems Engineer, Spark, Kafka, Hadoop, React Developer, Enterprise Software Engineer, Consulting Engineer, Analytics Engineer, Technical Consultant, Palantir Foundry, AIP, Hybrid Software Engineer, London Tech Jobs, Mission-Driven Engineering Roles, FDSE, Python, Java …
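As a minimal hypothetical sketch pairing two of the distributed systems named above, the snippet below consumes a Kafka topic with Spark Structured Streaming and maintains a running count per event. The broker and topic are assumptions, and the job needs the spark-sql-kafka connector on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-event-counts").getOrCreate()

# Broker address and topic are illustrative assumptions.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

counts = (
    stream.select(F.col("value").cast("string").alias("event"))
    .groupBy("event")
    .count()
)

# Print running totals to the console; "complete" mode re-emits full counts.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```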