contract assignment.

Key Requirements:
- Proven background in AI and data development
- Strong proficiency in Python, including data-focused libraries such as Pandas, NumPy, and PySpark
- Hands-on experience with Apache Spark (PySpark preferred)
- Solid understanding of data management and processing pipelines
- Experience in algorithm development and graph data structures is advantageous
- Active SC Clearance is mandatory

Role Overview … You will play a key role in developing and delivering advanced AI solutions for a Government client. Responsibilities include:
- Designing, building, and maintaining data processing pipelines using Apache Spark (a minimal sketch follows this listing)
- Implementing ETL/ELT workflows for large-scale datasets
- Developing and optimising Python-based data ingestion tools
- Collaborating on the design and deployment of machine learning models … performance across distributed systems
- Contributing to data architecture and storage strategy design
- Working with cloud data platforms (AWS, Azure, or GCP) to deploy scalable solutions
- Monitoring, troubleshooting, and tuning Spark jobs for performance and cost efficiency
- Engaging regularly with customers and internal stakeholders

This is an excellent opportunity to join a high-profile organisation on a long-term contract …
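To make the pipeline responsibilities above concrete, here is a minimal sketch of an extract-transform-load job in PySpark. The storage paths, column names, and cleansing rules are assumptions for illustration, not details of the client's system.

```python
# A minimal ETL-style PySpark pipeline sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# Extract: read raw CSV files from a landing zone (illustrative path)
raw = spark.read.option("header", True).csv("s3://landing-zone/events/")

# Transform: deduplicate, parse timestamps, drop unparseable rows
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet to a curated zone
clean.write.mode("overwrite").partitionBy("event_date").parquet("s3://curated-zone/events/")
```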
In order to be successful, you will have the following experience:
- Extensive AI & Data Development background
- Experience with Python (including data libraries such as Pandas, NumPy, and PySpark) and Apache Spark (PySpark preferred)
- Strong experience with data management and processing pipelines
- Algorithm development and knowledge of graphs will be beneficial
- SC Clearance is essential

Within this role, you … will be responsible for:
- Supporting the development and delivery of AI solutions to a Government customer
- Designing, developing, and maintaining data processing pipelines using Apache Spark
- Implementing ETL/ELT workflows to extract, transform, and load large-scale datasets efficiently
- Developing and optimising Python-based applications for data ingestion
- Collaborating on the development of machine learning models
- Ensuring data … to the design of data architectures, storage strategies, and processing frameworks
- Working with cloud data platforms (e.g., AWS, Azure, or GCP) to deploy scalable solutions
- Monitoring, troubleshooting, and optimising Spark jobs for performance and cost efficiency (see the tuning sketch below)
- Liaising with the customer and internal stakeholders on a regular basis

This represents an excellent opportunity to secure a long-term contract within a …
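Both of these near-identical listings call out tuning Spark jobs for performance and cost efficiency. The sketch below shows a few standard levers; the configuration values and the table path are generic illustrations, not recommendations from the engagement.

```python
# Illustrative Spark tuning levers; values are generic starting points,
# not client-specific recommendations.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-example")
    # Size shuffle parallelism to the cluster instead of the default of 200.
    .config("spark.sql.shuffle.partitions", "400")
    # Adaptive Query Execution coalesces small shuffle partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

df = spark.read.parquet("s3://curated-zone/events/")  # hypothetical path

# Cache only when a DataFrame is reused across several actions.
df.cache()
df.count()  # materialises the cache

# Align partitioning with how downstream jobs read or write the data.
df = df.repartition("event_date")
```

In practice, inspecting the Spark UI for skewed or spilled stages usually comes before changing any of these values.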
AI: Practical experience with Deep Learning frameworks (e.g., TensorFlow, PyTorch) for applications like NLP (Transformer models, BERT) or computer vision.
Big Data Tools: Experience with big data platforms like Spark (PySpark) for handling large-scale datasets.
MLOps: Familiarity with MLOps tools and concepts (e.g., Docker, Kubernetes, MLflow, Airflow) for model deployment and lifecycle management (a minimal tracking example follows).
Financial Domain Knowledge: Direct experience …
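Where the listing names MLflow for lifecycle management, a minimal, hypothetical tracking example looks like the following; the run name, parameters, and metric are invented for illustration.

```python
# Minimal MLflow tracking sketch; all names and values are illustrative.
import mlflow

with mlflow.start_run(run_name="example-run"):
    mlflow.log_param("model_type", "transformer")  # e.g., a BERT fine-tune
    mlflow.log_param("learning_rate", 3e-5)
    # ... training loop would go here ...
    mlflow.log_metric("val_f1", 0.87)  # placeholder result, not a real score
```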
troubleshoot data workflows and performance issues

Essential Skills & Experience:
- Proficiency in SQL, Python, or Scala
- Experience with cloud platforms such as AWS, Azure, or GCP
- Familiarity with tools like Apache Spark, Kafka, and Airflow (a minimal DAG sketch follows below)
- Strong understanding of data modelling and architecture
- Knowledge of CI/CD pipelines and version control systems

Additional Information: This role requires active SC …
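As a rough illustration of the Airflow orchestration mentioned above, a minimal DAG might look like this; the DAG id, schedule, and task body are assumptions for the example.

```python
# Minimal Airflow DAG sketch; ids, schedule, and task logic are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for the real extraction/load logic
    print("extract and load here")

with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```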
City of London, London, United Kingdom Hybrid / WFH Options
ECS
cloud data engineering, with a strong focus on building scalable data pipelines
- Expertise in Azure Databricks, including building and managing ETL pipelines using PySpark or Scala (see the merge sketch below)
- Solid understanding of Apache Spark, Delta Lake, and distributed data processing concepts
- Hands-on experience with Azure Data Lake Storage, Azure Data Factory, and Azure Synapse Analytics
- Proficiency in SQL and Python …
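For the Databricks and Delta Lake work this role describes, an upsert into a Delta table is a typical ETL step. The sketch below assumes hypothetical ADLS paths and a customer_id join key; it is an illustration, not the client's actual pipeline.

```python
# Hedged sketch of a Delta Lake MERGE (upsert) in Azure Databricks.
# Storage paths and the join key are assumptions, not real project values.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

updates = spark.read.parquet(
    "abfss://raw@exampleaccount.dfs.core.windows.net/customers/"
)

target = DeltaTable.forPath(
    spark, "abfss://curated@exampleaccount.dfs.core.windows.net/customers/"
)

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh existing customers
    .whenNotMatchedInsertAll()   # insert new ones
    .execute()
)
```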
City of London, London, United Kingdom Hybrid / WFH Options
Syntax Consultancy Limited
data modelling techniques + data integration patterns.
- Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design.
- Implementing complex data transformations using Spark, PySpark or Scala + working with SQL/MySQL databases (one illustrative transformation is sketched below).
- Experience with data quality, data governance processes, Git version control + Agile development environments.
- Azure Data Engineer certification preferred …
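As one concrete example of the "complex data transformations using Spark" the listing refers to, a windowed deduplication keeps the latest record per key. The source table and column names here are hypothetical.

```python
# Illustrative PySpark transformation: keep the latest row per order_id.
# The source table and column names are assumptions for the example.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
orders = spark.read.table("raw.orders")  # hypothetical source

# Rank rows within each order_id by recency, then keep only the newest.
w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
latest = (
    orders.withColumn("rn", F.row_number().over(w))
          .filter("rn = 1")
          .drop("rn")
)
```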
Bromley, Kent, England, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
experience with AWS data platforms and related services. Solid grasp of data governance principles, including data quality, metadata management, and access control. Familiarity with big data technologies such as Spark and Hadoop, and distributed computing concepts. Proficiency in SQL and at least one programming language (e.g., Python, Java).

Preferred Qualifications: Relevant certifications in data architecture, cloud platforms, or …
scripting (Python, Bash) and programming (Java). Hands-on experience with DevOps tools: GitLab, Ansible, Prometheus, Grafana, Nagios, Argo CD, Rancher, Harbor. Deep understanding of big data technologies: Hadoop, Spark, and NoSQL databases.

Nice to Have: Familiarity with agile methodologies (Scrum or Kanban). Strong problem-solving skills and a collaborative working style. Excellent communication skills, with the ability …
with AWS data platforms and their respective data services. Solid understanding of data governance principles, including data quality, metadata management, and access control. Familiarity with big data technologies (e.g., Spark, Hadoop) and distributed computing. Proficiency in SQL and at least one programming language (e.g., Python, Java).

6 Month Contract, Inside IR35. Immediately available. London, up to 2 times a …
ensure data integrity and reliability. Optimise data workflows for performance, cost-efficiency, and maintainability using tools such as Azure Data Factory, AWS Data Pipeline for data orchestration, Databricks, or Apache Spark. Support the integration of data into visualisation platforms (e.g. Power BI, ServiceNow) and other analytical environments. Ensure compliance with data governance, security, and privacy policies. Document data architecture …
and ETL/ELT processes. Proficiency in AWS data platforms and services. Solid understanding of data governance principles (data quality, metadata, access control). Familiarity with big data technologies (Spark, Hadoop) and distributed computing. Advanced SQL skills and proficiency in at least one programming language (Python, Java).

Additional Requirements: Immediate availability for an October start. Must be UK …
Saffron Walden, Essex, South East, United Kingdom Hybrid / WFH Options
EMBL-EBI
advantageous
- Good communication skills
- Experience in Python and/or Java development
- Experience in git and basic Unix commands

You may also have:
- Experience with large data processing technologies (Apache Spark)

Other helpful information:
Hybrid Working: At EMBL-EBI we are pleased to offer hybrid working options for all our employees. Our team work at least two days …
Newcastle Upon Tyne, Tyne and Wear, England, United Kingdom
Opus Recruitment Solutions Ltd
SC cleared Software developers (Python & AWS) to join a contract till April 2026.
- Inside IR35
- SC cleared
- Weekly travel to Newcastle
- Around £400 per day
- Contract till April 2026

Skills:
- Python
- AWS Services
- Terraform
- Apache Spark
- Airflow
- Docker
Preferred:
- Experience in front-office roles or collaboration with trading desks
- Familiarity with financial instruments across asset classes (equities, FX, fixed income, derivatives)
- Experience with distributed computing frameworks (e.g., Spark, Dask) and cloud-native ML pipelines
- Exposure to LLMs, graph learning, or other advanced AI methods
- Strong publication record or open-source contributions in ML or quantitative finance

Please …
be on designing and maintaining the data pipelines that feed large-scale ML and research workflows. Day-to-day responsibilities include:
- Building and maintaining data pipelines using Python, SQL, Spark, and Google Cloud technologies (BigQuery, Cloud Storage).
- Ensuring pipelines are robust, reliable, and optimised for AI/ML use cases.
- Developing automated tests, documentation, and monitoring for production … best practices, and continuously improving performance and quality (see the pytest sketch below).

Tech Stack & Skills
Core Skills:
- Strong experience with Python and SQL in production environments
- Proven track record developing data pipelines using Spark, BigQuery, and cloud tools (preferably Google Cloud)
- Familiarity with CI/CD and version control (git, GitHub, DevOps workflows)
- Experience with unit testing (e.g., pytest) and automated quality checks …
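For the automated testing the role calls for, a pytest-style unit test over a small, hypothetical pipeline step might look like this; the function under test is invented for illustration.

```python
# Sketch of a pytest unit test for a pipeline transformation.
# drop_null_ids is a hypothetical pipeline step, not a real project function.
import pandas as pd

def drop_null_ids(df: pd.DataFrame) -> pd.DataFrame:
    """Remove rows whose id is missing."""
    return df.dropna(subset=["id"]).reset_index(drop=True)

def test_drop_null_ids():
    df = pd.DataFrame({"id": [1, None, 3], "value": ["a", "b", "c"]})
    result = drop_null_ids(df)
    assert result["id"].notna().all()  # no null ids survive
    assert len(result) == 2            # exactly one row was dropped
```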
team leadership and upskilling responsibilities.

Key Responsibilities:
- Build and maintain Databricks Delta Live Tables (DLT) pipelines across Bronze → Silver → Gold layers, ensuring quality, scalability, and reliability (see the sketch below).
- Develop and optimise Spark (PySpark) jobs for large-scale distributed processing.
- Design and implement streaming data pipelines with Kafka/MSK, applying best practices for late event handling and throughput.
- Use Terraform and … role) Mentor and upskill engineers, define coding standards, and embed engineering excellence across the team.

What's Expected:
- Proven experience delivering end-to-end data pipelines in Databricks and Spark environments.
- Strong understanding of data modelling, schema evolution, and data contract management.
- Hands-on experience with Kafka, streaming architectures, and real-time processing principles.
- Proficiency with Docker, Terraform, and …
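A minimal sketch of the medallion-style DLT pipeline described above, assuming a hypothetical JSON source path and column names. Note this only runs inside a Databricks Delta Live Tables pipeline, where the dlt module and the spark session are provided.

```python
# Hedged Bronze → Silver → Gold sketch using Delta Live Tables.
# Source path, schema, and expectations are illustrative assumptions.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw events ingested as-is via Auto Loader")
def bronze_events():
    return (
        spark.readStream.format("cloudFiles")  # `spark` is injected by Databricks
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/events")  # hypothetical landing path
    )

@dlt.table(comment="Silver: typed and validated")
@dlt.expect_or_drop("valid_ts", "event_ts IS NOT NULL")  # drop broken rows
def silver_events():
    return dlt.read_stream("bronze_events").withColumn(
        "event_ts", F.to_timestamp("event_ts")
    )

@dlt.table(comment="Gold: daily event counts for consumption")
def gold_daily_counts():
    return (
        dlt.read("silver_events")
        .groupBy(F.to_date("event_ts").alias("day"))
        .count()
    )
```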
Employment Type: Contract
Rate: Up to £0.00 per day + Flexible depending on experience