Data Engineer, Commodities £135,000 Basic + Extensive Benefits, London My Commodities Trading Client is looking for a Data Engineer to join the trade surveillance team, developing ingestion pipelines and frameworks to support strategic decision making. The team has recently built a new data lakehouse and ingestion infrastructure. It is entering a phase … and re-engineering to enhance the speed, ease, and quality of ingestion across the portfolio. Key Requirements: Strong proficiency in Python and PySpark Proven track record in a Data Engineering role including Database Management (ideally SQL), Data Orchestration (ideally Apache Airflow or Dagster), Containerisation (ideally Kubernetes and Docker), Data Pipelines (big data technologies and More ❯
new technologies Perception Model Lifecycle Management: Lead the end-to-end development of scalable ML models for critical perception tasks (object detection, tracking, scene understanding, mapping), including MLOps, data pipelines, and optimization for robust onboard deployment Team Leadership & Mentorship: Build, mentor, and lead a high-performing team of machine learning engineers, fostering a culture of innovation, technical excellence … limitation, such as those covered by the Americans with Disabilities Act, that requires accommodations to assist you in the search and application process, please email us. Candidate Data Privacy Rivian may collect, use and disclose your personal information or personal data (within the meaning of the applicable data protection laws) when you apply … for employment and/or participate in our recruitment processes ("Candidate Personal Data"). This data includes contact, demographic, communications, educational, professional, employment, social media/website, network/device, recruiting system usage/interaction, security and preference information. Rivian may use your Candidate Personal Data for the purposes of (i) tracking interactions with More ❯
possible in research. We do impactful work with amazing organisations ranging from well-known brands to life-changing non-profits. Join us! About the role As a Senior Data Scientist you will be instrumental in shaping the technical backbone of the analytics function at Focaldata. In this role, you'll develop advanced R tools and scalable solutions to … implementation of cutting-edge modelling techniques. You'll collaborate on diverse projects, enhance our analytics infrastructure, and mentor team members in best practices. If you're passionate about data science, skilled in R, and eager to make an impact in public opinion and research, we'd love to hear from you. What you'll do This is a … will be: Build internal R packages to automate and standardise core tasks like weighting, modelling, and charting. Develop scalable tooling to support our research and analytics teams - from data pipelines to custom dashboards. Lead the development and deployment of advanced modelling techniques (e.g. multilevel regression, segmentation, random forest models, text analysis). Collaborate with the wider team on More ❯
or Cardiff, UK (Hybrid/Flexible) Salary: £80,000 - £90,000 DOE (Plus equity options) About Us We are a proptech company at the forefront of AI and data innovation, partnering with leading insurance firms and major banks to deliver intelligent, scalable solutions for the property sector. Our work blends deep technical expertise with the reliability, security, and … compliance required by highly regulated industries. We design and build systems that collect, integrate, and process hundreds of data sources - from live APIs and large-scale web crawls to internal datasets - and connect them to cutting-edge AI models, including fine-tuned LLMs and retrieval-augmented generation (RAG) pipelines. Our solutions enable smarter property decisions, faster operations, and … financial services sectors. Culture of innovation combined with enterprise-grade quality and governance. Apply to with your CV and a short description of your most impactful RAG or AI pipeline project. More ❯
maintain scalable, secure Python-based back-end services and APIs Develop and deploy cloud-native applications on AWS (e.g., Lambda, ECS, S3, RDS) Integrate with third-party financial data providers and internal portfolio systems Collaborate closely with front-end developers, DevOps, and business stakeholders Write clean, testable code and contribute to CI/CD pipelines and automated deployments … and deployment workflows Desirable: Experience with alternative asset strategies such as private credit, real estate, or private equity Exposure to event-driven systems (Kafka, Redis) and ETL/data pipelines Familiarity with portfolio management tools or investment data modeling Knowledge of security and compliance requirements in financial cloud environments Additional Information Competitive salary + discretionary bonus More ❯
Stevenage, England, United Kingdom Hybrid / WFH Options
Henderson Scott
Data Engineer - Generative AI Projects 📍 Hybrid | 2-3 Days On-Site | Up to £55,000 + Bonus + Package 🔒 British Citizens Only | BPSS Clearance Required We're hiring a Data Engineer with a passion for automation, clean pipelines, and the future of AI. This is your chance to shape how data powers Generative AI … on innovative internal use-cases - from AI assistants to intelligent search and document automation - this is the perfect next step. 🔧 What You'll Be Doing Building and maintaining data pipelines across structured & unstructured sources Collaborating with internal teams to support Generative AI and NLP projects Ensuring data is secure, compliant, high-quality, and easy to access … customers across multiple teams and functions 🧠 What You'll Bring Experience with SQL & NoSQL databases (e.g. MS SQL, MongoDB, Neo4J) Python skills for scripting and automation ETL and data exchange experience (e.g. APIs, ESB tools) Knowledge of Big Data (e.g. Hadoop) Curiosity about AI, particularly NLP, OCR or Generative AI 🌟 Bonus Points For Docker/containerisation More ❯
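To make the ETL-and-APIs requirement above concrete, here is a minimal ingestion sketch in the spirit of the role. The `fetch_records` stub stands in for a real API call and the `assets` table is an invented example schema, not the client's actual systems:

```python
import json
import sqlite3

def fetch_records() -> list[dict]:
    """Stand-in for a real API call (e.g. requests.get(url).json());
    returns raw JSON records as a source system might emit them."""
    payload = '[{"id": 1, "name": "pump-a", "status": "OK"},' \
              ' {"id": 2, "name": "pump-b", "status": "FAIL"}]'
    return json.loads(payload)

def load(conn: sqlite3.Connection, records: list[dict]) -> int:
    """Transform and load: keep only the fields downstream consumers need."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS assets "
        "(id INTEGER PRIMARY KEY, name TEXT, status TEXT)"
    )
    rows = [(r["id"], r["name"], r["status"]) for r in records]
    conn.executemany("INSERT OR REPLACE INTO assets VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

# Run the pipeline end to end against an in-memory database.
conn = sqlite3.connect(":memory:")
loaded = load(conn, fetch_records())
```

In a real deployment the stub would be replaced by an HTTP client and the SQLite target by a production store such as MS SQL or MongoDB; the load step is where data-quality and compliance checks would hook in.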
focus on overcoming the challenges associated with large-scale model training and processing of vast amounts of diverse design data. Your expertise in distributed systems, ML infrastructure, and data engineering will be crucial in developing the next generation of ML-powered product features that will help our customers imagine, design, and make a better world. You'll be … you're most productive while maintaining meaningful connections with colleagues. Responsibilities Support AI researchers by building scalable ML training pipelines and infrastructure for foundation model development Design efficient data processing workflows for large-scale design datasets and industry-specific file formats Optimize distributed training systems and develop solutions for model parallelism, checkpointing, and efficient resource management Analyze performance … AWS, Azure, etc.) Familiarity with version control, CI/CD, and deployment pipelines Excellent written documentation skills to document code, architectures, and experiments Preferred Qualifications Experience with AEC data formats (e.g., BIM models, IFC files, CAD files, Drawing Sets) Knowledge of the AEC industry and its specific data processing challenges Experience scaling ML training and data More ❯
Cambourne, Cambridgeshire, United Kingdom Hybrid / WFH Options
Remotestar
on nurturing AI talent, fostering knowledge sharing, and continuously evolving AI practices across the organization Location: Chennai, Onsite Experience: • 11+ years of experience in AI, machine learning, or data science, with a proven track record of delivering AI solutions. • 7+ years of experience in a leadership or architecture role, ideally with some experience in leading a Centre of … multiple industries is advantageous (e.g., healthcare, finance, retail). Skills: AI/ML Expertise: Strong understanding of machine learning algorithms, deep learning, natural language processing, computer vision, and data-driven problem-solving techniques. Architecture Skills: Proven ability to design and architect scalable, reliable, and high-performance AI solutions. Leadership and Communication: Excellent leadership skills with the ability to … in solution design, integration, and deployment. Consulting and Advisory: Work closely with stakeholders to identify business requirements and translate them into AI-powered solutions, including machine learning models, data pipelines, and AI-driven processes. Platform Selection and Integration: Evaluate and select appropriate AI tools, platforms, and technologies to meet business goals. Oversee integration with existing systems, ensuring scalability More ❯
develop, and deploy end-to-end AI/ML-powered features and applications using Python and modern frameworks. Build, integrate, and maintain machine learning models and solutions - from data pipelines to API deployment and user-facing interfaces. Collaborate with product managers, compliance experts, and engineers to translate business needs into intelligent technologies. Develop robust backend systems and APIs … models (LLMs, predictive/classification, agentic workflows), ensuring security and compliance standards are met. Implement best practices for MLOps, including CI/CD, automated testing, model versioning, and data validation. Architect integrations with vector databases, cloud platforms, and retrieval-augmented generation systems. Qualifications: 3+ years of experience delivering full-stack AI/ML applications in production. Proficiency in More ❯
The Capital IQ Solutions Data Science team supports the S&P Capital IQ Pro platform with innovative Data Science and Machine Learning solutions, utilizing the most advanced NLP Generative AI models. This role presents a unique opportunity for hands-on ML/NLP/Gen AI/LLM scientists and engineers to advance to the next … synthetic evaluation methods and metrics. Deploy NLP models while ensuring low latency, reliability, and scalability. Explore new methods for prompt engineering, model fine-tuning, optimization, document embeddings, and data chunking. Collaborate closely with product teams, business stakeholders, and engineers to ensure seamless integration of NLP models into production systems. Troubleshoot complex issues related to machine learning model development … and data pipelines, developing innovative solutions as needed. Actively research and identify the latest relevant methods and technologies in the field. What We're Looking For: Basic Required Qualifications: A degree in Computer Science, Mathematics, Statistics, Engineering or a related field. A solid understanding of Machine Learning and Deep Learning methods, along with their mathematical foundations. At least More ❯
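One technique the role names, data chunking for document embeddings, can be sketched in a few lines. The window and overlap sizes below are illustrative defaults, not the team's actual settings:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows, so that content near a
    chunk boundary still appears in full within at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # Stop once the remaining tail is already covered by the previous window.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

# Demo on a 500-character document: expect windows [0:200], [150:350], [300:500].
doc = "".join(str(i % 10) for i in range(500))
chunks = chunk_text(doc)
```

Production chunkers usually split on token or sentence boundaries rather than raw characters, but the overlap idea is the same.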
services using .NET Core/.NET 6+ Collaborate efficiently on an AI-driven project alongside the customer's Lead Backend Engineer Work closely with ML Engineers, App Developers, and Data teams for integrated feature delivery Design and build RESTful APIs to expose AI models and LLM prompt-based services Integrate trained ML models into backend systems supporting real product … features Build and maintain infrastructure for data ingestion, processing, and secure storage Support full end-to-end testing with QA and development teams to prevent security or data issues Create services to manage data ingestion for model training and dynamic personalization Implement access controls, password policies, and SSO using existing client authentication models Secure … APIs and data pipelines with encryption and appropriate authentication measures Required Skills and Experience Proven experience in .NET Core/.NET 6+ development Strong understanding of REST API design and backend microservices architecture Experience working on AI/ML integrations in production environments Familiarity with secure coding practices and enterprise-level data protection standards Comfortable with More ❯
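As an illustration of the API-security responsibilities listed above (not the client's actual scheme), request bodies can be authenticated with an HMAC signature using only standard-library primitives; the secret value here is a hypothetical placeholder:

```python
import hashlib
import hmac

# Illustrative only: real keys come from a secrets manager, never source code.
SECRET = b"example-shared-secret"

def sign(payload: bytes, secret: bytes = SECRET) -> str:
    """Compute an HMAC-SHA256 tag over a request body, sent as a header."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str, secret: bytes = SECRET) -> bool:
    """Recompute and compare in constant time to avoid timing side channels."""
    return hmac.compare_digest(sign(payload, secret), signature)

body = b'{"model": "churn-v2", "score": 0.87}'
tag = sign(body)
```

The same pattern works identically in .NET via `System.Security.Cryptography.HMACSHA256`; any tampering with the body invalidates the tag.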
london (city of london), south east england, united kingdom
Crisil
Banking Clients. Support the implementation & testing of various risk change programs within the firm, time series checks & implementation including sanity checks, validating, imputing Support in developing and optimizing data pipelines for risk calculations, trade analytics and regulatory reporting The role involves designing and implementing an effective data and controls framework for various Risk programs Work on … cleansing historical time series, imputing/backfilling missing values, implementing strategic controls for ensuring time series data quality Review & optimize the VaR back testing process, root cause findings of VBT exceptions Partner with technology, reconciliation unit, and data governance team, addressing exceptions and managing down applicable risk reduction metrics. Liaise with stakeholders (Department business team, IT, cross … department teams) involved in Data lineage work effort to manage multiple hand-offs and stitching activities to build end-to-end data lineage Help the risk manager to analyse & quantify the change impact; this may include digging through large data sets to understand the pattern/issue, replicating the calculation independently using the bank's pricing More ❯
Software Engineer, Analytics & Data Engineering London, England, United Kingdom Software and Services Description The ASE Analytics & Data Engineering team is responsible for building analytics platforms, datasets and processes required by Apple for analysing and powering customer experiences. This means we build computation platforms and datasets to empower our product, marketing, feature, analytic and data … complexity of our datasets, this is not a trivial task. We are looking for an outstanding Software Engineer who can effectively collaborate with our partner teams to deliver data engineering solutions to improve and power the next generation of Apple features. You will be working on cross-functional projects with other engineering teams, product leads and analytics leaders to … build insights, metrics and data pipelines. The projects you will be working on will be truly impactful. You will have the freedom to innovate as you work closely with our partners to drive meaningful change and build elegant systems to deliver the results. The ideal candidate will have a strong quality focus and be motivated by taking early production More ❯
via Docker/Kubernetes and integrate with orchestration systems (e.g., Airflow, custom schedulers). • Work with platform engineers to embed Spark jobs into InfoSum's platform APIs and data pipelines. • Troubleshoot job failures, memory and resource issues, and execution anomalies across various runtime environments. • Optimize Spark job performance and advise on best practices to reduce cloud compute and … in at least two major cloud environments (AWS, GCP, Azure). • In-depth knowledge of AWS Glue, including job authoring, triggers, and cost-aware configuration. • Familiarity with distributed data formats (Parquet, Avro), data lakes (Iceberg, Delta Lake), and cloud storage systems (S3, GCS, Azure Blob). • Hands-on experience with Docker, Kubernetes, and CI/CD … to support and coach internal teams. Key Indicators of Success: • Spark jobs are performant, fault-tolerant, and integrated into InfoSum's platform with minimal overhead. • Cost of running data processing workloads is optimized across cloud environments. • Engineering teams are equipped with best practices for writing, deploying, and monitoring Spark workloads. • Operational issues are rapidly identified and resolved, with More ❯
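The containerisation and tuning duties above typically come together in a `spark-submit` invocation. The sketch below uses real Spark-on-Kubernetes options, but the image name, API-server address, job name, and sizing figures are placeholders to adapt per workload:

```shell
# Illustrative config fragment, not InfoSum's actual deployment.
spark-submit \
  --master k8s://https://kube-apiserver:6443 \
  --deploy-mode cluster \
  --name nightly-enrichment \
  --conf spark.kubernetes.container.image=registry.example.com/etl/spark-job:latest \
  --conf spark.executor.instances=4 \
  --executor-memory 6g \
  --executor-cores 2 \
  --conf spark.sql.shuffle.partitions=200 \
  --conf spark.sql.files.maxPartitionBytes=134217728 \
  local:///opt/app/job.py
```

Right-sizing `spark.executor.instances`, executor memory, and shuffle partitions against the actual data volume is usually where most of the cloud-cost savings mentioned in the posting come from.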
Reading, Berkshire, United Kingdom Hybrid / WFH Options
Opus Recruitment Solutions Ltd
AWS Data Engineer - Contract Location: Reading (Hybrid - 1-2 days/month onsite) Rate: £500-550/day (Inside IR35) Start Date: ASAP Duration: 5 months (with potential for extension) A leading financial services organisation is seeking an experienced AWS Data Engineer to join their Compliance Reporting team. This backend-focused role involves designing and deploying … scalable data solutions that support the delivery of regulatory compliance reports across the business. You'll work with a modern AWS stack and infrastructure-as-code tools to build robust data pipelines and applications that process complex datasets from multiple operational systems. Key Responsibilities: Build and maintain AWS-based ETL/ELT pipelines using S3, Glue … PySpark/Python), Lambda, Athena, Redshift, and Step Functions Develop backend applications to automate and support compliance reporting Process and validate complex data formats including nested JSON, XML, and CSV Collaborate with stakeholders to deliver technical solutions aligned with regulatory requirements Manage CI/CD workflows using Bitbucket, Terraform, and Atlantis Support database management and improve data More ❯
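The nested-JSON processing this role mentions often starts by flattening records into tabular columns before they land in Glue or Athena. A minimal sketch, in which the trade record is an invented example rather than the client's data:

```python
import json

def flatten(record: dict, parent: str = "", sep: str = ".") -> dict:
    """Recursively flatten nested dicts into dotted column names, the shape
    columnar/tabular outputs (e.g. for Athena tables) expect."""
    out = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            out.update(flatten(value, name, sep))
        else:
            out[name] = value
    return out

# Hypothetical nested compliance record, as it might arrive from an API.
raw = json.loads('{"trade": {"id": "T-1", "venue": {"mic": "XLON"}}, "qty": 100}')
row = flatten(raw)
```

At Glue scale the equivalent would be done with PySpark (e.g. exploding struct columns), but the column-naming convention is the same.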
For more information, contact Mieke Van Tonder at . This is an exciting opportunity for someone passionate about technology, data, and machine learning, eager to contribute to a collaborative environment. The Data Scientist will build and deploy machine learning models to support personalization, recommendations, anomaly detection, and data insights. The role involves working with … as monitoring, continuous integration, and automated retraining. Utilize AI-assisted development tools like Cursor and Copilot to improve productivity. Collaborate with engineers, DevOps, and leadership to ensure robust data pipelines and translate business needs into technical solutions. Requirements At least 3 years of experience in applied machine learning and deploying production models. Proficiency in Python, SQL, and frameworks … learn. Experience with AWS services and Databricks; understanding of ML Ops is highly beneficial. Ability to adapt quickly to new tools and deliver scalable solutions independently. Familiarity with data pipelines involving Kafka, Debezium, S3, Lambda, and Delta Lake is a plus. This job posting is More ❯
s eCommerce ontology - the authoritative source of product knowledge driving exceptional customer experiences. Applied Scientists in this role solve problems related to product classification, attribute extraction, ontology modeling, data integration and enrichment, and scalable knowledge services. It's challenging due to the vast scale, heterogeneous data sources, and evolving domains, but exciting for pushing boundaries in … from you! Key job responsibilities - Lead the research and development of novel AI solutions to enrich and curate Amazon's product ontology (Product Knowledge) at scale - Develop scalable data processing pipelines and architectures to ingest, transform, and enrich product data from various sources (seller listings, customer reviews, etc.) - Collaborate with engineers to design and implement robust … them to Product Knowledge A day in the life The Amazon product ontology is a structured knowledge base representing product types, attributes, classes, and relationships. It standardizes product data, enabling enhanced customer experiences through improved search and recommendations, streamlined selling processes, and internal data enrichment across Amazon's eCommerce ecosystem. You will work with the following stakeholders More ❯
BI Developer/Data Analyst (Power BI) Location: West Midlands Salary: Up to £65,000 Type: Full-time, Office-based About the Role An established food manufacturer and supplier is implementing their first-ever ERP system, focusing on four key modules: Manufacturing, Compliance, Reliability, and Learning. Each module will take approximately 4-6 months to complete. To support … and maintain Power BI dashboards and reports delivering actionable insights. Collaborate with internal stakeholders and external implementation partners throughout the ERP rollout and ongoing system maintenance. Develop robust data pipelines and write advanced SQL queries for ETL processes and data analysis. Migrate and integrate data between systems to ensure seamless communication and reporting. Continuously … monitor and improve BI tools post-ERP implementation. Analyse large datasets to uncover trends supporting strategic decision-making. Ensure data integrity, security, and automation of workflows to maintain reliability and efficiency. Benefits Competitive salary up to £65,000 25 days holiday plus bank holidays Onsite parking available Employee rewards and retail discounts Pension scheme (details to be confirmed More ❯
on highly impactful problems Promote a positive culture of collaboration, through open and effective communication, particularly when addressing issues or raising concerns. Are able to form well-reasoned, data-driven or otherwise evidence-based arguments, to influence key stakeholders across the business. Required Skills and Experience Has 5+ years commercial experience Expert level in (JavaScript, TypeScript, Python) Familiar … with Postgres and K8s Feels at home in the AWS console Has built infrastructure with Terraform Bonus points Has worked in an intelligence collection setting Experience with "big data" technologies, the management of data, and data pipelines Familiarity with functional programming concepts Has run production workloads of 1000s QPS Has been part of an "on More ❯