who is responsible for designing enterprise AI solutions that align with business objectives while ensuring scalability, security, and efficiency. This role translates business needs into AI-driven architectures (across data ingestion, models and operational workflows), ensuring feasibility, cost-effectiveness, and adherence to industry best practices and firmwide standards. The AI Solution Architect works closely with the sales teams … existing IT infrastructure. This role is for you if you have: Proven experience architecting AI-driven solutions for enterprise customers, collaborating with sales and engineering teams. Ability to assess data dependencies, business constraints, and success criteria to deliver AI solutions that meet user and business needs. Expertise in AI Solution Architecture, including LLM/SLM (Large Language Models/… Security & FinOps, including AI landing zones, model refinement and testing, and compliance strategies. Experience in Enterprise AI Design, focusing on scalable, secure, and cost-effective AI integration. Understanding of data science principles, enabling informed evaluation of machine learning and statistical models. What you'll receive from us: No matter where you may be in your career or personal life …
and implement model registries, versioning systems, and experiment tracking to ensure full reproducibility of all model releases. Deploy ML workflows using tools like Airflow or similar, managing dependencies from data ingestion through model deployment and serving. Instrument comprehensive monitoring for model performance, data drift, prediction quality, and system health. Manage infrastructure as code (Terraform, or similar) for … or similar model management systems. Practical knowledge of infrastructure as code, CI/CD best practices, and cloud platforms (AWS, GCP, or Azure). Experience with relational databases and data processing and query engines (Spark, Trino, or similar). Familiarity with monitoring, observability, and alerting systems for production ML (Prometheus, Grafana, Datadog, or equivalent). Understanding of ML concepts. … and reproducibility, but you also enable teams to ship fast. Nice to Have: Experience delivering API services (FastAPI, Spring Boot, or similar). Experience with message brokers and real-time data and event processing (Kafka, Pulsar, or similar). Why Join Us: You'll be part of a small, high-output team where intensity and focus are the norm. You …
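The model-registry and versioning responsibility described above can be illustrated with a minimal sketch. This is not any particular registry product's API; the class, method names, and hashing scheme are hypothetical, chosen to show the core idea that a version identifier derived from the artifact plus its training parameters makes releases reproducible.

```python
import hashlib
import json


class ModelRegistry:
    """Toy registry: each registered model gets a version id keyed by a
    hash of its artifact bytes and training parameters, so identical
    inputs always reproduce the same version (illustrative design)."""

    def __init__(self):
        self._versions = {}

    def register(self, name, artifact_bytes, params):
        # Deterministic id: content + sorted params -> same hash every time.
        payload = artifact_bytes + json.dumps(params, sort_keys=True).encode()
        digest = hashlib.sha256(payload).hexdigest()[:12]
        self._versions[(name, digest)] = {"params": params, "size": len(artifact_bytes)}
        return digest

    def lookup(self, name, version):
        return self._versions[(name, version)]


registry = ModelRegistry()
v1 = registry.register("churn-model", b"weights-v1", {"lr": 0.01, "epochs": 10})
v2 = registry.register("churn-model", b"weights-v1", {"lr": 0.01, "epochs": 10})
assert v1 == v2  # same artifact + params reproduce the same version id
print(registry.lookup("churn-model", v1)["params"])
```

Real systems (MLflow, or similar) add artifact storage and lineage on top, but the content-addressed versioning idea is the same.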
london (city of london), south east england, united kingdom
algo1
and scale reliable, high-performing software in both private and public cloud environments, then the GCI and TRR teams are the perfect fit for you. Here, we focus on data ingestion, backup, and unified search & export for our archiving, e-discovery and compliance customers across different data types, as well as delivering best-in-class user reporting … code. Experience with performance/scalability testing of backend systems and APIs. Experience testing applications that interact with PostgreSQL or similar databases, including writing queries for validation and verifying data integrity. Experience testing applications running in Kubernetes environments. Familiarity with using monitoring and observability tools like Grafana to support test analysis and validation. Experience troubleshooting and supporting customers with … offer of employment will be subject to your successful completion of applicable background checks, conducted in accordance with local law. About Us: We save companies the embarrassment of awkward data slip-ups by disrupting cybercriminal activity. We think fast, go big and always demand more. We work hard, deliver, and repeat. We grow with meaningful determination. And put success …
frameworks like LangChain, LangGraph, and the Google Agent Development Kit (ADK). Develop and Evaluate RAG Pipelines: Engineer and optimize end-to-end Retrieval-Augmented Generation (RAG) systems, including data ingestion, chunking strategies, and implementing rigorous pipeline evaluation frameworks for accuracy and performance. Fine-Tune & Optimize LLMs: Implement advanced model customization techniques, including PEFT (Parameter-Efficient Fine-Tuning) …
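The chunking strategies mentioned for RAG ingestion can be sketched with a common baseline: fixed-size sliding windows with overlap, so that a sentence split at a chunk boundary still appears whole in the neighbouring chunk. The sizes and the function itself are illustrative, not a prescription.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Fixed-size sliding-window chunking with overlap: a simple baseline
    strategy for preparing documents for RAG retrieval (illustrative)."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]


chunks = chunk_text("abcdefghij", chunk_size=4, overlap=1)
print(chunks)  # adjacent chunks share one character at the boundary
```

Production pipelines usually chunk on semantic boundaries (sentences, headings) rather than raw character offsets, and evaluate retrieval accuracy per chunking scheme.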
create strategic roadmaps for large enterprise initiatives. Must have experience in Legacy Modernization programs. Should be proficient at collaborating with cross-functional teams. Strong background and experience in Data Ingestion, Transformation, Modeling and Performance tuning. Should have experience in designing and developing dashboards. Strong knowledge in Hadoop, Kafka, SQL/NoSQL. Should have experience in creating roadmaps to …
london (city of london), south east england, united kingdom
HCLTech
really do and delivering rapid-fire prototypes that become mission-critical apps. What you'll tackle: Spot the pain-points across Equities, Credit, Rates and central Risk teams; translate data-flow headaches into buildable tech projects. Prototype at pace in Python, leveraging LLM co-pilots, to stand up analytics services, REST APIs and lightweight Dash/React front-ends. Own the full stack: data ingestion (SQL, files, feeds), business logic, web/UI, CI pipelines and Linux deployment. Codify best practice for safe, auditable LLM-assisted development; lead code reviews and knowledge-share sessions. Stay user-facing: whiteboard with quants, ask clarifying questions, iterate live and keep comms flowing. You in a nutshell: 5+ years building production-grade Python systems for data-heavy businesses; comfortable with tests, version control and optimisation. Confident SQL, exposure to REST and messaging (RabbitMQ, Kafka, etc.), and enough Linux to troubleshoot on the command line. UI chops in Dash, Flask or React to whip up proof-of-concept dashboards. Solid grounding in financial products & risk metrics: equities, bonds and …
design, test and deploy AI projects. Azure AI/ML Engineer, key responsibilities: Build, develop and deploy AI applications using Python. Design and develop AI services. Set up and develop data ingestion pipelines and components. Develop search-related components using Azure AI Search. Develop and deploy AI/ML models. Build and maintain scalable, high-performance AI apps on …
I am working with a client in the education sector who are looking for a data engineer with experience across architecture & strategy to join on a part-time 12-month contract. 1-2 days per week. Fully remote. Outside IR35. Immediate start. 12-month contract. Essential: Been to school in the UK. Data ingestion of APIs. GCP-based (Google Cloud Platform). Snowflake. …
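The "data ingestion of APIs" requirement above typically means paginating a REST endpoint until it is exhausted and landing the records downstream (e.g. into Snowflake). A minimal sketch of that loop follows; `fetch_page` is a hypothetical stand-in for a real HTTP call, so the whole thing runs without a network.

```python
def fetch_page(page):
    """Stand-in for an HTTP GET against a paginated API endpoint
    (hypothetical data; a real ingester would call the API here)."""
    data = {1: ["a", "b"], 2: ["c"], 3: []}
    return data.get(page, [])


def ingest_all():
    """Pull pages until an empty one signals the end: the common
    pagination pattern when ingesting API data into a warehouse."""
    records, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:  # empty page -> no more data
            break
        records.extend(batch)
        page += 1
    return records


print(ingest_all())  # ['a', 'b', 'c']
```

Real APIs usually signal the end with a `next` cursor or an empty result set; the loop structure is the same either way.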
built models for digital human, voice localization and best-of-breed image & video generation models in the industry. The current products include: An agentic AI platform with high-quality data ingestion and conversational interfaces. Tools for content creation, automation, and reuse via AI-powered content studio agents. Media asset management, video collaboration, and supply chain components designed for … high-performance Product team including PMs, POs, and UI/UX • Instil a performance culture with measurable success criteria (OKRs, adoption, monetisation, retention). • Foster close partnership with Engineering, Data, Commercial, and GTM leads. Cross-Functional and Executive Alignment • Act as a bridge between technical innovation and executive vision. Collaborate with the Chief Innovation Officer, CTO, Product Owners and … cross-functional collaboration skills. Preferred Qualifications: Background in AI-driven, synthetic media, or generative content technologies. Familiarity with product licensing models (SaaS, enterprise sales, usage-based pricing). Understanding of data provenance, consent management, and ethical AI practices. Experience working in a startup or growth-stage company that has scaled to maturity. Familiarity with localisation and global product considerations is a plus. …
Crime Enhancement Project focused on Sanctions and PEP screening. What you'll do: Administer and configure LexisNexis Bridger Insight for sanctions and PEP screening workflows. Run screening jobs, manage data ingestion, and generate reports within Bridger. Set up users, permissions, and workflows tailored to project requirements. Collaborate with internal teams and external consultants to backfill and transition responsibilities. … Strong understanding of Sanctions and PEP screening processes. Background in Financial Crime, AML, or Compliance projects. Ability to manage screening engines, workflows, and user configurations. Comfortable running jobs, handling data files, and producing reports specific to Bridger functionality. Next steps: We have a diverse workforce and an inclusive culture at M&G plc, underpinned by our policies and our …
as of 12 months ending December 2024 totaled $13.8 billion. Experience: Minimum 10+ years. Strong knowledge in Hadoop, Kafka, SQL/NoSQL. Specialization in designing and implementing large-scale data pipelines, ETL processes, and distributed systems. Should be able to work independently with minimal help/guidance. Good understanding of Airflow, Data Fusion and Dataflow. Strong … background and experience in Data Ingestion, Transformation, Modeling and Performance tuning. Migration experience from Cornerstone to GCP will be an added advantage. Support the design and development of the Big Data ecosystem. Experience in building complex SQL queries. Strong communication skills.
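The ETL-and-SQL experience described above boils down to transforming raw records into modeled aggregates with a query. A self-contained miniature using Python's built-in sqlite3 follows; the table and column names are invented for illustration, and a production pipeline would run the equivalent SQL on BigQuery or a similar engine.

```python
import sqlite3

# Miniature ETL step: load raw events, then transform them into a
# daily aggregate with a grouped query (schema is illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (day TEXT, user_id INTEGER, amount REAL);
    INSERT INTO events VALUES
        ('2024-01-01', 1, 10.0),
        ('2024-01-01', 2, 5.0),
        ('2024-01-02', 1, 7.5);
""")
rows = conn.execute("""
    SELECT day,
           COUNT(DISTINCT user_id) AS users,
           SUM(amount)             AS total
    FROM events
    GROUP BY day
    ORDER BY day
""").fetchall()
for day, users, total in rows:
    print(day, users, total)
# 2024-01-01 2 15.0
# 2024-01-02 1 7.5
```

The same shape of query (dedupe, aggregate, order) is what "complex SQL" work in these pipelines usually composes at much larger scale.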