balance cost, speed and data quality. Experimentation - set up offline metrics and online A/B tests; analyse uplift and iterate quickly. Production delivery - build scalable pipelines in AWS SageMaker (moving to Azure ML); containerise code and hook into CI/CD. Monitoring & tuning - track drift, response quality and spend; implement automated retraining triggers. Collaboration - work with Data Engineering … Data Factory) job. Review: inspect dashboards, compare control vs. treatment, plan next experiment. Tech stack: Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow); SQL (Redshift, Snowflake or similar); AWS SageMaker (migrating to Azure ML), with Docker, Git, Terraform, Airflow/ADF. Optional extras: Spark, Databricks, Kubernetes. What you'll bring: 3-5+ years building optimisation or recommendation systems at scale. More ❯
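The "track drift … implement automated retraining triggers" responsibility above is often implemented with a distribution-shift statistic such as the Population Stability Index. A minimal, framework-free sketch follows; the bucket count and the common PSI > 0.2 retraining threshold are conventions, not requirements from the listing:

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between a baseline (training-time) score
    distribution and a live one. PSI > 0.2 is a widely used rule of thumb
    for firing a retraining trigger."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets or 1.0  # guard against a degenerate range

    def frac(values, i):
        # fraction of values falling in bucket i, floored to avoid log(0)
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(1 for v in values
                if left <= v < right or (i == buckets - 1 and v == hi))
        return max(n / len(values), 1e-6)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(buckets))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
print(psi(baseline, baseline))  # identical distributions score 0.0
```

In production the `expected` histogram would be frozen at deployment time and compared against a rolling window of live scores.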
London, England, United Kingdom Hybrid / WFH Options
Endava
learning to extract actionable insights. This role requires strong expertise in Python-based AI/ML development, big data processing, and cloud-based AI platforms (Databricks, Azure ML, AWS SageMaker, GCP Vertex AI). Key Responsibilities Data Exploration & Feature Engineering Perform thorough Exploratory Data Analysis (EDA) and identify key variables, patterns, and anomalies. Engineer and select features for optimal … vision, NLP, and generative tasks using PyTorch, TensorFlow, or Transformer-based models. Model Deployment & MLOps Integrate CI/CD pipelines for ML models using platforms like MLflow, Kubeflow, or SageMaker Pipelines. Monitor model performance over time and manage retraining to mitigate drift. Business Insights & Decision Support Communicate analytical findings to key stakeholders in clear, actionable terms. Provide data-driven … responsible AI. Qualifications Technical Skills Programming: Python (NumPy, Pandas), R, SQL. ML/DL Frameworks: Scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers. Big Data & Cloud: Databricks, Azure ML, AWS SageMaker, GCP Vertex AI. Automation: MLflow, Kubeflow, Weights & Biases for experiment tracking and deployment. Architectural Competencies Awareness of data pipelines, infrastructure scaling, and cloud-native AI architectures. Alignment of ML More ❯
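The "monitor model performance over time and manage retraining to mitigate drift" duty above reduces to a rolling-window decision rule. The class and parameter names below are illustrative, not from MLflow, Kubeflow, or SageMaker Pipelines:

```python
from collections import deque

class RetrainTrigger:
    """Fire a retraining signal when the rolling mean of a live quality
    metric falls more than `tolerance` below the deployment-time baseline."""

    def __init__(self, baseline, window=5, tolerance=0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.window = deque(maxlen=window)

    def observe(self, metric):
        self.window.append(metric)
        # wait for a full window so one noisy point cannot fire the trigger
        if len(self.window) < self.window.maxlen:
            return False
        return sum(self.window) / len(self.window) < self.baseline - self.tolerance

trigger = RetrainTrigger(baseline=0.90, window=3, tolerance=0.05)
print([trigger.observe(m) for m in (0.91, 0.89, 0.88, 0.70, 0.70)])
# [False, False, False, True, True]: window (0.89, 0.88, 0.70) averages 0.823 < 0.85
```

A real deployment would emit the signal to a pipeline scheduler rather than return a boolean, but the thresholding logic is the same.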
role could be extended to a longer DevOps contract. What You'll Do - Design and build an end-to-end MLOps pipeline using AWS , with a strong focus on SageMaker for training, deployment, and hosting. - Integrate and operationalize MLflow for model versioning, experiment tracking, and reproducibility. - Architect and implement a feature store strategy for consistent, discoverable, and reusable features … across training and inference environments (e.g., using SageMaker Feature Store , Feast, or custom implementation). - Work closely with data scientists to formalize feature engineering workflows , ensuring traceability, scalability, and maintainability of features. - Develop unit, integration, and data validation tests for models and features to ensure stability and quality. - Establish model monitoring and alerting frameworks for real-time and batch … data teams to adopt new MLOps practices. What We're Looking For - 3+ years of experience in MLOps, DevOps, or ML infrastructure roles. - Deep familiarity with AWS services , especially SageMaker , S3, Lambda, CloudWatch, IAM, and optionally Glue or Athena. - Strong experience with MLflow , experiment tracking , and model versioning. - Proven experience setting up and managing a feature store , and driving More ❯
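The feature-store requirement above is ultimately about one guarantee: the identical transformation serves both training and inference. A toy in-memory sketch of that idea, with an invented feature name, is below; real systems (SageMaker Feature Store, Feast) add persistence, point-in-time correctness, and discovery on top:

```python
import math

class MiniFeatureStore:
    """Toy registry mapping a (name, version) pair to one transformation
    that is shared by the offline (training) and online (inference) paths,
    eliminating training/serving skew for that feature."""

    def __init__(self):
        self._transforms = {}

    def register(self, name, version, fn):
        self._transforms[(name, version)] = fn

    def compute(self, name, version, raw_row):
        # the single code path shared by training and serving
        return self._transforms[(name, version)](raw_row)

store = MiniFeatureStore()
# hypothetical feature: log-scaled basket value (name invented for the demo)
store.register("basket_value_log", 1,
               lambda row: round(math.log1p(row["basket_value"]), 4))

offline = store.compute("basket_value_log", 1, {"basket_value": 99.0})  # training
online = store.compute("basket_value_log", 1, {"basket_value": 99.0})   # inference
print(offline, offline == online)  # 4.6052 True
```

Versioning the key means a changed transformation becomes a new `(name, version)` entry instead of silently altering features already baked into a deployed model.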
unsupervised learning, deep learning, and natural language processing (NLP) Model development using frameworks such as TensorFlow, PyTorch, or scikit-learn Experience deploying AI models in production environments using MLOps principles (e.g., MLflow, Azure ML, SageMaker). Hands-on experience with automation and orchestration technologies, such as: Robotic Process Automation (RPA) platforms: UiPath, Blue Prism, Automation Anywhere IT process automation (ITPA) tools: ServiceNow Workflow/… Kafka, Azure Event Grid) Proficiency in cloud-native AI and automation services in one or more public cloud platforms: Azure (Cognitive Services, Synapse, Logic Apps, Azure OpenAI) AWS (SageMaker, Lambda, Textract, Step Functions) GCP (Vertex AI, AutoML, Cloud Functions) Agile/Scrum and DevOps for iterative development and deployment CI/CD pipeline integration for automation and ML More ❯
tools (e.g., Docker, Kubernetes). Proficiency in programming languages such as Python, experience with AI/ML frameworks (e.g., TensorFlow, PyTorch), and experience with MLOps frameworks/tools (e.g., SageMaker Pipelines, Azure ML Studio, Vertex AI, Kubeflow, MLflow, Seldon, Evidently AI). What we offer Culture of caring: At GlobalLogic, we prioritize a culture of caring. Across every region and department More ❯
Manchester, England, United Kingdom Hybrid / WFH Options
Fitch Group
classification, decision trees, support vector machines, and neural networks (deep learning experience strongly preferred) Knowledge of popular Cloud computing vendor (AWS and Azure) infrastructure & services e.g., AWS Bedrock, S3, SageMaker; Azure AI Search, OpenAI, blob storage, etc. Bachelor's degree (master's or higher strongly preferred) in machine learning, computer science, data science, applied mathematics or related technical field More ❯
practices. Key Responsibilities 1. Technical Leadership - Architect and deploy scalable ML models (e.g., dynamic pricing, demand forecasting, desirability scoring) using Python, PyTorch/TensorFlow, and cloud ML tools (AWS SageMaker, Databricks). - Define best practices for model governance, monitoring, and retraining in production. - Lead R&D into emerging techniques (e.g., graph neural networks for inventory routing, GenAI for buyer More ❯
London, England, United Kingdom Hybrid / WFH Options
causaLens
clusters. Good knowledge of DevOps tools and technologies, such as Helm, Docker, Terraform and CI/CD pipelines (GitHub Actions). Knowledge of MLOps, especially on cloud environments (Vertex AI, SageMaker, Synapse), is a huge plus. Strong knowledge of the software development lifecycle (code review, version control, tooling, testing, etc.). Understanding of the full stack would be ideal (REST More ❯
London, England, United Kingdom Hybrid / WFH Options
Fitch Group, Inc., Fitch Ratings, Inc., Fitch Solutions Group
classification, decision trees, support vector machines, and neural networks (deep learning experience strongly preferred) Knowledge of popular Cloud computing vendor (AWS and Azure) infrastructure & services e.g., AWS Bedrock, S3, SageMaker; Azure AI Search, OpenAI, blob storage, etc. Bachelor’s degree (master’s or higher strongly preferred) in machine learning, computer science, data science, applied mathematics or related technical field More ❯
a trajectory to success? Are you familiar with security best practices and compliance standards for AI applications? Do you want to be part of the team helping to establish Amazon Web Services as a leading technology AI platform? Would you like to be part of a team that genuinely focuses on understanding what's best for the customer and … also be valued within as a technical expert? Amazon Web Services is looking for a skilled and motivated Professional Services Senior Data Scientist to help accelerate our growing Data and AI business in the UK and work with our public sector customers. We need passionate, experienced consultants to help our citizens and the community benefit from the AI revolution. … ambiguity and mentoring junior teams. Expertise: Collaborate with field sales, pre-sales, training and support teams to help partners and customers learn and use AWS services such as Bedrock, Sagemaker, and other data services. Experience in architecture, engineering, software design and operations in hybrid environments as well as complex projects at scale. Solutions: Demonstrated consulting skills, ideally through previous More ❯
a trajectory to success? Are you familiar with security best practices and compliance standards for AI applications? Do you want to be part of the team helping to establish Amazon Web Services as a leading technology AI platform? Would you like to be part of a team that genuinely focuses on understanding what's best for the customer and … also be valued within as a technical expert? Amazon Web Services is looking for a skilled and motivated Professional Services Data Scientist to help accelerate our growing Data and AI business in the UK and work with our public sector customers. We need passionate, experienced consultants to help our citizens and the community benefit from the AI revolution. Candidates … customers deconstruct ambiguity. Responsibilities include: Expertise: Collaborate with field sales, pre-sales, training and support teams to help partners and customers learn and use AWS services such as Bedrock, Sagemaker, and other data services. Experience in architecture, software design and operations in hybrid environments as well as complex projects at scale. Solutions: Demonstrated consulting skills, ideally through previous roles More ❯
Ideal Candidate: 4+ years of experience in MLOps, DevOps, or software engineering roles. Strong programming skills in Python and familiarity with ML frameworks. Extensive experience with AWS services (e.g., SageMaker, ECS, Lambda) and cloud environments. Proficiency with containerization and orchestration tools (Docker, Kubernetes). Knowledge of data engineering concepts (e.g., ETL, data pipelines). Please note we can only More ❯
Ideal Candidate: 4+ years of experience in MLOps, DevOps, or software engineering roles. Strong programming skills in Python and familiarity with ML frameworks. Extensive experience with AWS services (e.g., SageMaker, ECS, Lambda) and cloud environments. Proficiency with containerization and orchestration tools (Docker, Kubernetes). Experience with version control systems and CI/CD pipelines. Knowledge of data engineering concepts More ❯
amount of growth and career opportunities. The ideal candidate must have: strong Senior Data Engineer experience with AWS; AWS data tooling such as S3/Glue/Redshift/SageMaker (or relevant experience in another cloud technology); strong experience in developing and automating scalable data pipelines in a finance-related data context. Must have a DataOps More ❯
strategies. Requirements : 4+ years of experience in MLOps, DevOps, or software engineering roles. Strong programming skills in Python and familiarity with ML frameworks. Extensive experience with AWS services (e.g., SageMaker, ECS, Lambda) and cloud environments. Proficiency with containerization and orchestration tools (Docker, Kubernetes). Experience with version control systems and CI/CD pipelines. Knowledge of data engineering concepts More ❯
London, England, United Kingdom Hybrid / WFH Options
Firemind
technical projects Experience delivering AI or ML solutions in client-facing or consulting settings Strong programming skills in Python and familiarity with AWS services such as S3, Lambda, and SageMaker Experience with Generative AI Models , including building, fine-tuning, and deploying generative models Deep understanding of Natural Language Processing (NLP) techniques like transformer models, attention mechanisms, tokenization, and embeddings More ❯
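The NLP requirements above name attention mechanisms explicitly; the core computation is compact enough to sketch in NumPy. This is a minimal illustration of scaled dot-product attention, softmax(QKᵀ/√d_k)·V, not production Transformer code:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Minimal NumPy sketch of the attention mechanism named above.
    Shapes: q, k are (seq, d_k); v is (seq, d_v)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # numerically stable softmax over each row of scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

q = np.array([[1.0, 0.0], [0.0, 1.0]])
v = np.array([[1.0], [2.0]])
out, w = scaled_dot_product_attention(q, q, v)
print(w.sum(axis=-1))  # each row of attention weights sums to 1.0
```

Transformer models stack this operation across multiple heads and layers, with learned projections producing Q, K, and V from token embeddings.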
London, England, United Kingdom Hybrid / WFH Options
Tripledot Studios
non-technical stakeholders, effectively bridging the gap between technical and non-technical teams. Good command of analytical programming and visualization libraries (e.g., R, Matplotlib, ggplot) and supporting tools (e.g., Sagemaker, VS Code). Job Benefits 25 days paid holiday in addition to bank holidays to relax and refresh throughout the year. Hybrid Working: We work in the office More ❯
agent systems, reinforcement learning) and their applications in healthcare. Previous experience training (fine-tuning) large language models; hands-on experience with DeepSpeed. Extensive experience with AWS services (e.g., SageMaker, Bedrock, MSK, EKS, OpenSearch). Proven record of shipping production-level code with best software engineering practice. Experience with containerization technologies, CI/CD, front and backend of web More ❯
access. Experience building underlying data pipelines and ETL, particularly useful if done using AWS or tools such as dbt. Experience working in an AWS data stack (AWS Glue, S3, SageMaker, Bedrock, etc.). Application Requirements When applying for a position with Quotient Sciences to be able to work in our organisation you must be aged 18 years or over More ❯
Senior Consultant - AI/ML and Generative AI, Professional Services GCC The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business … junior team members, and guiding them on creating end to end AI solutions PREFERRED QUALIFICATIONS AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation) AWS Professional level certifications (e.g., Machine Learning Speciality, Machine Learning Engineer Associate, Solutions Architect Professional) preferred Experience with automation and scripting (e.g., Terraform … continuous training, small language model development, and implementation of Agentic AI systems Experience in creating strategy, roadmap, developing and deploying end-to-end machine learning and deep learning solutions Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity More ❯
/hr W2 Responsibilities: Develop, optimize, and maintain data ingestion flows using Apache Kafka, Apache NiFi, and MySQL/PostgreSQL. Develop within AWS cloud services such as Redshift, SageMaker, API Gateway, QuickSight, and Athena. Coordinate with data owners to ensure proper configuration. Document SOPs related to streaming, batch configuration, or API management. Record details of data ingestion activities for … organizational levels. Analytical, organizational, and problem-solving skills. Experience with data observability tools like Grafana, Splunk, AWS CloudWatch, Kibana, etc. Knowledge of container technologies such as Docker, Kubernetes, and Amazon EKS. Education Requirements: Bachelor’s Degree in Computer Science, Engineering, or related field, or at least 8 years of equivalent work experience. 8+ years of IT data/system More ❯
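Ingestion flows like the one described above (Kafka/NiFi into relational stores) typically gate each record through schema validation before it is written downstream. The broker-free sketch below shows that per-record step only; the schema and field names are invented for the example:

```python
def validate_record(record, schema):
    """Check that a record has every required field with the expected type,
    returning a list of human-readable errors (empty means the record passes)."""
    errors = []
    for field, expected_type in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    return errors

# hypothetical event schema for the demonstration
schema = {"event_id": str, "ts": int, "payload": dict}

good = {"event_id": "e1", "ts": 1700000000, "payload": {}}
bad = {"event_id": 42, "payload": {}}

print(validate_record(good, schema))  # []
print(validate_record(bad, schema))   # bad type for event_id, missing ts
```

In a real consumer loop, failing records would be routed to a dead-letter topic or queue with their error list attached, rather than dropped silently.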