with data privacy regulations. Technical Competencies: This is a hands-on technical leadership role requiring advanced experience in most of the following technologies. Cloud Platforms: AWS (Amazon Web Services): knowledge of services such as S3, EC2, Lambda, RDS, Redshift, EMR, SageMaker, Glue, and Kinesis. Azure: proficiency in services such as Azure Blob Storage, Azure Data Lake, VMs … Kafka, and Apache Flink. ETL Tools: AWS Glue, Azure Data Factory, Talend, and Apache Airflow. AI & Machine Learning: Frameworks: TensorFlow, PyTorch, scikit-learn, Keras, and MXNet. AI Services: AWS SageMaker, Azure Machine Learning, Google AI Platform. DevOps & Infrastructure as Code: Containerization: Docker and Kubernetes. Infrastructure Automation: Terraform, Ansible, and AWS CloudFormation. API & Microservices: API Development: RESTful API design and … Formation, Azure Purview. Data Security Tools: AWS Key Management Service (KMS), Azure Key Vault. Data Analytics & BI: Visualization Tools: Tableau, Power BI, Looker, and Grafana. Analytics Services: AWS Athena, Amazon QuickSight, Azure Stream Analytics. Development & Collaboration Tools: Version Control: Git (and platforms such as GitHub, GitLab). CI/CD Tools: Jenkins, Travis CI, AWS CodePipeline, Azure DevOps. Other Key …
London, England, United Kingdom Hybrid / WFH Options
BBC
and maintaining tools that support data science and MLOps/LLMOps workflows. Collaborate with Data Scientists to deploy, serve, and monitor LLMs in real-time and batch environments using Amazon SageMaker and Bedrock. Implement Infrastructure-as-Code with AWS CDK or CloudFormation to provision and manage cloud environments. Build and maintain CI/CD pipelines using GitHub Actions, AWS CodePipeline …/MLOps experience with a strong focus on building and delivering scalable infrastructure for ML and AI applications using Python and cloud-native technologies. Experience with cloud services, especially Amazon Web Services (AWS) - SageMaker, Bedrock, S3, EC2, Lambda, IAM, VPC, ECS/EKS. Proficiency in Infrastructure-as-Code using AWS CDK or CloudFormation. Experience implementing and scaling MLOps … workflows with tools such as MLflow, SageMaker Pipelines. Proven experience building, containerising, and deploying using Docker and Kubernetes. Hands-on experience with CI/CD tools (GitHub Actions, CodePipeline, Jenkins) and version control using Git/GitHub. Strong understanding of DevOps concepts including blue/green deployments, canary releases, rollback strategies, and infrastructure automation. Familiarity with security and compliance …
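A minimal sketch of the kind of real-time LLM serving described above, calling a model hosted on Amazon Bedrock with boto3; the model ID, prompt, and region are illustrative placeholders rather than details from the listing, and credentials are assumed to be configured.

```python
import json
import boto3

# Bedrock runtime client; region and credentials are assumed to be configured.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")

def ask_llm(prompt: str,
            model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    """Send a single prompt to a Bedrock-hosted model and return the text reply."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

if __name__ == "__main__":
    print(ask_llm("Summarise the main duties of an MLOps engineer in one sentence."))
```

In a production setting this call would sit behind the monitoring, CI/CD, and Infrastructure-as-Code practices the listing describes.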
or Databricks. Machine Learning Mastery: Proficiency in TensorFlow, PyTorch, and scikit-learn for designing, training, and deploying ML models. Cloud Expertise: Hands-on experience with Azure Machine Learning, AWS SageMaker, or GCP Vertex AI for scalable AI deployments. Data Visualization: Proficiency in tools like Tableau, Power BI, and Python-based libraries such as Matplotlib, Seaborn, and Plotly. Statistics and … and maintain models in production environments using MLOps pipelines with tools like MLflow or Kubeflow. Cloud Deployment: Implement AI workflows on cloud platforms such as Azure Machine Learning, AWS SageMaker, or GCP Vertex AI with scalability and performance in mind. Research and Innovation: Stay updated on advancements in AI/ML technologies and incorporate state-of-the-art practices …
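As an illustration of the MLOps pipeline work referenced above, a small sketch of tracking a scikit-learn model with MLflow; the experiment name, dataset, and hyperparameters are arbitrary examples, not the employer's setup.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-iris")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Log hyperparameters, metrics and the fitted model so runs are reproducible.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```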
Milvus) and embedding technologies Expert in building RAG (Retrieval-Augmented Generation) systems at scale Strong experience with MLOps practices and model deployment pipelines Proficient in cloud AI services (AWS SageMaker/Bedrock) Deep understanding of distributed systems and microservices architecture Expert in data pipeline platforms (Apache Kafka, Airflow, Spark) Proficient in both SQL (PostgreSQL, MySQL) and NoSQL (Elasticsearch, MongoDB …
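A toy sketch of the retrieval step in a RAG system like the one described, with hand-made vectors standing in for real embeddings that would normally come from an embedding model and live in a vector store such as Milvus or OpenSearch.

```python
import numpy as np

# Toy corpus with pre-computed "embeddings" (in practice these come from an
# embedding model and are stored in a vector database).
documents = ["refund policy", "shipping times", "account deletion"]
doc_vectors = np.array([[0.9, 0.1, 0.0],
                        [0.1, 0.8, 0.2],
                        [0.0, 0.2, 0.9]])

def retrieve(query_vector: np.ndarray, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    sims = doc_vectors @ query_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
    )
    top = np.argsort(sims)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages would then be added to the LLM prompt as context.
print(retrieve(np.array([0.85, 0.15, 0.05])))
```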
developing and deploying data-driven solutions, with a particular focus on OCR use-cases and LLM applications within AWS environments. Key Responsibilities: - AWS Data Science Tools: Hands-on with SageMaker, Lambda, Step Functions, S3, Athena. - OCR Development: Experience with Amazon Textract, Tesseract, and LLM-based OCR. - Python Expertise: Skilled in Pandas, NumPy, scikit-learn, PyTorch, Hugging Face Transformers … GDPR), PII handling. - Agile Working: Experience in Agile/Scrum teams (Jira, Azure DevOps). Essential Skills & Experience: - 5-7 years in a Data Science role - Strong experience with Amazon Bedrock and SageMaker - Python integration with APIs (e.g., ChatGPT) - Demonstrable experience with LLMs in AWS - Proven delivery of OCR and document parsing pipelines.
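A minimal sketch of the OCR work described, calling Amazon Textract's synchronous text-detection API through boto3; the file name is a hypothetical placeholder and AWS credentials/region are assumed to be configured.

```python
import boto3

textract = boto3.client("textract")  # region/credentials assumed configured

def extract_lines(path: str) -> list[str]:
    """Run synchronous OCR on a local image and return the detected text lines."""
    with open(path, "rb") as f:
        response = textract.detect_document_text(Document={"Bytes": f.read()})
    return [block["Text"]
            for block in response["Blocks"]
            if block["BlockType"] == "LINE"]

for line in extract_lines("invoice.png"):  # hypothetical file
    print(line)
```

The extracted lines could then be passed to an LLM-based parsing step, as the listing's "LLM-based OCR" item suggests.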
interested in building data and science solutions to drive strategic direction? Based in Tokyo, the Science and Data Technologies team designs, builds, operates, and scales the data infrastructure powering Amazon's retail business in Japan. Working with a diverse, global team serving customers and partners worldwide, you can make a significant impact while continuously learning and experimenting with cutting … working with large-scale data, excels in highly complex technical environments, and above all, has a passion for data. You will lead the development of data solutions to optimize Amazon's retail operations in Japan, turning business needs into robust data pipelines and architecture. Leveraging your deep experience in data infrastructure and passion for enabling data-driven business impact … and business teams to identify and implement strategic data opportunities. Key job responsibilities include: - Create data solutions with AWS services such as Redshift, S3, EMR, Lambda, SageMaker, CloudWatch, etc. - Implement robust data solutions and scalable data architectures. - Develop and improve operational excellence, data quality, monitoring, and data governance. BASIC QUALIFICATIONS - Bachelor's degree in computer …
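As a sketch of querying S3-backed data with Athena, one of the AWS services listed above, using boto3; the database, table, and results bucket are hypothetical placeholders.

```python
import time
import boto3

athena = boto3.client("athena")  # credentials/region assumed configured

def run_query(sql: str, database: str, output_s3: str) -> list[dict]:
    """Submit an Athena query, poll until it finishes, and return the result rows."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(
            QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

# Database, table and bucket below are hypothetical placeholders.
rows = run_query("SELECT marketplace, COUNT(*) FROM orders GROUP BY 1",
                 database="retail_jp", output_s3="s3://my-athena-results/")
```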
for data insights. Databricks/Data QI. SQL for data access and processing (PostgreSQL preferred, but general SQL knowledge is important). Latest data science platforms (e.g., Databricks, Dataiku, AzureML, SageMaker) and frameworks (e.g., TensorFlow, MXNet, scikit-learn). Software engineering practices (coding standards, unit testing, version control, code review). Hadoop distributions (Cloudera, Hortonworks), NoSQL databases (Neo4j, Elastic), streaming technologies (Spark …
ML models for tasks such as decision support, automation, anomaly detection, and pattern recognition. • Select and implement appropriate modeling techniques using Python, Spark, or cloud-native ML frameworks (e.g., SageMaker, MLflow). • Maintain reproducibility and interpretability of model outputs to meet mission transparency and audit requirements. • Package model inference services with well-documented APIs for integration into end-user …
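A small sketch of the anomaly-detection task mentioned above, using scikit-learn's IsolationForest on synthetic data; the data, contamination rate, and threshold are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" telemetry with a few injected outliers.
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
outliers = rng.normal(loc=6.0, scale=0.5, size=(5, 3))
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)

# predict() returns -1 for anomalies and 1 for inliers.
flags = model.predict(X)
print(f"{(flags == -1).sum()} records flagged as anomalous out of {len(X)}")
```

A model like this would typically be wrapped behind a documented inference API, as the listing describes.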
balance cost, speed and data quality. Experimentation - set up offline metrics and online A/B tests; analyse uplift and iterate quickly. Production delivery - build scalable pipelines in AWS SageMaker (moving to Azure ML); containerise code and hook into CI/CD. Monitoring & tuning - track drift, response quality and spend; implement automated retraining triggers. Collaboration - work with Data Engineering … Product and Ops teams to translate business constraints into mathematical formulations. Tech stack: Python (pandas, NumPy, scikit-learn, PyTorch/TensorFlow); SQL (Redshift, Snowflake or similar); AWS SageMaker → Azure ML migration, with Docker, Git, Terraform, Airflow/ADF; optional extras: Spark, Databricks, Kubernetes. What you'll bring: 3-5+ years building optimisation or recommendation systems at scale. Strong … know-how - LP/MIP, heuristics, constraint tuning, objective-function design. Python toolbox: pandas, NumPy, scikit-learn, PyTorch/TensorFlow; clean, tested code. Cloud ML: hands-on with AWS SageMaker plus exposure to Azure ML; Docker, Git, CI/CD, Terraform. SQL mastery for heavy-duty data wrangling and feature engineering. Experimentation chops - offline metrics, online A/B …
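As a toy example of the LP formulation work this role describes, a sketch using scipy.optimize.linprog; the profits, resource usage, and budgets are made-up numbers, not figures from the listing.

```python
from scipy.optimize import linprog

# Toy allocation: choose quantities x1, x2 to maximise profit 5*x1 + 4*x2
# subject to two resource budgets. linprog minimises, so negate the objective.
c = [-5.0, -4.0]
A_ub = [[2.0, 1.0],    # resource A usage per unit of each option
        [1.0, 3.0]]    # resource B usage per unit of each option
b_ub = [20.0, 30.0]    # available resource budgets
bounds = [(0, None), (0, None)]  # non-negative decision variables

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal quantities:", res.x, "profit:", -res.fun)
```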
role could be extended to a longer DevOps contract. What You'll Do - Design and build an end-to-end MLOps pipeline using AWS, with a strong focus on SageMaker for training, deployment, and hosting. - Integrate and operationalize MLflow for model versioning, experiment tracking, and reproducibility. - Architect and implement a feature store strategy for consistent, discoverable, and reusable features … across training and inference environments (e.g., using SageMaker Feature Store, Feast, or custom implementation). - Work closely with data scientists to formalize feature engineering workflows, ensuring traceability, scalability, and maintainability of features. - Develop unit, integration, and data validation tests for models and features to ensure stability and quality. - Establish model monitoring and alerting frameworks for real-time and batch … data teams to adopt new MLOps practices. What We're Looking For - 3+ years of experience in MLOps, DevOps, or ML infrastructure roles. - Deep familiarity with AWS services, especially SageMaker, S3, Lambda, CloudWatch, IAM, and optionally Glue or Athena. - Strong experience with MLflow, experiment tracking, and model versioning. - Proven experience setting up and managing a feature store, and driving …
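A minimal sketch of the kind of model-monitoring check described above, flagging feature drift with a two-sample Kolmogorov-Smirnov test; the threshold and synthetic data are illustrative assumptions, and the alerting hook (e.g. a CloudWatch alarm) is only named in a comment.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values: np.ndarray, live_values: np.ndarray,
                    alpha: float = 0.05) -> bool:
    """Flag drift when the live distribution differs significantly from training."""
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(1)
training = rng.normal(0.0, 1.0, 5000)     # distribution seen at training time
production = rng.normal(0.6, 1.0, 1000)   # shifted distribution observed live

if feature_drifted(training, production):
    print("Drift detected - trigger retraining / alert")  # e.g. CloudWatch alarm
```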
have hands-on experience in building and deploying Generative AI solutions using state-of-the-art techniques like Retrieval-Augmented Generation (RAG), LLM fine-tuning, and deployment via AWS SageMaker and Bedrock. You'll work with cross-functional teams to deliver intelligent systems that improve user experiences and business outcomes across enterprise platforms. Your future duties and responsibilities: Design, develop … pipelines by integrating vector databases (e.g., FAISS, Pinecone, OpenSearch). Perform LLM fine-tuning and prompt optimization for domain-specific use cases. Build and manage AI workflows on AWS SageMaker, Bedrock, and other cloud-native services. Develop clean, scalable, and reusable code using Python and modern ML frameworks (e.g. LangChain, Transformers). Collaborate with data engineers and MLOps teams … including retrieval strategies, embeddings, and semantic search. Hands-on experience in fine-tuning LLMs using Hugging Face, LoRA, or PEFT techniques. Strong familiarity with AWS AI/ML stack: SageMaker, Bedrock, Lambda, API Gateway, Step Functions. Knowledge of prompt engineering, embedding generation, and vector database integration. Exposure to deploying Gen AI models in containerized environments (Docker, EKS/ECS …
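A hedged sketch of LoRA-style fine-tuning with Hugging Face PEFT, as referenced above; the base model (gpt2), target modules, and adapter settings are placeholders chosen for brevity, not the project's actual configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # small stand-in model; a real project would use a larger LLM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters so only a small fraction of weights are trained.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused attention projection
    fan_in_fan_out=True,         # needed because GPT-2 uses Conv1D layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# Training would then proceed with a standard Trainer / PyTorch loop on
# domain-specific text, and the adapter weights saved with model.save_pretrained().
```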
/ML Architect or Lead AI Engineer Strong knowledge of AI/ML principles, model lifecycle, and MLOps best practices Hands-on experience with cloud services (Azure ML, AWS SageMaker, GCP Vertex AI) Proficient in Python and common ML libraries (TensorFlow, PyTorch, Scikit-learn) Familiarity with data engineering workflows and tools (Spark, Airflow, Databricks, etc.) Experience with GenAI, LLMs …
San Antonio, Texas, United States Hybrid / WFH Options
IAMUS
higher certification. Willingness to do a programming challenge during the interview process. Experience with Jupyter Notebooks and Python data science libraries. Experience with MLOps and related tools (e.g., MLflow, SageMaker, Bedrock) and ability to build interactive, insightful dashboards for monitoring ML models. Place of Performance: Hybrid work in San Antonio, TX. Desired Skills (Optional): Experience with NoSQL databases such …
top-tier performance. Collaborating with cross-functional teams (data scientists, software engineers, product managers) to turn business needs into technical solutions. Building and maintaining scalable ML pipelines using AWS (SageMaker, EC2, S3, Lambda, Rekognition). Staying up to date with the latest in ML and computer vision research and applying new techniques to real-world challenges. Contributing to the …
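A small sketch of the Rekognition-based pipeline step implied above, labelling an image stored in S3 via boto3; the bucket, key, and confidence threshold are hypothetical placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")  # credentials/region assumed configured

def label_image(bucket: str, key: str, min_confidence: float = 80.0) -> list[str]:
    """Return the label names Rekognition detects in an image stored in S3."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response["Labels"]]

# Bucket and key below are hypothetical placeholders.
print(label_image("my-cv-images", "frames/0001.jpg"))
```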
Reston, Virginia, United States Hybrid / WFH Options
CGI
the requests library. Exposure to LangChain or MCP. Experience with one or more of the following AI platforms: AWS SageMaker, Google Vertex AI, OpenAI API, Azure OpenAI, Amazon Bedrock, Cohere API, Databricks, Azure Machine Learning, Replicate, Hugging Face APIs. Education: Bachelor's degree in Computer Science, Software Engineering, or a related technical field. Other Information: CGI is required …
DESCRIPTION The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer … candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most … scientific field - Experience using Python and frameworks such as PyTorch and TensorFlow, and practical experience in solving complex problems in an applied environment - Experience with AWS services such as SageMaker, EMR, S3, DynamoDB and EC2, as well as experience with machine learning, deep learning, NLP, generative AI, distributed training, and model hosting. Amazon is an equal opportunity employer.
optimize, and maintain data ingest flows using Apache Kafka, Apache NiFi, and MySQL/PostgreSQL. Develop components within the AWS cloud platform using services such as Redshift, SageMaker, API Gateway, QuickSight, and Athena. Communicate with data owners to set up and ensure configuration parameters. Document SOPs related to streaming configuration, batch configuration, or API management depending on … and problem-solving skills. Experience in instituting data observability solutions using tools such as Grafana, Splunk, AWS CloudWatch, Kibana, etc. Experience in container technologies such as Docker, Kubernetes, and Amazon EKS. Qualifications: Ability to obtain an active Secret clearance or higher. Bachelor's degree in Computer Science, Engineering, or other technical discipline required, OR a minimum of 8 years equivalent …
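A minimal sketch of a Kafka-to-PostgreSQL ingest step like the one described, using kafka-python and psycopg2; the topic, table schema, and connection details are hypothetical placeholders, and per-message commits would normally be replaced by batching.

```python
import json

import psycopg2
from kafka import KafkaConsumer  # kafka-python

# Connection details below are hypothetical placeholders.
consumer = KafkaConsumer(
    "sensor-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)
conn = psycopg2.connect("dbname=ingest user=etl password=secret host=localhost")

with conn, conn.cursor() as cur:
    for message in consumer:
        event = message.value
        # Land each event in a staging table for downstream processing.
        cur.execute(
            "INSERT INTO events (device_id, ts, payload) VALUES (%s, %s, %s)",
            (event["device_id"], event["ts"], json.dumps(event)),
        )
        conn.commit()  # commit per message; batching would be used in practice
```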
amount of growth and career opportunities. The ideal candidate must have: Strong Senior Data Engineer with AWS experience. AWS data tooling such as S3/Glue/Redshift/SageMaker (or relevant experience in another cloud technology). Strong experience in developing and automating scalable data pipelines in a finance-related data context. Must have a … with a DataOps …
delivery • Proven track record deploying ML systems in production at scale (batch and/or real-time) • Strong technical background in Python and ML engineering tooling (e.g. MLflow, Airflow, SageMaker, Vertex AI, Databricks) • Understanding of infrastructure-as-code and CI/CD for ML systems (e.g. Terraform, GitHub Actions, ArgoCD) • Ability to lead delivery in agile environments, balancing scope …
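As an illustration of the scheduled ML workflows mentioned above (e.g. Airflow-driven retraining), a minimal DAG sketch; it assumes Airflow 2.4+ for the schedule parameter, and the DAG name, cadence, and retraining body are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    # Placeholder: pull fresh features, fit the model, log it to MLflow,
    # and push the artefact to the serving platform (SageMaker, Vertex AI, ...).
    print("retraining...")

with DAG(
    dag_id="weekly_model_retrain",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",              # requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="retrain", python_callable=retrain_model)
```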