experience with Generative AI and exposure to leading LLM platforms (Anthropic, Meta, Amazon, OpenAI). Proficiency with essential data science libraries including Pandas, NumPy, scikit-learn, Plotly/Matplotlib, and Jupyter Notebooks. Knowledge of ML-adjacent technologies, including AWS SageMaker and Apache Airflow. Strong skills in data preprocessing, wrangling, and augmentation techniques. Experience deploying scalable AI solutions on cloud platforms (AWS …
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
to strong results. Experienced in engaging with non-technical partners to scope, design, and build an appropriate ML solution. Proficient with the Python data science stack, e.g., pandas, scikit-learn, Jupyter, etc., and version control, e.g., Git. Knowledge of OO programming, software design (i.e., SOLID principles), and testing practices. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with …
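For readers less familiar with the "Python data science stack" named in listings like the one above, here is a minimal, hypothetical pandas/scikit-learn sketch: load a table, split it, fit a baseline classifier, and report a metric. The CSV path and column names are assumptions for illustration only, not anything from the posting.

```python
# Minimal sketch of a pandas / scikit-learn workflow.
# The file name and column names below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("claims.csv")            # hypothetical dataset
X = df.drop(columns=["target"])           # hypothetical feature columns
y = df["target"]                          # hypothetical label column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```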
Reigate, Surrey, England, United Kingdom Hybrid / WFH Options
esure Group
to strong results. Experienced in engaging with non-technical stakeholders to scope, design, and build an appropriate ML solution. Proficient with the Python data science stack, e.g., pandas, scikit-learn, Jupyter, etc., and version control, e.g., Git. Exposure to LLMOps, model monitoring principles, CI/CD and associated tech, e.g., Docker, MLflow, k8s, FastAPI, etc. Knowledge of LangChain, Hugging Face, and LLM …
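As one concrete illustration of the MLflow side of the LLMOps tooling listed above, the hedged sketch below logs parameters and a metric for a run. The experiment, parameter, and metric names are assumptions, not part of the role description.

```python
# Minimal sketch of MLflow experiment tracking.
# Names and values are illustrative only.
import mlflow

mlflow.set_experiment("demo-experiment")       # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("model_name", "distilbert-base-uncased")
    mlflow.log_param("learning_rate", 5e-5)
    mlflow.log_metric("val_accuracy", 0.91)    # placeholder metric value
```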
knowledge of React (not a frontend role, but an understanding of the stack is important). Hands-on experience with containerisation (Docker) and cloud deployment (Terraform, microservices, Azure). Exposure to Jupyter notebooks, and an understanding of how machine learning models are developed and deployed. Experience in fast-paced or start-up environments where you’ve contributed across the stack. Background & Education: Degree …
Boot, NiFi, AWS, Python, Scala, shell scripting, and XML processing. Experience in AWS solution architecture. Experience in Elasticsearch, Vue.js, Java Spring Framework, and .NET Framework web services. Vue 3, TypeScript, Jupyter Notebook, Scala, Databricks, Node.js, AngularJS, HTML, CSS, JavaScript, Bootstrap, D3, AJAX, Visual Studio; other skills include RubyGems, Python, Red Hat Linux. Experience supporting users in bulk data processing with …
with 10 years, bachelor's with 8 years, master's with 6 years, or PhD with 4 years. Proficiency in data science languages and tools (e.g., Python, R, SQL, Jupyter, Pandas, Scikit-learn). Experience with machine learning frameworks (e.g., TensorFlow, PyTorch) and big data platforms (e.g., Spark, Hadoop). Strong background in statistics, data modeling, and algorithm development. Ability to explain …
learning models. Build AI systems using Large Language Models. Build processes for extracting, cleaning, and transforming data (SQL/Python). Ad-hoc data mining for insights using Python and Jupyter notebooks. Present insights and predictions in live dashboards using Tableau/Power BI. Lead the presentation of findings to clients through written documentation, calls, and presentations. Actively seek out new opportunities …
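The extract/clean/transform work described above typically looks something like the self-contained sketch below, which uses an in-memory SQLite database, SQL for extraction, and pandas for cleaning and aggregation. The table and column names are hypothetical.

```python
# Minimal extract/clean/transform sketch with SQL and pandas.
# Uses an in-memory SQLite database so the example runs standalone.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", None), ("north", 80.0)],
)

# Extract with SQL, then clean and aggregate with pandas.
df = pd.read_sql_query("SELECT region, amount FROM sales", conn)
df["amount"] = df["amount"].fillna(0.0)
summary = df.groupby("region", as_index=False)["amount"].sum()
print(summary)
```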
e.g., Postgres, Oracle, SQLite, and/or ArcSDE; Web: e.g., APIs, GeoJSON, REST. Demonstrated experience with two or more cloud-based technologies, e.g., Docker containers, JupyterHub, Zeppelin, Apache Spark, CentOS, Jenkins, Git. Domain knowledge/IC: background in intelligence, defense, international relations, or public administration multi-disciplines. Demonstrated familiarity with the US Intelligence Community …
language processing solutions. TECHNICAL SKILLS (AZURE IS A MUST-HAVE): Methodologies: Spiral, Agile, Waterfall, Lean. Programming Languages: C++, Python, Shell, SQL, R. IDE Tools: PyCharm, RStudio, Visual Studio Code, Jupyter Notebook, Google Colab, Navicat. ML Frameworks: Transformers, Scikit-Learn, Keras, TensorFlow, PyTorch, ONNX, NLTK, OpenAI, LangChain, LlamaIndex, Kore.ai. DL Architectures: LLM, ANN, CNN, R-CNN, RNN, GRU, LSTM, Transformers …
enterprise data using connectors integrated with AWS Bedrock's Knowledge Base/Elastic. Implement solutions leveraging MCP (Model Context Protocol) and A2A (Agent-to-Agent) communication. Build and maintain Jupyter-based notebooks using platforms like SageMaker and MLflow/Kubeflow on Kubernetes (EKS). Collaborate with cross-functional teams of UI and microservice engineers, designers, and data engineers to build …
like Apache Airflow, NiFi, and Kafka. Strong analytical and problem-solving skills. Excellent communication and teamwork abilities. Eagerness to learn and grow in a fast-paced environment. Experience in Jupyter Notebooks and PostgreSQL. Experience with version control systems (e.g., Git). Desired Qualifications: Knowledge of data lake technologies and big data tools (e.g., Spark). Familiarity with containerization tools like Docker …
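For context on the orchestration tooling mentioned above, here is a minimal sketch of an Apache Airflow DAG definition, assuming Airflow 2.4 or later. The DAG id, schedule, and task callable are placeholders.

```python
# Minimal sketch of an Apache Airflow DAG (Airflow 2.4+ style).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_data():
    print("extracting data...")   # placeholder for a real extraction step


with DAG(
    dag_id="example_pipeline",     # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_data)
```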
taskings, or collection/processing workflows. Prior work with Sponsor data systems, applications, or database structures. Familiarity with Sponsor data handling procedures and clearance environments. Hands-on experience with Jupyter Notebooks, Visio, SharePoint, Confluence, or other visualization/documentation tools. Experience working with APIs, including integration and data extraction. Knowledge of version control systems such as GitHub. Proficiency in R …
Washington, Washington DC, United States Hybrid / WFH Options
Gridiron IT Solutions
and the intel cycle. Familiarity and experience with the Department of Homeland Security (DHS). Ability, openness, and eagerness to learn. Skills: Data Analytics, Data Science, Data Visualization, GitLab, Jupyter Notebooks, Python and RStudio. Clearance: Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information. Compensation and Benefits …
Chat or Chatbots, particularly voice, is a must-have. Strong recent hands-on capabilities in Python are a must-have. Experience with a microservice environment using tools such as Jupyter, Pandas, NumPy, FastAPI, and SQLAlchemy is a must. Strong understanding of deploying and supporting LLMs in a production environment is a must-have. An interest and …
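The FastAPI + SQLAlchemy microservice pattern referenced above can look like the self-contained sketch below, assuming SQLAlchemy 2.x and an in-memory SQLite engine. The table, model, and endpoint names are hypothetical.

```python
# Minimal sketch of a FastAPI microservice backed by SQLAlchemy 2.x.
from fastapi import FastAPI
from sqlalchemy import Integer, String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class Item(Base):
    __tablename__ = "items"            # hypothetical table
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String)


engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

app = FastAPI()


@app.get("/items")
def list_items() -> list[dict]:
    # Query the table and return plain dicts for the JSON response.
    with Session(engine) as session:
        return [{"id": i.id, "name": i.name} for i in session.scalars(select(Item))]
```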
Science development experience using Python. • Proficiency in data science, machine learning, and analytics, including statistical data analysis, model and feature evaluations. • Strong proficiency in NumPy and pandas. • Demonstrated skills with Jupyter Notebook or comparable environments. • Practical experience in solving complex problems in an applied environment, and proficiency in critical thinking. • Candidates require a TS to start. TS/SCI with Polygraph …
London, South East, England, United Kingdom Hybrid / WFH Options
Salt Search
into scalable solutions. Essential Experience & Skills: 5+ years' engineering experience, with at least 3 years in ML Ops, Data Engineering, or AI infrastructure. Strong Python engineering skills (Pandas, NumPy, Jupyter, FastAPI, SQLAlchemy). Expertise in AWS services (certification desirable). Proven experience deploying and supporting LLMs in production. Strong understanding of LLM fine-tuning (PyTorch, TensorFlow, Hugging Face Trainer, etc.) …
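The fine-tuning work mentioned above ultimately reduces to a training loop like the PyTorch sketch below. The tiny model and random tensors stand in for a real LLM and dataset and are purely illustrative.

```python
# Minimal sketch of a PyTorch training loop, the pattern underlying
# fine-tuning workflows. Model and data are placeholders, not a real LLM.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(64, 16)                  # placeholder features
y = torch.randint(0, 2, (64,))           # placeholder labels

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```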
requirements: Bachelor's Degree. Part time or full time: Full Time. Flexibility option: High Co-Location (3-4 days on-site). Skills: Data Analytics, Data Science, Data Visualization, GitLab, Jupyter Notebooks, Python and RStudio …
cloud platforms (AWS, GCP, Azure). Proficiency in containerization and orchestration (Docker, Kubernetes). Expertise in managing and optimizing cloud-based systems at scale. Preferred Skills: Familiarity with Python (Jupyter) and ML frameworks (e.g., PyTorch). Knowledge of monitoring tools (Prometheus, Grafana). Experience with cloud-based databases (RDS, Aurora, Redshift, Cloud SQL, etc.) and visualization tools (QuickSight, Superset).
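As a rough illustration of the monitoring tooling named above, the sketch below exposes application metrics for Prometheus (which Grafana can then chart) using the prometheus_client library. The metric names, port, and simulated workload are assumptions.

```python
# Minimal sketch of exposing Prometheus metrics from a Python service.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

if __name__ == "__main__":
    start_http_server(8000)          # metrics served at :8000/metrics
    while True:
        with LATENCY.time():
            time.sleep(random.uniform(0.01, 0.1))   # simulate work
        REQUESTS.inc()
```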
OGA Systems on the Unclassified side, Internal FBI Systems, etc.). HTML, Vue Framework, JavaScript, Java, and .NET. Apache Solr. .NET, C#, JavaScript, Java, Perl, .NET, C#, Vue 3, TypeScript, Python, Jupyter Notebook, Scala, Databricks. Experience in Elasticsearch, Vue.js, Java Spring Framework, and .NET Framework web services. Oracle, API Architecture (SOAP/REST), Git, REST APIs using technologies such as ASP.NET Core …
other tasks typically associated with the Software Development Lifecycle. What you'll bring to the role and MHR: Advanced Python experience, e.g., Poetry, FastAPI, LlamaIndex, OpenAI, Pydantic, Dependency Injector, Jupyter Notebooks, Pandas, NumPy, Scikit-learn, SciPy, Plotly. Experience with Azure AI Studio. Vector databases, e.g., Qdrant, ChromaDB. Databricks. LLMs. Prompt Engineering for LLMs. Experience of testing and deployment of data …
dissemination. Demonstrated experience working with Sponsor business or mission data, Sponsor applications, or Sponsor database structures. Experience with the Sponsor's data handling procedures. Demonstrated experience with visualization tools, such as Jupyter, Visio, SharePoint, or Confluence. Proficiency with APIs. Demonstrated experience using GitHub or similar version control systems. Experience using Python or R. Able to utilize critical thinking and analytic judgements and …