Sponsor business or mission data, Sponsor applications, or Sponsor database structures. 3. Demonstrated experience with the Sponsor's data handling procedures. 4. Demonstrated experience with visualization tools, such as Jupyter, Visio, SharePoint, or Confluence. 5. Demonstrated experience with APIs. 6. Demonstrated experience using GitHub or similar version control systems. 7. Demonstrated experience using Python or R. 8. Bachelor's degree
Experience in developing analytic tools, processes, and governance for storing, modeling, capturing, and delivering data to the client's enterprise Experience with computational notebook software such as Zeppelin or Jupyter Experience with the application of visual analytics to computational analytic results Proficiency in one or more programming languages (e.g., Python, JavaScript, R) Experience with database query languages such as SQL Readiness
possess an active TS/SCI clearance. • Flexibility option: High Co-Location (3 days on-site, Washington, DC) • Skills: Data Analytics, Data Science, Data Visualization, AWS, Machine Learning, GitLab, Jupyter Notebooks, Python, and RStudio. Salary Range: $110,000-$155,000. Your final salary offer will be based on several factors, including depth of technical skills, work experience, education, certifications
software, libraries, and packages in a Linux environment Extensive software development experience with Python and Java Experience with Big Data streaming platforms including Spark Experience with deploying and managing Jupyter Notebook environments Experience with data parsing/transformation technologies including JSON, XML, CSV, and Parquet formats Experience with stream/batch Big Data processing and analytic frameworks Experience with CI
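The posting above asks for experience parsing and transforming JSON, XML, CSV, and Parquet. A minimal sketch of reading each format with Python's standard library and pandas; the file names are hypothetical placeholders, and the Parquet call assumes pyarrow (or fastparquet) is installed.

```python
# Hypothetical sketch of parsing common data formats; file paths are placeholders.
import json
import xml.etree.ElementTree as ET

import pandas as pd

with open("events.json") as f:
    records = json.load(f)                      # JSON -> Python dicts/lists
root = ET.parse("events.xml").getroot()         # XML -> element tree
csv_df = pd.read_csv("events.csv")              # CSV -> DataFrame
pq_df = pd.read_parquet("events.parquet")       # Parquet -> DataFrame (needs pyarrow)

# Simple transformation: flatten the JSON records into a tabular form
json_df = pd.json_normalize(records)
print(json_df.shape, csv_df.shape, pq_df.shape, root.tag)
```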
programming language like Python • Power BI, Qlik or other dashboarding software • Statistical programming (Python/R) and experience working with popular libraries and tools such as Pandas, SciPy, Seaborn, Jupyter/IPython notebooks, R Markdown. We will consider it an advantage if you also have familiarity with any of the following: • AWS • Microsoft Azure cloud computing • Familiarity with DataOps principles
statistics across datasets. Develop and enhance analytical methodologies to support emerging sensor technologies and novel assay types. Develop GUIs for data analysis and exploration. Experience with frameworks such as Plotly.js, Canvas.js, Jupyter Notebook, PySide6, NiceGUI, Dash, Kivy, or DearPyGUI is highly beneficial. Implement data models on Amazon Web Services (AWS) and contribute schemas and validation APIs to our database systems. Work cross
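Dash is one of the GUI frameworks named above; a minimal sketch of a small exploration GUI built with it, using a made-up DataFrame and column names (placeholders, not from the posting).

```python
# Minimal Dash app for interactive data exploration; the DataFrame and column
# names are hypothetical placeholders.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

df = pd.DataFrame({"assay": ["A", "B", "C", "D"], "signal": [1.2, 3.4, 2.1, 0.8]})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Assay signal overview"),
    dcc.Graph(figure=px.bar(df, x="assay", y="signal")),
])

if __name__ == "__main__":
    app.run(debug=True)  # app.run_server(debug=True) on older Dash releases
```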
Location: 3-4 days onsite in NW DC Clearance Requirement: Active TS/SCI Experience: 4-6 Education: Bachelor's Degree Required Skills: Data Analytics Data Science Data Visualization GitLab Jupyter Notebooks Python and RStudio Experience executing data science methods using Python libraries for Data Cleaning/Wrangling, Exploratory Data Analysis (EDA), Statistical Analysis, Data Visualization. Strong proficiency in programming
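A minimal sketch of the data cleaning/wrangling and EDA steps the skills list above refers to, using pandas and matplotlib; the CSV file and its columns are hypothetical.

```python
# Hypothetical cleaning + EDA sketch; the input file and columns are placeholders.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("observations.csv")

# Cleaning/wrangling: drop duplicates, fill missing numeric values with the median
df = df.drop_duplicates()
numeric_cols = df.select_dtypes("number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Exploratory data analysis: summary statistics, missingness, quick distributions
print(df.describe())
print(df.isna().sum())
df[numeric_cols].hist(figsize=(10, 6))
plt.tight_layout()
plt.show()
```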
Lexington, Massachusetts, United States Hybrid / WFH Options
Equiliem
React or similar frameworks. Provide guidance to less experienced front-end engineers. • General knowledge of machine learning and reinforcement learning concepts, frameworks, and environments, such as Pandas, TensorFlow, and Jupyter Notebook. • Broad knowledge of the general features, capabilities, and trade-offs of common data warehousing (e.g., Apache Hadoop); workflow orchestration (e.g., Apache Beam); data extract, transform, and load (ETL); and
data analysis. Strong technical skills regarding data analysis, statistics, and programming. Strong working knowledge of Python, Hadoop, SQL, and/or R. Working knowledge of Python data tools (e.g., Jupyter, Pandas, Scikit-Learn, Matplotlib). Ability to talk the language of statistics, finance, and economics a plus. Excellent command of the English language. In a changing world, diversity and inclusion
Knowledge of common data science software. We have a Python-first setup, but will, if needed, tackle problems using R, SAS, SQL, or Spark, and work with tools such as Jupyter Notebooks, IDEs, Git, and Microsoft Azure. Obviously, additional bonus points for PhDs and more specific knowledge such as Keras or D3.js. The desire to learn and grow A positive, collaborative
Finance, Collections, Operations, and other stakeholders. What you'll need Excellent SQL skills. A drive to solve problems using data. Proficiency with the Python data science stack (pandas, NumPy, Jupyter notebooks, Plotly/matplotlib, etc.). Bonus skills include: Familiarity with Git. Experience with data visualization tools (Tableau, Looker, Power BI, or equivalent). Knowledge of dbt. 2-5 years of
and evaluate highly innovative models for Natural Language Processing (NLP), Large Language Model (LLM), or Large Computer Vision projects. Use SQL to query and analyze the data. Use Python, Jupyter Notebook, and PyTorch to train/test/deploy ML models. Use machine learning and analytical techniques to create scalable solutions for business problems. Research and implement novel machine learning
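As an illustration of the train/test workflow mentioned above, here is a minimal PyTorch sketch on synthetic data; the model, data, and hyperparameters are placeholders rather than anything specified by the role.

```python
# Minimal train/evaluate sketch with PyTorch on synthetic data; the model,
# data, and hyperparameters are placeholders, not from the posting.
import torch
from torch import nn

X = torch.randn(512, 10)                       # synthetic features
y = (X.sum(dim=1, keepdim=True) > 0).float()   # synthetic binary labels

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):                        # training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():                          # simple evaluation pass
    preds = (torch.sigmoid(model(X)) > 0.5).float()
    acc = (preds == y).float().mean().item()
    print(f"final loss={loss.item():.4f}, accuracy={acc:.3f}")
```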
have: Experience with Python Experience with JavaFX Experience with JavaScript Experience with Bash Experience with Perl Experience with XML Experience with Eclipse Experience with Visual Studio Code Experience with Jupyter Notebook Experience with OSGi Experience with Google Guice Experience with Unit testing Experience with JUnit Experience with Automated Testing Experience with Squish Experience with Jenkins Experience with DevOps Experience
the-art research areas (e.g., NLP, Transfer Learning), and modern Deep Learning algorithms (e.g., BERT, LSTM) Solid knowledge of SQL and Python's ecosystem for data analysis (Jupyter, Pandas, scikit-learn, Matplotlib, etc.) Understanding of model evaluation and data pre-processing techniques, such as standardisation, normalisation, and handling missing data Solid understanding of summary, robust, and nonparametric statistics; hypothesis
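A short scikit-learn sketch of the pre-processing and model evaluation topics listed above (imputation for missing data, standardisation, and a held-out evaluation); the synthetic data and the choice of logistic regression are assumptions made purely for illustration.

```python
# Sketch of pre-processing (imputation + standardisation) and model evaluation
# with scikit-learn; the synthetic data and model choice are placeholders.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)  # synthetic labels
X[rng.random(X.shape) < 0.05] = np.nan                      # inject missing values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    SimpleImputer(strategy="median"),   # handle missing data
    StandardScaler(),                   # standardisation
    LogisticRegression(),
)
model.fit(X_train, y_train)

# Model evaluation on the held-out split
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print(f"accuracy={accuracy_score(y_test, pred):.3f}, auc={roc_auc_score(y_test, proba):.3f}")
```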
preprocessing, model development and evaluation, and software integration Experience in software engineering in industry Experience bringing machine learning-based products from research to production Experience with scientific communication tools (Jupyter, Matplotlib) Research and software engineering experience demonstrated via an internship, contributions to open source, work experience, or coding competitions About Meta: Meta builds technologies that help people connect, find communities
and the Hadoop Ecosystem Edge technologies, e.g. NGINX, HAProxy Excellent knowledge of YAML or similar languages The following Technical Skills & Experience would be desirable for a Data DevOps Engineer: JupyterHub awareness MinIO or similar S3 storage technology Trino/Presto RabbitMQ or other common queue technology, e.g. ActiveMQ NiFi Rego Familiarity with code development, shell-scripting in Python, Bash
Colorado Springs, Colorado, United States Hybrid / WFH Options
Lockheed Martin
Strong background in math and/or algorithm development • Strong Python programming experience • Experience with common software development and management tools such as Git, Nexus, JIRA, Confluence, VSCode, and Jupyter • DevSecOps experience including CI/CD pipelines to build, test, and deploy • Effective oral and written communication skills Desired Skills: Additional skills in the following areas are desired
and other Qualtrics products Acquire data from customers (usually SFTP or cloud storage APIs) Validate data with exceptional detail orientation (including audio data) Perform data transformations (using Python and Jupyter Notebooks) Load the data via APIs or pre-built Discover connectors Advise our Sales Engineers and customers as needed on the data, integrations, architecture, best practices, etc. Build new AWS
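A hedged sketch of the transform-and-load step described above; the endpoint, auth token, and field names are invented placeholders and do not represent the actual Discover connector API.

```python
# Hypothetical transform-and-load sketch; the endpoint, token, and field names
# are placeholders, not the real Discover connector API.
import pandas as pd
import requests

df = pd.read_csv("customer_export.csv")                 # data acquired from the customer

# Transformations: normalise column names, parse timestamps, drop empty records
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
df = df.dropna(subset=["created_at", "feedback_text"])
df["created_at"] = df["created_at"].dt.strftime("%Y-%m-%dT%H:%M:%S")  # JSON-friendly timestamps

# Load via an API, sending the records as JSON
records = df.to_dict(orient="records")
resp = requests.post(
    "https://example.invalid/api/v1/ingest",            # placeholder endpoint
    headers={"Authorization": "Bearer <token>"},        # placeholder credential
    json={"records": records},
    timeout=30,
)
resp.raise_for_status()
```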
this role if you are: Proficient in object-oriented programming with Java and Python (C++ is a plus) Experienced in developing and implementing analytic solutions Skilled in working with Jupyter Notebooks and Query Time Analytics Capable of creatively leveraging disparate datasets for intelligence and decision-making Comfortable working closely with customers to refine and enhance analytics A day in the … insights from various data sources Enriching data with corporate tools and repositories to improve decision-making processes Collaborating with team members and customers to enhance analytic capabilities Working within Jupyter Notebooks to test and refine queries and models Leveraging programming expertise to develop efficient and scalable analytics Must haves: TS/SCI clearance with polygraph Bachelor's degree in Computer … degree plus 10 years of experience, or High school diploma/GED plus 12 years of relevant experience Proficiency in Java and Python for analytics development Experience working with Jupyter Notebooks and MapReduce Strong analytical and problem-solving skills in a mission-driven environment Nice to have: Familiarity with customer corporate tools and data repositories Experience working in small, agile
Good skills with Python & Jupyter, as well as HTML, to help maintain the previous web page build. Evaluate and help enhance content analytics for machine translation systems using modern methods. Evaluate Natural Language Processing software to see which works best on customer datasets. Play an integral role in the development of an MLOps framework for cyber incident detection. This is
Experience using the Linux CLI, Bash/Python scripting, creating Helm Charts for Kubernetes, developing services on Kubernetes, and creating containerized applications/services using Docker Preferred Experience with Jupyter Notebooks, GitLab CI/CD pipelines, updating containerized applications to address STE/STN requirements, developing with REST APIs and the Atlassian tool suite (Bitbucket, Confluence, Jira) Willingness to learn new
design and implement data engineering and AI/ML infrastructure. Things we're looking for: Proficiency in data analysis, insight generation, and the use of cloud-hosted tools (e.g., BigQuery, Metabase, Jupyter). Strong Python and SQL skills, with experience in data abstractions, pipeline management, and integrating machine learning solutions. Adaptability to evolving priorities and a proactive approach to solving impactful problems
Demonstrated experience with the Sponsor's security and accreditation processes, to include building and maintaining systems that meet or exceed those processes. Demonstrated experience with visualization tools such as Jupyter, Excel, Visio, SharePoint, and Confluence. Demonstrated experience in Linux environments, networks, applications, and security operations. Demonstrated experience with networking and internet protocols, including TCP/IP, DNS, SMTP, HTTP and