and PyTorch. Hands-on experience with data pre-processing, feature extraction, and model optimization. Experience with edge computing and model deployment (e.g., TensorFlow Lite, ONNX Runtime). Strong problem-solving skills and ability to work independently or as part of a team. Excellent communication skills, both written and verbal. Commitment
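As illustration of the ONNX Runtime deployment experience referenced above, here is a minimal inference sketch; the model file name, input shape, and execution provider are hypothetical placeholders, not details taken from any listing.

```python
# Hedged sketch only: "model.onnx", the input shape, and the provider choice
# are placeholder assumptions, not details from the listing above.
import numpy as np
import onnxruntime as ort

# Load an exported ONNX model and pin execution to CPU for portability.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Read the declared input so the feed dict uses the correct tensor name.
input_meta = session.get_inputs()[0]

# Fabricate a dummy batch matching a common image-style input layout.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Passing None as the output list asks the runtime for every model output.
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```

The same session-based API pattern is what edge-oriented deployments typically rely on, which is why ONNX Runtime is listed alongside TensorFlow Lite in these requirements.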
communicating methodological choices and model results. • Demonstrated experience with verification and validation test benches. • Demonstrated experience with Explainable AI (XAI) techniques. • Demonstrated experience with ONNX (Open Neural Network Exchange). Salary Range: $150,000-$200,000. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex
of: CUDA, OpenCL, HIP, SYCL Knowledge of deep learning algorithms Interested in optimising tough linear algebra equations Knowledge of AI framework internals (PyTorch, TensorFlow, ONNX, etc.) Full details are available. Please don't hesitate to get in touch with max@platform-recruitment.com to learn more.
e.g., CAN, MQTT) Bonus Points: • Experience with autonomous mobile robots (AMRs), AGVs, or robotic arms in logistics • Background in edge AI optimization (e.g., TinyML, ONNX Runtime)
Experience building simple desktop applications using Wails, Electron, or Tauri. Familiarity with edge/fog computing principles. Familiarity with ML frameworks (e.g., Gorgonia, TensorFlow, ONNX) or prior exposure to integrating pre-trained models via API or runtime interfaces. Practical experience or strong knowledge of IoT and robotics is advantageous. Benefits
Riverside Overview: Riverside Research is an independent National Security Nonprofit dedicated to research and development in the national interest. We provide high-end technical services, research and development, and prototype solutions to some of the country's most challenging technical
or MLIR Work with hardware teams to ensure the software stack fully leverages the capabilities of our OTPU architecture Extend ML frameworks (e.g. PyTorch, ONNX, OpenXLA) to better support performance-critical inference paths Lead design reviews, mentor engineers, and promote best practices in HPC and performance engineering Stay on the … applications Hands-on experience with ML compilers (e.g. LLVM, MLIR), and knowledge of runtime and scheduling optimisations Practical knowledge of ML frameworks like PyTorch, ONNX, or OpenXLA, and how to optimise their execution Experience scaling AI workloads across clusters or custom infrastructure, not just deploying on standard cloud setups Strong
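For context on the PyTorch/ONNX interoperability this role mentions, below is a hedged sketch of exporting a small PyTorch module to ONNX; the module, file name, and axis labels are illustrative assumptions rather than anything specified by the employer.

```python
# Hedged sketch only: TinyNet, "tinynet.onnx", and the axis names are
# illustrative placeholders, not part of the role description above.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example_input = torch.randn(1, 16)

# torch.onnx.export traces the module with the example input and writes
# an ONNX graph; dynamic_axes keeps the batch dimension flexible.
torch.onnx.export(
    model,
    example_input,
    "tinynet.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```

An exported graph like this is the artefact that ONNX Runtime or a custom compiler stack would then consume when optimising performance-critical inference paths.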