streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years. ABOUT THE TEAM The Maritime & Manoeuvre Dominance teams at Anduril UK develop operationally relevant, multi-asset autonomy. … filters, systems integration, and more, all whilst making pragmatic engineering trade-offs up and down the whole stack to do whatever it takes to go from streams of raw sensor measurements to a stable, coherent combined operational picture which enables both autonomous systems and end users to complete their missions. WHAT YOU'LL DO Contribute to the full software … Research and adapt SOTA perception algorithms - everything from lit. review, through training and evaluation, to optimisation and deployment. Design and develop robust and efficient C++/Rust software for multi-sensor, multi-target tracking & estimation problems. MLOps - build and maintain automated, scalable infrastructure to manage our data and run our experiments. Design experiments, data collection efforts, and curate training/ …
streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years. Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities … The Maneuver Dominance team at Anduril develops operationally relevant, multi-asset autonomy. We are focused on making …
projects Technical depth in machine learning, deep learning architectures (e.g., CNNs, Transformers), and computer vision algorithms Experience developing and deploying perception algorithms (e.g., object detection, segmentation, tracking, SLAM, multi-sensor fusion) in real-world applications Proficiency in Python and C++ Experience with deep learning frameworks (e.g., PyTorch, TensorFlow) and MLOps tools/platforms Communication, interpersonal, and collaboration skills …
streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years. ABOUT THE TEAM The Maritime & Manoeuvre Dominance teams at Anduril UK develop operationally relevant, multi-asset autonomy. … a Linux development environment. Eligible to obtain and maintain an active UK Security Clearance. PREFERRED QUALIFICATIONS Experience in one or more of the following: perception, computer vision, image segmentation, sensor integration and characterisation, motion planning, localisation, mapping, and related system performance metrics. Experience in on-vehicle perception, computer vision on a dynamic platform. Experience developing software on embedded hardware …
mapping across satellites and drones operating in GPS-denied, dynamic environments. Build the Perception stack of our Spatial AI architectures, fusing visual, inertial, and depth cues for robust, multi-sensor scene understanding. Integrate sensor fusion & neural representations to create dense onboard world models that run in real time on resource-constrained hardware. Deploy semantic scene understanding, visual … What we are looking for M.S. in Computer Vision/Robotics, or a related field, plus 3+ years of industry experience (or a PhD). Expertise in multimodal perception & sensor fusion, neural representations, semantic scene understanding, SLAM/camera pose estimation, monocular depth estimation, visual place recognition. Strong software engineering skills in C++ and Python, including performance-critical …
on experience with building a perception stack for autonomous systems Experience with model deployment with NVIDIA stack (e.g. ONNX graphs, TensorRT, profiling) Experience with PyTorch and Computer Vision for sensor fusion (e.g. BEV representations) Experience with embedded ML platforms and real-time OSes WHAT WE OFFER We are committed to creating a modern work environment that supports our …
Proven leadership in driving ML projects from research to production Bonus: Prior work on end-to-end autonomous driving architectures (e.g., imitation learning, behavior cloning, world models) Experience with sensor fusion (LiDAR, camera, radar) in a learned model Equal Opportunity Rivian is an equal opportunity employer and complies with all applicable federal, state, and local fair employment practices …
knowledge of generative modelling (e.g., auto-regressive, diffusion, or VAEs), supervised learning. You have experience building large-scale ML infrastructure and working with high-dimensional temporal data (e.g., video, sensor fusion). You have strong Python and PyTorch engineering fundamentals, and experience building research-grade production tools. You have a track record of delivering ML systems that translate …
streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years. Joining the Technical Operations team, you will be at the sharp end of deploying and supporting Anduril …
additional sensing or computing behaviours. Application of low-level behaviour through to higher-level control. Implementing and troubleshooting DroneCAN/UAVCAN, serial, and IP communication protocols for sensor, actuator, and controller integration. Through-life testing of UAS systems and sub-systems from initial sub-system prototype to full development platforms. Collaborating with cross-functional teams to ensure … Proficiency in DroneCAN (or UAVCAN), IP networking, and serial communications for integrating sensors, actuators, and other peripherals. Strong knowledge of ROS (Robot Operating System) for robotic control and sensor fusion. Hands-on experience with C++, Python, and embedded system development. Strong knowledge and experience with systems engineering in the context of complex autonomous systems. Knowledge of safety-related …
seeking a Software Control Teleoperation Engineer to develop cutting-edge real-time teleoperation systems for humanoid robots. This role focuses on designing and implementing low-latency control algorithms, integrating sensor data for intuitive remote control, and enhancing teleoperation with semi-autonomous features. A deep understanding of robot control principles and hands-on experience with real robotic hardware are essential … and implement low-latency control pipelines to ensure highly responsive teleoperation. Develop semi-autonomous features, blending human input with autonomous control for enhanced precision and ease of use. Ensure sensor data fusion and create effective visualization tools for intuitive remote control. Implement fail-safe mechanisms, including emergency stop protocols and signal loss handling. Conduct stress tests to ensure … hardware in laboratory or production environments. Experience with inverse kinematics, motion planning, and trajectory generation. Familiarity with ROS 2 (Robot Operating System) and real-time communication protocols. Knowledge of sensor fusion techniques, particularly for teleoperation applications. Experience in designing robust fail-safe systems for real-time control. Experience developing modular and scalable software architectures for robotic systems. Strong …
mapping across satellites and drones operating in GPS-denied, dynamic environments. Build the Perception stack of our Spatial AI architectures, fusing visual, inertial, and depth cues for robust, multi-sensor scene understanding. Integrate sensor fusion & neural representations to create dense onboard world models that run in real time on resource-constrained hardware. Deploy semantic scene understanding, visual … setups, and real missions. What we are looking for M.S. in Computer Vision/Robotics, or a related field. Expertise in at least two of the following: multimodal perception & sensor fusion, neural representations, semantic scene understanding, SLAM/camera pose estimation, monocular depth estimation, visual place recognition. Strong software engineering skills in C++ and Python, including performance-critical …
even failed projects contain valuable insights. You will be building upon cutting-edge ML techniques such as transformers and reinforcement learning to create novel multi-modal solutions. Examples include sensor fusion systems, physics-informed neural networks for simulations, and multi-purpose autonomous robots. Projects will be defence-focused but may include offensive capabilities. Please note, as projects are … is an 18-month contract with the expectation of extending this as more funding is released. Keywords: AI, ML, RF, EM, GNN, Transformer, Autoencoder, Reinforcement Learning, Multi-Modal AI, Sensor Fusion, Python, PyTorch, Radio Frequency, RF Another top job from ECM, the high-tech recruitment experts. Even if this job's not quite right, do contact us now …
London, England, United Kingdom Hybrid / WFH Options
Wayve Technologies Ltd
developing, validating, and deploying machine learning-powered methods for 3D spatial understanding in dynamic, real-world environments. The ideal candidate has a strong scientific background in 3D vision and sensor fusion, experience applying these methods at scale, and a deep appreciation for translating cutting-edge research into high-performance, production-grade systems. If you are passionate about next … spatial perception systems (e.g., metrics, validation strategies, ablations). Hands-on experience with SLAM/SfM systems using a combination of traditional and learning-based approaches, ideally using multimodal sensor data (e.g., RGB, LIDAR, RADAR, IMU). A strong data mindset: enthusiasm for high-quality data pipelines, large-scale datasets, curation practices, and scaling laws. Motivation to take research …
autonomous systems. Familiarity with recent ML breakthroughs, such as foundation models, pre-training, fine-tuning, and multimodal Transformers. Experience with large-scale distributed training. Nice to Have Experience with sensor fusion for perception stacks, e.g., BEV representations. Experience with NVIDIA deployment tools like ONNX, TensorRT, profiling. Knowledge of embedded ML platforms and real-time OSes. What We Offer …
for algorithm development, testing, and deployment. Experience in topics like model-free RL, imitation learning, or hybrid control systems that combine classic and modern methods. Preferred Qualifications: Experience with sensor fusion for state estimation (IMUs, joint encoders, force/torque sensors). Understanding of actuator dynamics, modeling, and limitations. Highly competitive salary. 23 working days of vacation …
robotic system design, including kinematics, dynamics, control algorithms, and perception systems. Proficiency in hardware-software co-design, real-time computing, and middleware frameworks. Experience with AI/ML integration, sensor fusion, and autonomous decision-making systems. Familiarity with embedded systems, microcontrollers, FPGAs, and high-performance computing platforms. Understanding of wireless communication protocols, power management strategies, and battery technologies. …
RL, imitation learning, or hybrid control systems that combine classic and modern methods. Familiarity with real-time control systems and integration with hardware, including actuators and sensors. Expertise in sensor fusion for state estimation (IMUs, joint encoders, force/torque sensors) to enable robust balance and locomotion control. Additional Qualifications: Understanding of actuator dynamics and modeling. Deep knowledge …
streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years. Since 2023, Anduril UK has experienced rapid growth, introducing world-leading software-first, hardware-enabled systems to … Technical Aptitude and Intellectual Curiosity. We are first and foremost a technology company, working at the leading edge of capabilities like machine learning, autonomy, distributed networking, and multi-modal sensor fusion. Do you have a natural desire to see beyond simple cause and effect relationships to really understand how complex systems operate? Do you actively seek out opportunities to …
Work alongside SLAM & software engineering experts to innovate in applied Spatial AI. Requirements: Strong background in geometric computer vision, state estimation, or SLAM. Expertise in optimisation, numerical linear algebra, & sensor fusion. Industrial experience in deploying SLAM solutions. Proficiency in C++. Desirable experience: PhD in computer vision or robotics. Experience with machine learning techniques for geometric & semantic estimation. GPU programming …
London, England, United Kingdom Hybrid / WFH Options
ZipRecruiter
Engineer your skills will include: Essential PhD in computer vision, robotics, or a related field. Deep knowledge of SLAM, geometric computer vision or state estimation Strong background in optimization, sensor fusion & numerical linear algebra Experience deploying SLAM in industrial or embedded environments Proficient in modern C++ development Familiarity with machine learning for semantic/geometric inference. Experience in … are successfully placed, we offer a great referral scheme! Key words – SLAM/State Estimation/Computer Vision/Robotics/CUDA/Vulkan/OpenCL/Metal/Sensor Fusion/Embedded Systems/Semantic Inference/Geometric Inference/C++/Spatial AI By applying to this role, you understand that we may collect your personal …