The Sensor Fusion Engineer Nanodegree program will teach you the skills that most engineers learn on the job or in a graduate program: how to fuse data from multiple sensors to track nonlinear motion and objects in the environment. Apply the skills you learn in this program to a career in robotics, self-driving cars, and much more. Learn to fuse lidar point clouds, radar signatures, and camera images using Kalman filters to perceive the environment and to detect and track vehicles and pedestrians over time.
You should have intermediate C++ knowledge and be familiar with calculus, probability, and linear algebra. See detailed requirements.
Lidar Obstacle Detection
Process raw lidar data with filtering, segmentation, and clustering to detect other vehicles on the road.
Radar Obstacle Detection
Analyze radar signatures to detect and track objects. Calculate velocity and orientation by correcting for radial velocity distortions, noise, and occlusions.
Camera and Lidar Fusion
Fuse camera images with lidar point cloud data. You'll extract object features, classify objects, and project the camera image into three dimensions to fuse it with lidar data.
Unscented Kalman Filters
Fuse data from multiple sources using Kalman filters, and build extended and unscented Kalman filters for tracking nonlinear movement.
Instructors: David Silver, Stephen Welch, Andreas Haja, Abdullah Zaidi, and Aaron Brown