Within the past few decades, the goal of fully autonomous vehicles has moved from a thought experiment to a potential reality thanks to advances in machine intelligence. One key challenge still to be overcome is building robotic perception systems that achieve performance on par with, or surpassing, that of humans. Currently, most autonomous driving researchers rely on several different modalities for collecting visual information, namely lidar, radar, and cameras. Although relying on lidar for perception has the drawback of high cost, maturing lidar technology has opened the door to cost-effective, mass-produced lidars in the future. With this in mind, we seek to develop perception algorithms that fuse information from camera and lidar data to achieve performance surpassing that of any single modality.
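One common starting point for camera–lidar fusion is projecting lidar points into the camera image plane so that depth can be associated with pixels. The sketch below illustrates this with NumPy; the intrinsic matrix `K`, the extrinsic transform `T`, and the sample points are hypothetical placeholder values, not calibration from any real sensor rig, and this is not the specific method of this project.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, T):
    """Project Nx3 lidar points (lidar frame) into pixel coordinates.

    K: 3x3 camera intrinsic matrix (assumed, for illustration).
    T: 4x4 rigid transform from the lidar frame to the camera frame.
    Returns (pixels, depths) for points in front of the camera.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])  # Nx4 homogeneous coords
    cam = (T @ homo.T).T[:, :3]                        # Nx3 in camera frame
    in_front = cam[:, 2] > 0                           # keep points ahead of camera
    cam = cam[in_front]
    proj = (K @ cam.T).T                               # apply intrinsics
    pixels = proj[:, :2] / proj[:, 2:3]                # perspective divide
    return pixels, cam[:, 2]

# Placeholder calibration: focal length 700 px, principal point (320, 240).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)  # assume lidar and camera frames coincide, for simplicity

pts = np.array([[0.0, 0.0,  5.0],   # point 5 m straight ahead
                [1.0, 0.0, 10.0]])  # point 1 m right, 10 m ahead
pixels, depths = project_lidar_to_image(pts, K, T)
```

Once projected, each lidar return supplies a sparse depth value at a pixel location, which a fusion model can combine with the dense RGB appearance information from the camera.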
Project ended: 10/1/2022