3D perception is an essential task for autonomous driving, so building models that are accurate, computationally efficient, fast, and label-efficient is of great interest. Label-efficient 3D detection is especially attractive because manually labeling 3D LiDAR point clouds is both costly and time-consuming. Autolabeling is a machine learning paradigm in which a model is trained on a small set of labeled data and then used to generate predictions, known as pseudo-labels, on a large set of unlabeled data; these pseudo-labels can in turn be used to train an accurate downstream model at only a fraction of the labeling cost. In this project, we explore autolabeling from two perspectives. First, we consider how training in an offline setting can improve the autolabeling model by leveraging multiple frames of point cloud data. Second, given a set of manual labels and pseudo-labels, we ask what the most principled way is to train a downstream model using the potentially inaccurate pseudo-labels.
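As a rough illustration of the autolabeling pipeline described above, the sketch below runs a trained model over unlabeled frames and keeps only high-confidence detections as pseudo-labels. The function names, the 7-parameter box encoding, and the confidence threshold are all illustrative assumptions, not part of the project's actual implementation.

```python
import numpy as np

def generate_pseudo_labels(model, unlabeled_frames, score_threshold=0.7):
    """Run an autolabeling model on unlabeled point-cloud frames and keep
    only predictions whose confidence clears the threshold (a common,
    though not the only, way to filter noisy pseudo-labels)."""
    pseudo_labels = []
    for frame in unlabeled_frames:
        boxes, scores = model(frame)       # predicted 3D boxes + confidences
        keep = scores >= score_threshold   # drop low-confidence detections
        pseudo_labels.append(boxes[keep])
    return pseudo_labels

# Toy stand-in for a trained detector: one box per frame, encoded as
# (x, y, z, length, width, height, yaw), with a fixed confidence score.
def toy_model(frame):
    boxes = np.array([[0.0, 0.0, 0.0, 4.0, 2.0, 1.5, 0.0]])
    scores = np.array([0.9])
    return boxes, scores

frames = [np.random.rand(100, 3) for _ in range(3)]  # 3 fake LiDAR frames
labels = generate_pseudo_labels(toy_model, frames)
```

In a real pipeline, the filtered pseudo-labels would then be mixed with the manual labels to train the downstream detector, which is exactly where the project's second question (how to weight potentially inaccurate pseudo-labels) arises.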
Project currently funded by: Federal