Sparse Points to Dense Clouds: Enhancing 3D Detection with Limited LiDAR Data
- Aakash Kumar University of Central Florida
- Chen Chen University of Central Florida
- Ajmal Mian University of Western Australia
- Niels Lobo University of Central Florida
- Mubarak Shah University of Central Florida
Abstract
3D detection is a critical task that enables machines to identify and locate objects in three-dimensional space. It has a broad range of applications in several fields, including autonomous driving, robotics, and augmented reality. Monocular 3D detection is attractive as it requires only a single camera; however, it lacks the accuracy and robustness required for real-world applications. High-resolution LiDAR, on the other hand, can be expensive and can cause interference problems in heavy traffic, given its active transmissions. We propose a balanced approach that combines the advantages of monocular and point cloud-based 3D detection. Our method requires only a small number of 3D points, which can be obtained from a low-cost, low-resolution sensor. Specifically, we use only 512 points, which is just 1% of a full LiDAR frame in the KITTI dataset. Our method reconstructs a complete 3D point cloud from this limited 3D information combined with a single image. The reconstructed point cloud and corresponding image can be used by any off-the-shelf multi-modal detector for 3D object detection. Using the proposed network architecture with an off-the-shelf multi-modal 3D detector, 3D detection accuracy improves by 20% compared to state-of-the-art monocular detection methods and by 6% to 9% compared to baseline multi-modal methods on the KITTI and JackRabbot datasets.
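To illustrate how little 3D input the method needs, the following sketch simulates such a sparse input by subsampling a full KITTI LiDAR frame down to 512 points. The random sampling strategy and the file name are assumptions for this example, not necessarily the paper's sampling procedure.

```python
import numpy as np

def sample_sparse_points(scan_xyz: np.ndarray, n_points: int = 512,
                         seed: int = 0) -> np.ndarray:
    """Simulate a low-cost, low-resolution sensor by subsampling a full
    LiDAR frame (N x 3 array of x, y, z) down to n_points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(scan_xyz.shape[0], size=n_points, replace=False)
    return scan_xyz[idx]

# KITTI Velodyne scans are stored as float32 (x, y, z, reflectance) records.
scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
sparse = sample_sparse_points(scan[:, :3])   # (512, 3)
```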
Video
Proposed Approach
Overview of the proposed 3D object detection approach. The architecture accepts an input image and a set of sparse LiDAR points and processes them to generate a high-resolution point cloud. The reconstructed dense point cloud is then paired with the original image and fed into an off-the-shelf 3D object detector, enabling accurate detection and localization of 3D objects within the scene, as sketched below.
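A minimal sketch of this two-stage pipeline follows; the module interfaces are assumptions, with `reconstructor` standing in for the proposed network and `detector` for any off-the-shelf multi-modal 3D detector.

```python
import torch

def detect_3d(image: torch.Tensor, sparse_points: torch.Tensor,
              reconstructor, detector):
    """image: (3, H, W) RGB tensor; sparse_points: (512, 3) low-cost sensor points.

    Stage 1: reconstruct a dense point cloud from the image and sparse points.
    Stage 2: feed the image and dense cloud to a multi-modal 3D detector.
    """
    with torch.no_grad():
        dense_cloud = reconstructor(image.unsqueeze(0),
                                    sparse_points.unsqueeze(0)).squeeze(0)
    return detector(image, dense_cloud)  # 3D boxes, classes, scores
```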
Network Architecture
Proposed architecture for generating a dense point cloud from an input image and a sparse set of 3D points from a low-cost sensor. First, the image is split into 465 patches, which are transformed into 256-dimensional vectors by a CNN-based feature extractor. These feature vectors, combined with sampled 3D points (Point Queries), are passed through a transformer encoder-decoder framework. The encoder uses self-attention to model relationships among image patches, while the decoder employs cross-attention between the point queries and the image tokens to produce a point token for each query. These tokens are processed by a Point Cloud (PC) Generator that translates them into a dense point cloud. Training uses a Chamfer distance loss that compares each predicted point group with ground-truth points derived from the nearest neighbors of the corresponding query point. The outcome is a detailed point cloud useful for 3D object detection and other applications. A code sketch of this design is given below.
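The following PyTorch sketch mirrors the description above. The specific layer choices (a 16x16 convolutional patchifier, four encoder/decoder layers, and an offset-based PC Generator) are illustrative assumptions, not the paper's exact design.

```python
import torch
from torch import nn

class SparseToDense(nn.Module):
    """Image patches + 512 point queries -> dense point cloud (sketch)."""
    def __init__(self, d_model=256, n_heads=8, n_layers=4, pts_per_query=32):
        super().__init__()
        self.pts_per_query = pts_per_query
        # CNN-style patch embedding: each 16x16 patch becomes a 256-d image token.
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        self.query_embed = nn.Linear(3, d_model)  # lift (x, y, z) queries to tokens
        self.transformer = nn.Transformer(d_model=d_model, nhead=n_heads,
                                          num_encoder_layers=n_layers,
                                          num_decoder_layers=n_layers,
                                          batch_first=True)
        # PC Generator: one point token -> 32 offsets around its query point.
        self.pc_generator = nn.Linear(d_model, 3 * pts_per_query)

    def forward(self, image, queries):
        # image: (B, 3, H, W); queries: (B, 512, 3) sparse sensor points.
        tokens = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, P, 256)
        point_tokens = self.transformer(tokens, self.query_embed(queries))
        offsets = self.pc_generator(point_tokens)
        offsets = offsets.view(*queries.shape[:2], self.pts_per_query, 3)
        return (queries.unsqueeze(2) + offsets).flatten(1, 2)  # (B, 512*32, 3)

def chamfer_loss(pred, gt):
    """Symmetric Chamfer distance between point sets pred (B, N, 3) and gt (B, M, 3)."""
    d = torch.cdist(pred, gt)  # (B, N, M) pairwise Euclidean distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```

During training, `chamfer_loss` would be applied per query group by folding the group dimension into the batch (e.g. `pred.view(-1, 32, 3)` against the gathered nearest-neighbor groups), matching the group-wise supervision described above.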
Qualitative Results
Ground truth point cloud (LiDAR) compared to point cloud predictions generated using 512 query points. Each query point generates 32 points, yielding 16,384 points in total. We show the query points with increased point size for better visibility.
Qualitative Results of 3D Detection
Quantitative Results
Results of monocular 3D detection methods on the KITTI leaderboard. Our method stands out by combining minimal depth information with images to perform 3D detection through point cloud reconstruction. By adding just 512 extra points, our method achieves a significant improvement in performance compared to monocular and baseline multi-modal methods.