An Iterative Closest Point Method for Lidar Odometry with Fused Semantic Features

Bibliographic Details
Published in: Applied Sciences, Vol. 13, No. 23, p. 12741
Main Authors: Cao, Qiku; Liao, Yongjian; Fu, Zhe; Peng, Hongxin; Ding, Ziquan; Huang, Zijie; Huang, Nan; Xiong, Xiaoming; Cai, Shuting
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.12.2023
ISSN: 2076-3417
DOI: 10.3390/app132312741

Summary: Lidar sensors play a pivotal role in many remote sensing domains, with extensive applications in robotics, unmanned aerial vehicles (UAVs), autonomous driving, and 3D reconstruction; this versatility has made them indispensable across a wide range of tasks. The accuracy of Lidar odometry (LO), which serves as the front end of simultaneous localization and mapping (SLAM), is therefore crucial. In this paper, we propose a novel iterative closest point (ICP) technique that incorporates semantic features to improve LO precision. First, a semantic segmentation neural network extracts semantic features from each frame of the point cloud. These semantic features then guide the extraction of the point cloud's local geometric features. In addition, the residuals in the ICP algorithm's least-squares optimization are combined with semantic confidence functions to better estimate the pose. Compared to LOAM, the method improves accuracy by an average of 4 cm per 100 m of trajectory. Experimental results demonstrate the superiority of the proposed method and indicate that fusing semantic features robustly improves the precision of LO.
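The abstract's central idea, folding per-point semantic confidence into the ICP least-squares solve, can be sketched as follows. This is a minimal illustration under assumptions not taken from the paper: point-to-point correspondences are already established, and the semantic confidence is reduced to a scalar weight per point pair in a weighted Kabsch alignment. The authors' actual confidence functions and semantic-geometric feature extraction are more involved.

    # Minimal sketch (not the authors' implementation) of semantically
    # weighted point-to-point ICP alignment. Assumes correspondences are
    # given and semantic confidence is a per-point scalar weight.
    import numpy as np

    def weighted_alignment(src, dst, conf):
        """Solve min over (R, t) of sum_i conf_i * ||R @ src_i + t - dst_i||^2.

        src, dst: (N, 3) corresponding points; conf: (N,) weights in [0, 1].
        Returns (R, t) via the weighted Kabsch closed form.
        """
        w = conf / conf.sum()                               # normalized weights
        mu_s, mu_d = w @ src, w @ dst                       # weighted centroids
        cov = (src - mu_s).T @ ((dst - mu_d) * w[:, None])  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(cov)
        d = np.sign(np.linalg.det(Vt.T @ U.T))              # enforce det(R) = +1
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, mu_d - R @ mu_s

    # Usage: recover a known rigid motion, down-weighting "dynamic" points
    # (e.g., points segmented as vehicles or pedestrians).
    rng = np.random.default_rng(0)
    src = rng.normal(size=(500, 3))
    R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R_true *= np.sign(np.linalg.det(R_true))                # proper rotation
    dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
    conf = np.where(rng.random(500) < 0.1, 0.1, 0.9)        # low weight on 10%
    R, t = weighted_alignment(src, dst, conf)

In this toy setup the recovered R and t match the ground truth to numerical precision; in a real LO pipeline the weights would come from the segmentation network's class scores, and the solve would be iterated with re-established correspondences.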