Journal of Graphics ›› 2023, Vol. 44 ›› Issue (4): 747-754. DOI: 10.11996/JG.j.2095-302X.2023040747
LI Xin-li, MAO Hao, WANG Wu, YANG Guo-tian
Received:
2023-02-03
Accepted:
2023-03-24
Online:
2023-08-31
Published:
2023-08-16
About author:
First author contact: LI Xin-li (1973-), associate professor, Ph.D. Her main research interests include pattern recognition, intelligent systems, and digital image processing. E-mail: lixinli@ncepu.edu.cn
Abstract:
To address the low efficiency and poor accuracy of map building in intelligent driving, a two-stage dense mapping algorithm based on multi-sensor fusion was proposed for outdoor open-road scenes. The algorithm consists of a real-time extrinsic calibration module and a mapping module. The calibration module constructs and optimizes constraints from typical semantic and geometric features of road scenes, achieving real-time online calibration of the extrinsic parameters between sensors. The core of the mapping module is a two-stage incremental mapping algorithm: according to the different mapping-accuracy requirements of different regions in intelligent driving, it first performs frame-by-frame incremental coarse mapping of the entire scene, and then frame-sampled fine mapping of the road-surface region. Coarse mapping guarantees real-time performance, while fine mapping precisely restores road-surface textures such as traffic signs. Experiments in outdoor open-road scenes show that the algorithm can perform real-time dense mapping in large-scale outdoor scenes with high mapping accuracy and efficiency.
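The abstract only outlines the two-stage design. The sketch below (Python) is purely illustrative of how per-frame coarse mapping and sampled fine mapping could interleave with online calibration; every name in it (`calibrator.update`, `coarse_mapper.integrate`, `fine_mapper.integrate`, `segment_road`, the sampling stride) is a hypothetical stand-in, not the authors' implementation.

```python
# Illustrative sketch of the two-stage incremental mapping loop described
# in the abstract. All objects and method names are hypothetical stand-ins.

FINE_MAP_STRIDE = 5  # assumed sampling stride for fine mapping (the paper's value is not given)

def run_mapping(frames, calibrator, coarse_mapper, fine_mapper):
    """Coarse-map every frame for real-time coverage of the whole scene;
    fine-map sampled frames restricted to the road surface for texture fidelity."""
    for i, frame in enumerate(frames):
        # Stage 0: keep camera-LiDAR extrinsics up to date using
        # semantic/geometric constraints from the road scene.
        extrinsics = calibrator.update(frame.image, frame.point_cloud)

        # Stage 1: frame-by-frame incremental coarse mapping of the
        # entire scene (guarantees real-time performance).
        coarse_mapper.integrate(frame, extrinsics)

        # Stage 2: sampled-frame fine mapping of the road-surface
        # region only (restores textures such as traffic signs).
        if i % FINE_MAP_STRIDE == 0:
            road_mask = frame.segment_road()  # semantic road-surface mask
            fine_mapper.integrate(frame, extrinsics, mask=road_mask)

    return coarse_mapper.map(), fine_mapper.map()
```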
CLC Number:
LI Xin-li, MAO Hao, WANG Wu, YANG Guo-tian. Research on real-time dense reconstruction for open road scene[J]. Journal of Graphics, 2023, 44(4): 747-754.
Scene | Proposed | Algorithm 1 | Algorithm 2 |
---|---|---|---|
Scene 1 | 0.045 m / 0.521° | 0.052 m / 0.821° | 0.076 m / 0.628° |
Scene 2 | 0.057 m / 0.415° | 0.048 m / 0.565° | 0.064 m / 0.792° |
Scene 3 | 0.029 m / 0.404° | 0.051 m / 0.223° | 0.036 m / 0.636° |
Scene 4 | 0.038 m / 0.328° | 0.047 m / 0.652° | 0.030 m / 0.575° |
Scene 5 | 0.012 m / 0.247° | 0.036 m / 0.892° | 0.027 m / 0.410° |
Table 1 Mean error (translation / rotation) of different calibration methods
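Table 1 reports paired translation/rotation errors (e.g., 0.045 m / 0.521°). A common way to obtain such numbers is to compare an estimated extrinsic matrix against ground truth; the snippet below is a minimal sketch of that convention, an assumption rather than the paper's stated metric.

```python
import numpy as np

def extrinsic_error(T_est, T_gt):
    """Translation (m) and rotation (deg) error between a 4x4 estimated
    extrinsic transform and ground truth. NOTE: this is the conventional
    definition, assumed here; the paper's exact formula is not given."""
    # Residual transform: identity if the estimate is perfect.
    T_rel = np.linalg.inv(T_gt) @ T_est
    t_err = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle of the residual rotation matrix via its trace.
    cos_theta = (np.trace(T_rel[:3, :3]) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return t_err, r_err
```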
Scene | Proposed | Algorithm 1 | Algorithm 2 |
---|---|---|---|
Scene 1 | 0.023 | 0.036 | 0.032 |
Scene 2 | 0.034 | 0.042 | 0.037 |
Scene 3 | 0.014 | 0.021 | 0.019 |
Scene 4 | 0.023 | 0.026 | 0.029 |
Scene 5 | 0.032 | 0.034 | 0.036 |
Table 2 Mean convergence time (s) of different calibration methods
Fig. 8 Result of dense reconstruction ((a) Whole map; (b) Specific scene 1; (c) Specific scene 2; (d) Specific scene 3)
Scene | Proposed | OpenMVS | Colmap |
---|---|---|---|
Scene 1 | 40.101 | 35.237 | 37.878 |
Scene 2 | 37.920 | 31.818 | 33.532 |
Scene 3 | 42.335 | 36.190 | 37.977 |
Table 3 PSNR values (dB) of different reconstruction methods
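Table 3 evaluates reconstruction quality with PSNR. For reference, a minimal sketch of the standard PSNR formula follows; which image pairs are compared, and at what resolution, is not specified here, so this is only the conventional definition.

```python
import numpy as np

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two aligned images,
    e.g. renderings of a reconstructed map versus reference views.
    Standard definition only; not the paper's exact evaluation protocol."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```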
Scene | Proposed | OpenMVS | Colmap |
---|---|---|---|
Scene 1 | 1.02 | 5.87 | 9.92 |
Scene 2 | 0.93 | 6.91 | 13.26 |
Scene 3 | 0.99 | 6.84 | 14.73 |
Table 4 Mean reconstruction time per frame (s) of different reconstruction methods
[1] | ZHANG J, SINGH S. LOAM: lidar odometry and mapping in real-time[EB/OL]. [2022-09-10]. https://www.ri.cmu.edu/pub_files/2014/7/Ji_LidarMapping_RSS2014_v8.pdf. |
[2] | SHAN T X, ENGLOT B. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain[C]// 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2018: 4758-4765. |
[3] | SHAN T X, ENGLOT B, MEYERS D, et al. LIO-SAM: tightly-coupled lidar inertial odometry via smoothing and mapping[C]// 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2020: 5135-5142. |
[4] | FURUKAWA Y, PONCE J. Accurate, dense, and robust multiview stereopsis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(8): 1362-1376. |
[5] | WU T P, YEUNG S K, JIA J Y, et al. Quasi-dense 3D reconstruction using tensor-based multiview stereo[C]// 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2010: 1482-1489. |
[6] | BLEYER M, RHEMANN C, ROTHER C. PatchMatch stereo - stereo matching with slanted support windows[EB/OL]. [2022-09-10]. http://users.utcluj.ro/-robert/ip/proiect/08_PatchMatchStereo_BMVC2011_6MB.pdf. |
[7] | JI M, GALL J, ZHENG H T, et al. SurfaceNet: an end-to-end 3D neural network for multiview stereopsis[C]// 2017 IEEE International Conference on Computer Vision. New York: IEEE Press, 2017: 2326-2334. |
[8] | YAO Y, LUO Z X, LI S W, et al. MVSNet: depth inference for unstructured multi-view stereo[EB/OL]. [2022-09-10]. https://openaccess.thecvf.com/content_ECCV_2018/html/Yao_Yao_MVSNet_Depth_Inference_ECCV_2018_paper.html. |
[9] | WANG F J H, GALLIANI S, VOGEL C, et al. PatchmatchNet: learned multi-view patchmatch stereo[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 14189-14198. |
[10] | WANG J A, PANG D W, HUANG L, et al. Dense point cloud reconstruction network using multi-scale feature recursive convolution[J]. Journal of Graphics, 2022, 43(5): 875-883 (in Chinese). |
[11] | ZHU P, SHI J Y. Research on 3D reconstruction method of BIM based on ASIS network[J]. Journal of Graphics, 2020, 41(5): 839-846 (in Chinese). |
[12] | SHAUKAT A, BLACKER P C, SPITERI C, et al. Towards Camera-LIDAR fusion-based terrain modelling for planetary surfaces: review and analysis[J]. Sensors, 2016, 16(11): 1952-1975. |
[13] | PANDEY G, MCBRIDE J R, SAVARESE S, et al. Automatic extrinsic calibration of vision and lidar by maximizing mutual information[J]. Journal of Field Robotics, 2015, 32(5): 696-722. |
[14] | LEVINSON J, THRUN S. Automatic online calibration of cameras and lasers[J]. Robotics: Science and Systems, 2013, 2(7): 1-10. |
[15] | WANG W M, NOBUHARA S, NAKAMURA R, et al. SOIC: semantic online initialization and calibration for LiDAR and camera[EB/OL]. [2022-09-10]. https://arxiv.53yu.com/pdf/2003.04260.pdf. |
[16] | ZHU Y W, ZHENG C R, YUAN C J, et al. CamVox: a low-cost and accurate lidar-assisted visual SLAM system[C]// 2021 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2021: 5049-5055. |
[17] | JIA X H, CHAO X H, LIU J Y. Research on extrinsic parameter calibration method between solid-state LiDAR-camera system[J]. Laser Journal, 2022, 43(8): 30-36 (in Chinese). |
[18] | CHEN L C, ZHU Y, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[EB/OL]. [2022-09-10]. https://openaccess.thecvf.com/content_ECCV_2018/html/Liang-Chieh_Chen_Encoder-Decoder_with_Atrous_ECCV_2018_paper.html. |
[19] | XU W, ZHANG F. FAST-LIO: a fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 3317-3324. |
[20] | LIU X Y, YUAN C J, ZHANG F. Targetless extrinsic calibration of multiple small FoV LiDARs and cameras using adaptive voxelization[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-12. |
[21] | YUAN C J, LIU X Y, HONG X P, et al. Pixel-level extrinsic self calibration of high resolution LiDAR and camera in targetless environments[EB/OL]. [2022-09-10]. https://arxiv.org/abs/2103.01627. |
[22] | GUINDEL C, BELTRAN J, MARTIN D, et al. Automatic extrinsic calibration for LiDAR-stereo vehicle sensor setups[C]// IEEE International Conference on Intelligent Transportation Systems. New York: IEEE Press, 2017: 1-6. |
[23] | LI S H, XIAO X W, GUO B X, et al. A novel OpenMVS-based texture reconstruction method based on the fully automatic plane segmentation for 3D mesh models[J]. Remote Sensing, 2020, 12(23): 3908-3915. |
[24] | SCHONBERGER J L, FRAHM J M. Structure-from-motion revisited[C]// IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 4104-4113. |