Journal of Graphics ›› 2023, Vol. 44 ›› Issue (4): 747-754. DOI: 10.11996/JG.j.2095-302X.2023040747
• Image Processing and Computer Vision •
Research on real-time dense reconstruction for open road scene
LI Xin-li, MAO Hao, WANG Wu, YANG Guo-tian
Received: 2023-02-03
Accepted: 2023-03-24
Online: 2023-08-31
Published: 2023-08-16
About the first author: LI Xin-li (1973-), associate professor, Ph.D. Her main research interests cover pattern recognition, intelligent systems, and digital image processing. E-mail: lixinli@ncepu.edu.cn
LI Xin-li, MAO Hao, WANG Wu, YANG Guo-tian. Research on real-time dense reconstruction for open road scene[J]. Journal of Graphics, 2023, 44(4): 747-754.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2023040747
| Scene | Proposed algorithm | Algorithm 1 | Algorithm 2 |
|---|---|---|---|
| Scene 1 | 0.045 m / 0.521° | 0.052 m / 0.821° | 0.076 m / 0.628° |
| Scene 2 | 0.057 m / 0.415° | 0.048 m / 0.565° | 0.064 m / 0.792° |
| Scene 3 | 0.029 m / 0.404° | 0.051 m / 0.223° | 0.036 m / 0.636° |
| Scene 4 | 0.038 m / 0.328° | 0.047 m / 0.652° | 0.030 m / 0.575° |
| Scene 5 | 0.012 m / 0.247° | 0.036 m / 0.892° | 0.027 m / 0.410° |
Table 1 Mean error (translation / rotation) of different calibration methods
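The per-scene figures report translation error in metres and rotation error in degrees. Below is a minimal sketch of how such errors are conventionally measured, assuming they are taken between an estimated LiDAR-camera extrinsic matrix and a reference extrinsic; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def calibration_error(T_est: np.ndarray, T_ref: np.ndarray):
    """Translation (m) and rotation (deg) error between two 4x4 extrinsic matrices."""
    # Relative transform; equals the identity when the estimate matches the reference.
    T_delta = np.linalg.inv(T_ref) @ T_est
    # Translation error: Euclidean norm of the residual translation.
    t_err = np.linalg.norm(T_delta[:3, 3])
    # Rotation error: angle of the residual rotation (axis-angle magnitude).
    cos_angle = (np.trace(T_delta[:3, :3]) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return t_err, r_err
```

Averaging such per-frame errors over a scene would yield entries comparable in form to those in Table 1.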
| Scene | Proposed method | Algorithm 1 | Algorithm 2 |
|---|---|---|---|
| Scene 1 | 0.023 | 0.036 | 0.032 |
| Scene 2 | 0.034 | 0.042 | 0.037 |
| Scene 3 | 0.014 | 0.021 | 0.019 |
| Scene 4 | 0.023 | 0.026 | 0.029 |
| Scene 5 | 0.032 | 0.034 | 0.036 |
Table 2 Mean convergence time of different calibration methods (s)
| Scene | Proposed algorithm | OpenMVS | Colmap |
|---|---|---|---|
| Scene 1 | 40.101 | 35.237 | 37.878 |
| Scene 2 | 37.920 | 31.818 | 33.532 |
| Scene 3 | 42.335 | 36.190 | 37.977 |
Table 3 PSNR values of different reconstruction methods (dB)
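The PSNR values in Table 3 are presumably computed in the standard way, comparing views rendered from the reconstructed model against the corresponding reference images. A minimal sketch assuming 8-bit images follows; the function name and peak value are assumptions, not taken from the paper.

```python
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a rendered view and a reference image."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```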
| Scene | Proposed algorithm | OpenMVS | Colmap |
|---|---|---|---|
| Scene 1 | 1.02 | 5.87 | 9.92 |
| Scene 2 | 0.93 | 6.91 | 13.26 |
| Scene 3 | 0.99 | 6.84 | 14.73 |
Table 4 Execution time per frame of different reconstruction methods (s)
[1] | ZHANG J, SINGH S. LOAM: lidar odometry and mapping in real-time[EB/OL]. [2022-09-10]. https://www.ri.cmu.edu/pub_files/2014/7/Ji_LidarMapping_RSS2014_v8.pdf. |
[2] | SHAN T X, ENGLOT B. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain[C]// 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2018: 4758-4765. |
[3] | SHAN T X, ENGLOT B, MEYERS D, et al. LIO-SAM: tightly-coupled lidar inertial odometry via smoothing and mapping[C]// 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2020: 5135-5142. |
[4] | FURUKAWA Y, PONCE J. Accurate, dense, and robust multiview stereopsis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(8): 1362-1376. |
[5] | WU T P, YEUNG S K, JIA J Y, et al. Quasi-dense 3D reconstruction using tensor-based multiview stereo[C]// 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2010: 1482-1489. |
[6] | BLEYER M, RHEMANN C, ROTHER C. PatchMatch stereo - stereo matching with slanted support windows[EB/OL]. [2022-09-10]. http://users.utcluj.ro/-robert/ip/proiect/08_PatchMatchStereo_BMVC2011_6MB.pdf. |
[7] | JI M, GALL J, ZHENG H T, et al. SurfaceNet: an end-to-end 3D neural network for multiview stereopsis[C]// 2017 IEEE International Conference on Computer Vision. New York: IEEE Press, 2017: 2326-2334. |
[8] | YAO Y, LUO Z X, LI S W, et al. MVSNet: depth inference for unstructured multi-view stereo[EB/OL]. [2022-09-10]. https://openaccess.thecvf.com/content_ECCV_2018/html/Yao_Yao_MVSNet_Depth_Inference_ECCV_2018_paper.html. |
[9] | WANG F J H, GALLIANI S, VOGEL C, et al. PatchmatchNet: learned multi-view patchmatch stereo[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 14189-14198. |
[10] | WANG J A, PANG D W, HUANG L, et al. Dense point cloud reconstruction network using multi-scale feature recursive convolution[J]. Journal of Graphics, 2022, 43(5): 875-883 (in Chinese). |
[11] | ZHU P, SHI J Y. Research on 3D reconstruction method of BIM based on ASIS network[J]. Journal of Graphics, 2020, 41(5): 839-846 (in Chinese). |
[12] | SHAUKAT A, BLACKER P C, SPITERI C, et al. Towards Camera-LIDAR fusion-based terrain modelling for planetary surfaces: review and analysis[J]. Sensors, 2016, 16(11): 1952-1975. |
[13] | PANDEY G, MCBRIDE J R, SAVARESE S, et al. Automatic extrinsic calibration of vision and lidar by maximizing mutual information[J]. Journal of Field Robotics, 2015, 32(5): 696-722. |
[14] | LEVINSON J, THRUN S. Automatic online calibration of cameras and lasers[J]. Robotics: Science and Systems, 2013, 2(7): 1-10. |
[15] | WANG W M, NOBUHARA S, NAKAMURA R, et al. SOIC: semantic online initialization and calibration for LiDAR and camera[EB/OL]. [2022-09-10]. https://arxiv.53yu.com/pdf/2003.04260.pdf. |
[16] | ZHU Y W, ZHENG C R, YUAN C J, et al. CamVox: a low-cost and accurate lidar-assisted visual SLAM system[C]// 2021 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2021: 5049-5055. |
[17] | JIA X H, CHAO X H, LIU J Y. Research on extrinsic parameter calibration method between solid-state LiDAR-camera system[J]. Laser Journal, 2022, 43(8): 30-36 (in Chinese). |
[18] | CHEN L C, ZHU Y, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[EB/OL]. [2022-09-10]. https://openaccess.thecvf.com/content_ECCV_2018/html/Liang-Chieh_Chen_Encoder-Decoder_with_Atrous_ECCV_2018_paper.html. |
[19] | XU W, ZHANG F. FAST-LIO: a fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 3317-3324. |
[20] | LIU X Y, YUAN C J, ZHANG F. Targetless extrinsic calibration of multiple small FoV LiDARs and cameras using adaptive voxelization[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-12. |
[21] | YUAN C J, LIU X Y, HONG X P, et al. Pixel-level extrinsic self calibration of high resolution LiDAR and camera in targetless environments[EB/OL]. [2022-09-10]. https://arxiv.org/abs/2103.01627. |
[22] | GUINDEL C, BELTRAN J, MARTIN D, et al. Automatic extrinsic calibration for LiDAR-stereo vehicle sensor setups[C]// IEEE International Conference on Intelligent Transportation Systems. New York: IEEE Press, 2017: 1-6. |
[23] | LI S H, XIAO X W, GUO B X, et al. A novel OpenMVS-based texture reconstruction method based on the fully automatic plane segmentation for 3D mesh models[J]. Remote Sensing, 2020, 12(23): 3908-3915. |
[24] | SCHONBERGER J L, FRAHM J M. Structure-from-motion revisited[C]// IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 4104-4113. |