Journal of Graphics ›› 2025, Vol. 46 ›› Issue (3): 542-550. DOI: 10.11996/JG.j.2095-302X.2025030542
• Image Processing and Computer Vision •
Line extraction and representation algorithm for RGB-D data

LIU Xin, LI Yang, FENG Shengjie, WU Xiaoqun
Received: 2024-07-03
Accepted: 2025-01-06
Online: 2025-06-30
Published: 2025-06-13
Contact: WU Xiaoqun
About the first author: LIU Xin (2000-), master student. His main research interests include computer graphics, digital geometry processing, and image processing. E-mail: 2230702022@st.btbu.edu.cn
LIU Xin, LI Yang, FENG Shengjie, WU Xiaoqun. Line extraction and representation algorithm for RGB-D data[J]. Journal of Graphics, 2025, 46(3): 542-550.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2025030542
Fig. 1 Algorithm framework of feature line extraction and representation for RGB-D data ((a) Boundary extraction by fusing color and geometric features; (b) Feature lines represented by cubic B-spline fitting)
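As a rough illustration of the representation step in Fig. 1(b), the sketch below fits an ordered boundary polyline with a parametric cubic B-spline via SciPy's splprep/splev. The function name, smoothing factor, and sample count are illustrative choices, not the paper's actual fitting procedure.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_cubic_bspline(points, smooth=1.0, n_samples=100):
    """Illustrative sketch: represent one ordered 2D boundary polyline
    by a parametric cubic B-spline (in the spirit of Fig. 1(b)).

    points: (N, 2) array of ordered boundary points for one feature line.
    Returns the spline representation and a dense resampling of the curve.
    """
    x, y = points[:, 0], points[:, 1]
    # splprep fits a parametric spline; k=3 gives the cubic case,
    # s controls the smoothing/approximation trade-off.
    tck, u = splprep([x, y], s=smooth, k=3)
    xs, ys = splev(np.linspace(0.0, 1.0, n_samples), tck)
    return tck, np.stack([xs, ys], axis=1)
```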
Fig. 3 Boundary extraction visualization results ((a) RGB image; (b) Depth image; (c) Normal image; (d) Planar geometric features; (e) Dense boundary point set)
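A minimal sketch of fusing photometric and geometric cues in the spirit of Fig. 3, assuming OpenCV, a metric depth map, and a simple depth-gradient threshold. The paper's actual fusion additionally uses normal images and planar geometric features, which are not reproduced here.

```python
import cv2
import numpy as np

def rgbd_boundary_points(rgb, depth, depth_jump=0.03):
    """Hypothetical sketch: fuse color edges with depth discontinuities.

    rgb:   HxWx3 uint8 image; depth: HxW float32 depth in metres.
    Returns an HxW boolean mask of candidate boundary pixels.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    color_edges = cv2.Canny(gray, 50, 150) > 0          # photometric edges

    # Depth discontinuities: large jumps between neighbouring pixels.
    dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    depth_edges = np.hypot(dzdx, dzdy) > depth_jump

    # Keep pixels supported by either cue; the paper's own fusion rule
    # (normals and planar geometric features, Fig. 3(c)-(d)) is omitted.
    return color_edges | depth_edges
```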
Table 1 Introduction to the RGB-D public datasets

Name | Year | Resolution | Download URL
---|---|---|---
NYU v2 | 2012 | 640×480 | https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html
ScanNet | 2017 | 640×480 | https://github.com/ScanNet/ScanNet
Fig. 6 Comparison of the results of several feature line extraction algorithms with this paper's algorithm on the self-collected dataset ((a) RGB images; (b) Depth images; (c) AG3line; (d) CannyLines; (e) EDLines; (f) LSD; (g) Ours; (h) Ground truth)
Fig. 7 Comparison of the results of several feature line extraction algorithms with this paper’s algorithm on the NYU v2 dataset ((a) RGB images; (b) Depth images; (c) AG3line; (d) CannyLines; (e) EDLines; (f) LSD; (g) Ours; (h) Ground truth)
Fig. 8 Comparison of several feature line extraction algorithms with this paper’s algorithm on the ScanNet dataset ((a) RGB images; (b) Depth images; (c) AG3line; (d) CannyLines; (e) EDLines; (f) LSD; (g) Ours; (h) Ground truth)
Table 2 Quantitative comparison of several feature line extraction algorithms with this paper's algorithm on the NYU v2 dataset

Algorithm | Input | Precision | Recall | IoU
---|---|---|---|---
LSD | Depth image (2D) | 0.46 | 0.19 | 0.17
EDLines | Depth image (2D) | 0.58 | 0.20 | 0.18
AG3line | Depth image (2D) | 0.51 | 0.21 | 0.18
CannyLines | Depth image (2D) | 0.66 | 0.42 | 0.34
Ours | RGB-D (2.5D) | 0.82 | 0.59 | 0.54
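For context on how numbers like those in Table 2 can be obtained, the sketch below evaluates precision, recall, and IoU on rasterized binary line maps with a small pixel tolerance. This is one common convention for line-map evaluation, not necessarily the exact protocol used in the paper.

```python
import cv2
import numpy as np

def line_map_metrics(pred, gt, tol_px=2):
    """Illustrative precision/recall/IoU for binary feature-line maps.

    pred, gt: HxW boolean masks of extracted / ground-truth feature lines.
    tol_px:   matching tolerance in pixels (an assumed convention).
    """
    kernel = np.ones((2 * tol_px + 1, 2 * tol_px + 1), np.uint8)
    gt_dil = cv2.dilate(gt.astype(np.uint8), kernel) > 0
    pred_dil = cv2.dilate(pred.astype(np.uint8), kernel) > 0

    tp_p = np.logical_and(pred, gt_dil).sum()   # predicted pixels near GT
    tp_r = np.logical_and(gt, pred_dil).sum()   # GT pixels near prediction
    precision = tp_p / max(pred.sum(), 1)
    recall = tp_r / max(gt.sum(), 1)
    iou = tp_p / max(np.logical_or(pred, gt_dil).sum(), 1)
    return precision, recall, iou
```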
Table 3 Runtime comparison of several feature line extraction algorithms with this paper's algorithm on the NYU v2 dataset

Algorithm | Input | Resolution | Runtime/ms
---|---|---|---
LSD | Depth image (2D) | 640×480 | 172
EDLines | Depth image (2D) | 640×480 | 99
AG3line | Depth image (2D) | 640×480 | 94
CannyLines | Depth image (2D) | 640×480 | 125
Ours | RGB-D (2.5D) | 640×480 | 37579
[1] | KULKARNI N, JIN L Y, JOHNSON J, et al. Learning to predict scene-level implicit 3D from posed RGBD data[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 17256-17265. |
[2] | AZINOVIĆ D, MARTIN-BRUALLA R, GOLDMAN D B, et al. Neural RGB-D surface reconstruction[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 6290-6301. |
[3] | ZHOU J Y, ZHANG Q T, FENG J Q. Hybrid-structure based multi-view 3D scene reconstruction[J]. Journal of Graphics, 2024, 45(1): 199-208 (in Chinese). |
[4] | XU Z F, ZHAN X Y, XIU Y M, et al. Onboard dynamic-object detection and tracking for autonomous robot navigation with RGB-D camera[J]. IEEE Robotics and Automation Letters, 2023, 9(1): 651-658. |
[5] | WANG D D, ZHANG X D, FAN Z G, et al. A reverse fusion instance segmentation algorithm based on RGB-D[J]. Journal of Graphics, 2021, 42(5): 767-774 (in Chinese). |
[6] | ZHAO H J, CHEN J S, WANG L J, et al. ARKitTrack: a new diverse dataset for tracking using mobile RGB-D data[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 5126-5135. |
[7] | JIANG H L, DING L Y, HU J J, et al. PLNet: plane and line priors for unsupervised indoor depth estimation[C]// 2021 International Conference on 3D Vision. New York: IEEE Press, 2021: 741-750. |
[8] | VON GIOI R G, JAKUBOWICZ J, MOREL J M, et al. LSD: a fast line segment detector with a false detection control[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(4): 722-732. |
[9] | AKINLAR C, TOPAL C. EDLines: a real-time line segment detector with a false detection control[J]. Pattern Recognition Letters, 2011, 32(13): 1633-1642. |
[10] | ZHANG Y J, WEI D, LI Y S. AG3line: active grouping and geometry-gradient combined validation for fast line segment extraction[J]. Pattern Recognition, 2021, 113: 107834. |
[11] | LU X H, YAO J, LI K, et al. CannyLines: a parameter-free line segment detector[C]// 2015 IEEE International Conference on Image Processing. New York: IEEE Press, 2015: 507-511. |
[12] | ZHANG Z H, LI Z X, BI N, et al. PPGNet: learning point-pair graph for line segment detection[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 7105-7114. |
[13] | XU Y F, XU W J, CHEUNG D, et al. Line segment detection using transformers without edges[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 4257-4266. |
[14] | ZHANG M Y, LIU X L, XU D. Survey on line segment detection on images[C]// The 36th Chinese Control Conference (G). Beijing: Technical Committee on Control Theory, Chinese Association of Automation, 2017: 1287-1293 (in Chinese). |
[15] | FU Q, WANG J L, YU H S, et al. PL-VINS: real-time monocular visual-inertial SLAM with point and line features[EB/OL]. [2025-01-05]. https://arxiv.org/abs/2009.07462. |
[16] | CHO N G, YUILLE A, LEE S W. A novel Linelet-based representation for line segment detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(5): 1195-1208. |
[17] | CANNY J. A computational approach to edge detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, PAMI-8(6): 679-698. |
[18] | LI D, LIU S, XIANG W L, et al. A SLAM system based on RGBD image and point-line feature[J]. IEEE Access, 2021, 9: 9012-9025. |
[19] | BOSE L, RICHARDS A. Fast depth edge detection and edge based RGB-D SLAM[C]// 2016 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2016: 1323-1330. |
[20] | YANG B J, CHEN E Q, YANG S Y, et al. RGB-D geometric features extraction and edge-based scene-SIRFS[C]// 2015 IEEE International Conference on Communication Software and Networks. New York: IEEE Press, 2015: 306-311. |
[21] | CAO Y P, JU T, XU J, et al. Extracting sharp features from RGB‐D images[J]. Computer Graphics Forum, 2017, 36(8): 138-152. |
[22] | CHOI C, TREVOR A J B, CHRISTENSEN H I. RGB-D edge detection and edge-based registration[C]// 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2013: 1568-1575. |
[23] | LU X H, LIU Y H, LI K. Fast 3D line segment detection from unorganized point cloud[EB/OL]. [2025-01-05]. https://arxiv.org/abs/1901.02532. |
[24] | HU Z T, CHEN C, YANG B S, et al. Geometric feature enhanced line segment extraction from large-scale point clouds with hierarchical topological optimization[J]. International Journal of Applied Earth Observation and Geoinformation, 2022, 112: 102858. |
[25] | XUE N, BAI S, WANG F D, et al. Learning attraction field representation for robust line segment detection[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 1595-1603. |
[26] | RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]// The 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2015: 234-241. |
[27] | CHEN L C, ZHU Y K, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]// The 15th European Conference on Computer Vision. Cham: Springer, 2018: 801-818. |
[28] | HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 770-778. |
[29] | FENG C, TAGUCHI Y, KAMAT V R. Fast plane extraction in organized point clouds using agglomerative hierarchical clustering[C]// 2014 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2014: 6218-6225. |
[30] | HARRIS C, STEPHENS M. A combined corner and edge detector[EB/OL]. [2024-05-03]. https://www.bmva.org/bmvc/1988/avc-88-023.html. |