Journal of Graphics ›› 2025, Vol. 46 ›› Issue (1): 150-158. DOI: 10.11996/JG.j.2095-302X.2025010150
• Computer Graphics and Virtual Reality •
WU Yiqi1, HE Jiale1, ZHANG Tiantian1, ZHANG Dejun1, LI Yanli1,2, CHEN Yilin3
Received: 2024-07-09
Accepted: 2024-09-22
Online: 2025-02-28
Published: 2025-02-14
Contact: CHEN Yilin
About author: WU Yiqi (1985-), associate professor, Ph.D. His main research interests include graphics and image processing. E-mail: wuyq@cug.edu.cn
WU Yiqi, HE Jiale, ZHANG Tiantian, ZHANG Dejun, LI Yanli, CHEN Yilin. Unsupervised 3D point cloud non-rigid registration based on multi-feature extraction and point correspondence[J]. Journal of Graphics, 2025, 46(1): 150-158.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2025010150
| Method | Accuracy @ 0% tolerance | Accuracy @ 10% tolerance | Accuracy @ 20% tolerance | CD |
|---|---|---|---|---|
| CPD-Net [9] | 0.33 | 6.82 | 24.90 | 0.0048 |
| FlowNet3D [23] | 1.21 | 19.76 | 41.35 | 0.0046 |
| CorrNet3D [21] | 2.05 | 25.68 | 48.86 | 0.0026 |
| NrtNet [32] | 2.69 | 30.04 | 51.88 | - |
| HCDNet3D | 2.53 | 31.89 | 54.27 | - |
| Ours | 7.44 | 38.12 | 54.89 | 0.0013 |

Table 1 Point correspondence accuracy (%) and CD results under different tolerance rates
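For context, the sketch below shows how point correspondence accuracy under a tolerance rate is typically computed for results like Table 1. This is not the paper's implementation: the function and variable names are illustrative, and scaling the tolerance by the target's bounding-box diagonal is an assumption.

```python
# Minimal sketch (assumed metric definition): a predicted correspondence is
# counted as correct when it lands within tol * shape_scale of the
# ground-truth corresponding point on the target shape.
import numpy as np

def correspondence_accuracy(pred_idx, gt_idx, target_pts, tol=0.2):
    """pred_idx, gt_idx: (N,) indices into target_pts; target_pts: (M, 3) array."""
    # Use the bounding-box diagonal of the target as the shape scale (assumption).
    shape_scale = np.linalg.norm(target_pts.max(axis=0) - target_pts.min(axis=0))
    # Distance between each predicted match and its ground-truth match.
    err = np.linalg.norm(target_pts[pred_idx] - target_pts[gt_idx], axis=1)
    # Fraction of correspondences within the tolerance radius.
    return float((err <= tol * shape_scale).mean())
```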
| Method | Deformation rate 0.3 | Deformation rate 0.5 | Deformation rate 0.7 | Deformation rate 0.8 |
|---|---|---|---|---|
| CPD-Net [9] | 0.0014 | 0.0030 | 0.0042 | 0.0047 |
| FlowNet3D [23] | 0.0016 | 0.0034 | 0.0043 | 0.0052 |
| CorrNet3D [21] | 0.0015 | 0.0029 | 0.0041 | 0.0051 |
| Ref. [26] | 0.0014 | 0.0018 | 0.0018 | 0.0027 |
| Ours | 0.0014 | 0.0015 | 0.0015 | 0.0016 |

Table 2 Comparison of Chamfer distance for non-rigid registration methods under different deformation rates
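As a reference for the CD columns in Tables 1 and 2, the sketch below computes a symmetric Chamfer distance between the deformed source and the target. Whether the paper averages squared or unsquared nearest-neighbour distances, and how it normalises the two directions, is assumed here; the function name is illustrative only.

```python
# Minimal sketch (assumed definition): symmetric Chamfer distance as the mean
# squared nearest-neighbour distance, averaged over both directions.
import numpy as np

def chamfer_distance(deformed_src, target):
    """deformed_src: (N, 3) registered source points, target: (M, 3) target points."""
    # Pairwise squared distances, shape (N, M).
    d2 = ((deformed_src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
    # Source-to-target and target-to-source nearest-neighbour terms, averaged.
    return float((d2.min(axis=1).mean() + d2.min(axis=0).mean()) / 2.0)
```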
| Method | Deformation rate 0.3 | Deformation rate 0.5 | Deformation rate 0.7 | Deformation rate 0.8 |
|---|---|---|---|---|
| CPD-Net [9] | 41.12 | 33.26 | 28.34 | 25.65 |
| FlowNet3D [23] | 94.78 | 86.51 | 78.57 | 76.43 |
| CorrNet3D [21] | 97.67 | 96.60 | 94.94 | 94.07 |
| Ours | 99.54 | 99.40 | 98.84 | 98.43 |

Table 3 Comparison of point correspondence accuracy (%) under 20% tolerance rate at different deformation levels
Fig. 6 Registration visualization results under different deformation rates ((a) 0.3 deformation rate; (b) 0.5 deformation rate; (c) 0.7 deformation rate; (d) 0.8 deformation rate)
| Method | Multi-feature extraction | Matching refinement | Shape-aware attention |
|---|---|---|---|
| Method A | √ | √ | |
| Method B | √ | √ | |
| Method C | √ | | |
| Method D | √ | | |
| Method E | √ | √ | √ |

Table 4 Ablation experiment setup
| Method | Accuracy @ 0% tolerance | Accuracy @ 10% tolerance | Accuracy @ 20% tolerance |
|---|---|---|---|
| Method A | 19.77 | 82.33 | 93.47 |
| Method B | 18.07 | 78.75 | 90.58 |
| Method C | 18.29 | 74.10 | 87.07 |
| Method D | 16.34 | 77.23 | 88.45 |
| Method E | 20.21 | 84.77 | 94.38 |

Table 5 Ablation experiment results (%)
[1] QI C R, SU H, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 77-85.
[2] HUANG X S, MEI G F, ZHANG J, et al. A comprehensive survey on point cloud registration[EB/OL]. [2024-05-20]. https://arxiv.org/abs/2103.02690.
[3] ZHENG T X, HUANG S, LI Y F, et al. Key techniques for vision based 3D reconstruction: a review[J]. Acta Automatica Sinica, 2020, 46(4): 631-652 (in Chinese).
[4] LI M J, YU Z K, LIU X, et al. Progress of point cloud algorithm in medical field[J]. Journal of Image and Graphics, 2020, 25(10): 2013-2023 (in Chinese).
[5] BADUE C, GUIDOLINI R, CARNEIRO R V, et al. Self-driving cars: a survey[J]. Expert Systems with Applications, 2021, 165: 113816.
[6] ZHANG Z Y, DAI Y C, SUN J D. Deep learning based point cloud registration: an overview[J]. Virtual Reality & Intelligent Hardware, 2020, 2(3): 222-246.
[7] QIN H X, LIU Z T, TAN B Y. Review on deep learning rigid point cloud registration[J]. Journal of Image and Graphics, 2022, 27(2): 329-348 (in Chinese).
[8] MONJI-AZAD S, HESSER J, LÖW N. A review of non-rigid transformations and learning-based 3D point cloud registration methods[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2023, 196: 58-72.
[9] WANG L J, FANG Y. Coherent point drift networks: unsupervised learning of non-rigid point set registration[EB/OL]. [2024-05-20]. https://arxiv.org/abs/1906.03039v1.
[10] QIN Z, YU H, WANG C J, et al. Geotransformer: fast and robust point cloud registration with geometric transformer[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 9806-9821.
[11] CHAO J J, ENGIN S, HÄNI N, et al. Category-level global camera pose estimation with multi-hypothesis point cloud correspondences[C]// 2023 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2023: 3800-3807.
[12] WEBER M, WILD D, KLEESIEK J, et al. Deep learning-based point cloud registration for augmented reality-guided surgery[C]// 2024 IEEE International Symposium on Biomedical Imaging. New York: IEEE Press, 2024: 1-5.
[13] BESL P J, MCKAY N D. A method for registration of 3-D shapes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239-256.
[14] SHENG M, PENG Y S, SU B Y, et al. RGBD point cloud registration based on feature similarity[J]. Journal of Graphics, 2019, 40(5): 829-834 (in Chinese).
[15] ZHAO F Q, ZHOU M Q, GENG G H. Point cloud registration algorithm based on local features[J]. Journal of Graphics, 2018, 39(3): 389-394 (in Chinese).
[16] LIU Y, LI Y C, LIU Y H, et al. Multi-view color point cloud registration based on correntropy[J]. Journal of Graphics, 2021, 42(2): 256-262 (in Chinese).
[17] WU Y Q, CHEN X Y, HUANG X, et al. Unsupervised distribution-aware keypoints generation from 3D point clouds[J]. Neural Networks, 2024, 173: 106158.
[18] ZHANG Y S, WANG Y, CHEN X H, et al. Spectral-spatial feature extraction with dual graph autoencoder for hyperspectral image clustering[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(12): 8500-8511.
[19] WANG Y, SOLOMON J. PRNet: self-supervised learning for partial-to-partial registration[C]// The 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 791.
[20] WANG Y, SOLOMON J. Deep closest point: learning representations for point cloud registration[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2019: 3522-3531.
[21] ZENG Y M, QIAN Y, ZHU Z Y, et al. CorrNet3D: unsupervised end-to-end learning of dense correspondence for 3D point clouds[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 6048-6057.
[22] MYRONENKO A, SONG X B, CARREIRA-PERPIÑÁN M Á. Non-rigid point set registration: coherent point drift[C]// The 19th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2006: 1009-1016.
[23] LIU X Y, QI C R, GUIBAS L J, et al. FlowNet3D: learning scene flow in 3D point clouds[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 529-537.
[24] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// The 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010.
[25] SUTSKEVER I, VINYALS O, LE Q V. Sequence to sequence learning with neural networks[C]// The 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 3104-3112.
[26] WU Y Q, HAN F, ZHANG D J, et al. Unsupervised non-rigid point cloud registration based on point-wise displacement learning[J]. Multimedia Tools and Applications, 2024, 83(8): 24589-24607.
[27] WANG Y, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics (TOG), 2019, 38(5): 146.
[28] ZHANG Z Y, SUN J D, DAI Y C, et al. End-to-end learning the partial permutation matrix for robust 3D point cloud registration[C]// The 36th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022: 3399-3407.
[29] SINKHORN R. A relationship between arbitrary positive matrices and stochastic matrices[J]. Canadian Journal of Mathematics, 1966, 18: 303-306.
[30] KUHN H W. The Hungarian method for the assignment problem[J]. Naval Research Logistics Quarterly, 1955, 2(1/2): 83-97.
[31] SONG C Y, WEI J C, LI R B, et al. Unsupervised 3D pose transfer with cross consistency and dual reconstruction[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 10488-10499.
[32] HU X B, ZHANG D J, CHEN J Z, et al. NrtNet: an unsupervised method for 3D non-rigid point cloud registration based on transformer[J]. Sensors, 2022, 22(14): 5128.
[33] GROUEIX T, FISHER M, KIM V G, et al. 3D-CODED: 3D correspondences by deep deformation[C]// The 15th European Conference on Computer Vision. Cham: Springer, 2018: 235-251.
[34] DONATI N, SHARMA A, OVSJANIKOV M. Deep geometric functional maps: robust feature learning for shape correspondence[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 8589-8598.
[35] BEDNÁRIK J, FUA P, SALZMANN M. Learning to reconstruct texture-less deformable surfaces from a single view[C]// 2018 International Conference on 3D Vision. New York: IEEE Press, 2018: 606-615.
[36] BOOKSTEIN F L. Principal warps: thin-plate splines and the decomposition of deformations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 11(6): 567-585.
[37] LI X, WANG L J, FANG Y, et al. PC-Net: unsupervised point correspondence learning with neural networks[C]// 2019 International Conference on 3D Vision. New York: IEEE Press, 2019: 145-154.
[38] LI Y, HARADA T. Lepard: learning partial point cloud matching in rigid and deformable scenes[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5544-5554.
[39] LI C, XUE A S, XIA C K, et al. Learning sufficient correlations among points for 3D non-rigid point cloud registration[C]// The 9th International Conference on Control, Automation and Robotics. New York: IEEE Press, 2023: 342-349.