Journal of Graphics ›› 2025, Vol. 46 ›› Issue (1): 150-158. DOI: 10.11996/JG.j.2095-302X.2025010150
WU Yiqi1, HE Jiale1, ZHANG Tiantian1, ZHANG Dejun1, LI Yanli1,2, CHEN Yilin3
Received:
2024-07-09
Accepted:
2024-09-22
Published:
2025-02-28
Online:
2025-02-14
Contact:
CHEN Yilin (1989-), male, lecturer, Ph.D. His main research interest covers artificial intelligence. E-mail: yilinchen@wit.edu.cn
First author:
WU Yiqi (1985-), male, associate professor, Ph.D. His main research interests cover graphic and image processing. E-mail: wuyq@cug.edu.cn
Supported by:
Abstract:
To achieve accurate registration between non-rigid point clouds while establishing accurate point correspondences during registration, an unsupervised 3D non-rigid point cloud registration network based on multi-feature extraction and point-correspondence modeling was proposed. The network consists of a multi-feature extraction module, a matching refinement module, and a shape-aware attention module. First, multiple features of the input source and target point clouds were extracted, and the similarity between these features was computed to obtain a feature similarity matrix. The similarity matrix was then fed into the matching refinement module, where a combination of soft and hard matching generated the point-correspondence matrix. Finally, the features of the target point cloud, the source point cloud, and the correspondence matrix were fed into the shape-aware attention module to produce the final registration result. In this way, the registration result simultaneously holds point correspondences with, and shape similarity to, the target point cloud. Experiments on public and synthetic datasets, with both visual and quantitative comparisons, demonstrate that the method accurately captures the point correspondences and shape similarity between the source and target point clouds, effectively achieving unsupervised 3D non-rigid point cloud registration.
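The correspondence-building step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the helper names `soft_correspondence` and `register`, the use of cosine similarity, the temperature `tau`, and the Sinkhorn-style row/column normalization (cf. refs. [28-29]) are all assumptions standing in for the paper's soft-hard matching refinement.

```python
import numpy as np

def soft_correspondence(feat_src, feat_tgt, n_iters=20, tau=0.05):
    """Build a soft point-correspondence matrix from per-point features.

    feat_src: (N, C) features of the source point cloud.
    feat_tgt: (M, C) features of the target point cloud.
    Returns an (N, M) non-negative, approximately doubly stochastic matrix.
    """
    # Cosine similarity between every source/target feature pair.
    a = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    b = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    sim = a @ b.T
    # Temperature-scaled exponential sharpens the match distribution.
    P = np.exp(sim / tau)
    # Sinkhorn-style iterations: alternate row and column normalization.
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    return P

def register(src, tgt, P):
    """Warp the source toward the target using the correspondence matrix."""
    # Row-normalize so each source point maps to a convex combination
    # of target points; a hard assignment would instead take argmax per row.
    W = P / P.sum(axis=1, keepdims=True)
    return W @ tgt
```

A hard correspondence can be read off by taking the per-row argmax of `P`; the paper's matching refinement module combines such soft and hard matching before the shape-aware attention module produces the final result.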
CLC number:
WU Yiqi, HE Jiale, ZHANG Tiantian, ZHANG Dejun, LI Yanli, CHEN Yilin. Unsupervised 3D point cloud non-rigid registration based on multi-feature extraction and point correspondence[J]. Journal of Graphics, 2025, 46(1): 150-158.
Table 1 Point correspondence accuracy/% and CD results under different tolerance rates

| Method | 0% tolerance | 10% tolerance | 20% tolerance | CD |
|---|---|---|---|---|
| CPD-Net | 0.33 | 6.82 | 24.90 | 0.0048 |
| FlowNet3D | 1.21 | 19.76 | 41.35 | 0.0046 |
| CorrNet3D | 2.05 | 25.68 | 48.86 | 0.0026 |
| NrtNet | 2.69 | 30.04 | 51.88 | - |
| HCDNet3D | 2.53 | 31.89 | 54.27 | - |
| Ours | 7.44 | 38.12 | 54.89 | 0.0013 |
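The CD column above is the chamfer distance between the registered result and the target. A common definition is the symmetric mean squared nearest-neighbor distance, sketched below; the exact convention the paper uses (squared vs. unsquared, sum vs. mean) is an assumption here.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric chamfer distance between point sets p (N, 3) and q (M, 3).

    Computes the mean squared nearest-neighbor distance in both directions
    and sums them; conventions vary across papers.
    """
    # Pairwise squared distances, shape (N, M).
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    # Nearest target for each source point, and vice versa.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Lower is better: identical point sets give a distance of zero, which is why the table's smallest CD values indicate the closest shape match to the target.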
Table 2 Comparison of chamfer distance for non-rigid registration methods under different deformation rates

| Method | 0.3 | 0.5 | 0.7 | 0.8 |
|---|---|---|---|---|
| CPD-Net | 0.0014 | 0.0030 | 0.0042 | 0.0047 |
| FlowNet3D | 0.0016 | 0.0034 | 0.0043 | 0.0052 |
| CorrNet3D | 0.0015 | 0.0029 | 0.0041 | 0.0051 |
| Ref. [26] | 0.0014 | 0.0018 | 0.0018 | 0.0027 |
| Ours | 0.0014 | 0.0015 | 0.0015 | 0.0016 |
Table 3 Comparison of point correspondence accuracy under 20% tolerance rate at different deformation rates/%

| Method | 0.3 | 0.5 | 0.7 | 0.8 |
|---|---|---|---|---|
| CPD-Net | 41.12 | 33.26 | 28.34 | 25.65 |
| FlowNet3D | 94.78 | 86.51 | 78.57 | 76.43 |
| CorrNet3D | 97.67 | 96.60 | 94.94 | 94.07 |
| Ours | 99.54 | 99.40 | 98.84 | 98.43 |
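The tolerance-rate accuracy reported in Tables 1 and 3 can be sketched as below. This is one common definition (a predicted match counts as correct if it lands within a tolerance radius of the ground-truth correspondent); defining the radius as a fraction of the target's bounding-box diagonal is an assumption, and the paper's exact protocol may differ.

```python
import numpy as np

def correspondence_accuracy(pred_idx, gt_idx, tgt, tol=0.0):
    """Fraction of source points whose predicted target correspondent lies
    within a tolerance radius of the ground-truth correspondent.

    pred_idx, gt_idx: (N,) integer indices into the target cloud tgt (M, 3).
    tol: tolerance as a fraction of the target's bounding-box diagonal;
         tol=0 demands an exact index match.
    """
    if tol == 0.0:
        return float(np.mean(pred_idx == gt_idx))
    diameter = np.linalg.norm(tgt.max(0) - tgt.min(0))
    # Distance between the predicted and ground-truth target points.
    err = np.linalg.norm(tgt[pred_idx] - tgt[gt_idx], axis=1)
    return float(np.mean(err <= tol * diameter))
```

This explains why accuracy rises sharply with the tolerance rate in Table 1: near-misses that pick a neighbor of the true correspondent are forgiven once the radius is non-zero.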
Fig. 6 Registration visualization results under different deformation rates ((a) 0.3 deformation rate; (b) 0.5 deformation rate; (c) 0.7 deformation rate; (d) 0.8 deformation rate)
Table 4 Ablation experiment setup

| Method | Multi-feature extraction | Matching refinement | Shape-aware attention |
|---|---|---|---|
| Method A | √ | √ | |
| Method B | √ | √ | |
| Method C | √ | | |
| Method D | √ | | |
| Method E | √ | √ | √ |
Table 5 Ablation experiment results/%

| Method | 0% tolerance | 10% tolerance | 20% tolerance |
|---|---|---|---|
| Method A | 19.77 | 82.33 | 93.47 |
| Method B | 18.07 | 78.75 | 90.58 |
| Method C | 18.29 | 74.10 | 87.07 |
| Method D | 16.34 | 77.23 | 88.45 |
| Method E | 20.21 | 84.77 | 94.38 |
[1] QI C R, SU H, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 77-85.
[2] HUANG X S, MEI G F, ZHANG J, et al. A comprehensive survey on point cloud registration[EB/OL]. [2024-05-20]. https://arxiv.org/abs/2103.02690.
[3] ZHENG T X, HUANG S, LI Y F, et al. Key techniques for vision based 3D reconstruction: a review[J]. Acta Automatica Sinica, 2020, 46(4): 631-652 (in Chinese).
[4] LI M J, YU Z K, LIU X, et al. Progress of point cloud algorithm in medical field[J]. Journal of Image and Graphics, 2020, 25(10): 2013-2023 (in Chinese).
[5] BADUE C, GUIDOLINI R, CARNEIRO R V, et al. Self-driving cars: a survey[J]. Expert Systems with Applications, 2021, 165: 113816.
[6] ZHANG Z Y, DAI Y C, SUN J D. Deep learning based point cloud registration: an overview[J]. Virtual Reality & Intelligent Hardware, 2020, 2(3): 222-246.
[7] QIN H X, LIU Z T, TAN B Y. Review on deep learning rigid point cloud registration[J]. Journal of Image and Graphics, 2022, 27(2): 329-348 (in Chinese).
[8] MONJI-AZAD S, HESSER J, LÖW N. A review of non-rigid transformations and learning-based 3D point cloud registration methods[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2023, 196: 58-72.
[9] WANG L J, FANG Y. Coherent point drift networks: unsupervised learning of non-rigid point set registration[EB/OL]. [2024-05-20]. https://arxiv.org/abs/1906.03039v1.
[10] QIN Z, YU H, WANG C J, et al. GeoTransformer: fast and robust point cloud registration with geometric transformer[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 9806-9821.
[11] CHAO J J, ENGIN S, HÄNI N, et al. Category-level global camera pose estimation with multi-hypothesis point cloud correspondences[C]// 2023 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2023: 3800-3807.
[12] WEBER M, WILD D, KLEESIEK J, et al. Deep learning-based point cloud registration for augmented reality-guided surgery[C]// 2024 IEEE International Symposium on Biomedical Imaging. New York: IEEE Press, 2024: 1-5.
[13] BESL P J, MCKAY N D. A method for registration of 3-D shapes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239-256.
[14] SHENG M, PENG Y S, SU B Y, et al. RGBD point cloud registration based on feature similarity[J]. Journal of Graphics, 2019, 40(5): 829-834 (in Chinese).
[15] ZHAO F Q, ZHOU M Q, GENG G H. Point cloud registration algorithm based on local features[J]. Journal of Graphics, 2018, 39(3): 389-394 (in Chinese).
[16] LIU Y, LI Y C, LIU Y H, et al. Multi-view color point cloud registration based on correntropy[J]. Journal of Graphics, 2021, 42(2): 256-262 (in Chinese).
[17] WU Y Q, CHEN X Y, HUANG X, et al. Unsupervised distribution-aware keypoints generation from 3D point clouds[J]. Neural Networks, 2024, 173: 106158.
[18] ZHANG Y S, WANG Y, CHEN X H, et al. Spectral-spatial feature extraction with dual graph autoencoder for hyperspectral image clustering[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(12): 8500-8511.
[19] WANG Y, SOLOMON J. PRNet: self-supervised learning for partial-to-partial registration[C]// The 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 791.
[20] WANG Y, SOLOMON J. Deep closest point: learning representations for point cloud registration[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2019: 3522-3531.
[21] ZENG Y M, QIAN Y, ZHU Z Y, et al. CorrNet3D: unsupervised end-to-end learning of dense correspondence for 3D point clouds[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 6048-6057.
[22] MYRONENKO A, SONG X B, CARREIRA-PERPIÑÁN M Á. Non-rigid point set registration: coherent point drift[C]// The 19th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2006: 1009-1016.
[23] LIU X Y, QI C R, GUIBAS L J, et al. FlowNet3D: learning scene flow in 3D point clouds[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 529-537.
[24] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// The 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010.
[25] SUTSKEVER I, VINYALS O, LE Q V. Sequence to sequence learning with neural networks[C]// The 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 3104-3112.
[26] WU Y Q, HAN F, ZHANG D J, et al. Unsupervised non-rigid point cloud registration based on point-wise displacement learning[J]. Multimedia Tools and Applications, 2024, 83(8): 24589-24607.
[27] WANG Y, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 146.
[28] ZHANG Z Y, SUN J D, DAI Y C, et al. End-to-end learning the partial permutation matrix for robust 3D point cloud registration[C]// The 36th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022: 3399-3407.
[29] SINKHORN R. A relationship between arbitrary positive matrices and stochastic matrices[J]. Canadian Journal of Mathematics, 1966, 18: 303-306.
[30] KUHN H W. The Hungarian method for the assignment problem[J]. Naval Research Logistics Quarterly, 1955, 2(1/2): 83-97.
[31] SONG C Y, WEI J C, LI R B, et al. Unsupervised 3D pose transfer with cross consistency and dual reconstruction[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 10488-10499.
[32] HU X B, ZHANG D J, CHEN J Z, et al. NrtNet: an unsupervised method for 3D non-rigid point cloud registration based on transformer[J]. Sensors, 2022, 22(14): 5128.
[33] GROUEIX T, FISHER M, KIM V G, et al. 3D-CODED: 3D correspondences by deep deformation[C]// The 15th European Conference on Computer Vision. Cham: Springer, 2018: 235-251.
[34] DONATI N, SHARMA A, OVSJANIKOV M. Deep geometric functional maps: robust feature learning for shape correspondence[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 8589-8598.
[35] BEDNÁRIK J, FUA P, SALZMANN M. Learning to reconstruct texture-less deformable surfaces from a single view[C]// 2018 International Conference on 3D Vision. New York: IEEE Press, 2018: 606-615.
[36] BOOKSTEIN F L. Principal warps: thin-plate splines and the decomposition of deformations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 11(6): 567-585.
[37] LI X, WANG L J, FANG Y, et al. PC-Net: unsupervised point correspondence learning with neural networks[C]// 2019 International Conference on 3D Vision. New York: IEEE Press, 2019: 145-154.
[38] LI Y, HARADA T. Lepard: learning partial point cloud matching in rigid and deformable scenes[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5544-5554.
[39] LI C, XUE A S, XIA C K, et al. Learning sufficient correlations among points for 3D non-rigid point cloud registration[C]// The 9th International Conference on Control, Automation and Robotics. New York: IEEE Press, 2023: 342-349.