Journal of Graphics ›› 2024, Vol. 45 ›› Issue (1): 219-229. DOI: 10.11996/JG.j.2095-302X.2024010219
Corresponding author: YIN Mengxiao (1978-), female, associate professor, Ph.D. Her main research interests cover computer graphics and digital geometry processing. E-mail: ymx@gxu.edu.cn
HAN Yazhen1, YIN Mengxiao1,2, MA Weizhao1, YANG Shigeng1, HU Jinfei1, ZHU Congyang1
Received: 2023-06-29
Accepted: 2023-10-27
Published: 2024-02-29
Online: 2024-02-29
First author: HAN Yazhen (1997-), master student. His main research interest covers point cloud processing. E-mail: 2013301011@st.gxu.edu.cn
Abstract: Point clouds obtained directly from 3D scanning devices are often sparse, non-uniform, and noisy, so point cloud upsampling plays an increasingly important role in point cloud reconstruction, rendering, and related fields. This paper proposes DGOA, a new point cloud upsampling network based on dynamic graphs and offset attention, consisting of three modules: local feature extraction (LFE), global feature extraction (GFE), and coordinate reconstruction (CR). LFE extracts neighborhood information with a multi-layer structure; each layer builds a dynamic graph based on feature similarity, which adaptively groups points in feature space, enlarges the receptive field, captures long-range semantic information, and better models the local geometry of the point cloud. GFE applies an offset attention based on the Laplacian operator so that every point obtains global information about the point cloud, keeping the details of the generated points consistent with the original point cloud and reducing the influence of noise. CR borrows the FoldingNet operation to avoid clustering of the generated points. In addition, the whole network is independent of the order of points in the input point cloud, i.e., it is permutation-invariant. Quantitative and qualitative experiments on multiple datasets show that the proposed method outperforms other methods and exhibits good generalization and stability.
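The dynamic-graph grouping the abstract describes selects each point's neighbors by feature similarity rather than 3D position, so the graph is rebuilt at every layer as features evolve. A minimal NumPy sketch of that graph construction (an illustration only, not the paper's implementation; the learned EdgeConv-style aggregation on top of the graph is omitted):

```python
import numpy as np

def knn_feature_graph(feats, k=20):
    """Build a dynamic kNN graph in feature space.

    feats: (N, C) per-point feature matrix.
    Returns an (N, k) index array: the k nearest neighbors of each
    point under Euclidean distance between feature vectors, so the
    graph adapts as the features change from layer to layer.
    """
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(feats ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    return np.argsort(d2, axis=1)[:, :k]  # (N, k) neighbor indices

# Toy example: 6 points with 3-D features, 2 neighbors each
feats = np.random.default_rng(0).normal(size=(6, 3))
idx = knn_feature_graph(feats, k=2)
print(idx.shape)  # (6, 2)
```

Because the distances are computed on features rather than coordinates, semantically similar but spatially distant points can become neighbors, which is what enlarges the receptive field.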
HAN Yazhen, YIN Mengxiao, MA Weizhao, YANG Shigeng, HU Jinfei, ZHU Congyang. DGOA: point cloud upsampling based on dynamic graph and offset attention[J]. Journal of Graphics, 2024, 45(1): 219-229.
Fig. 1 DGOA architecture, including three modules: local feature extraction (LFE), global feature extraction (GFE), and coordinate reconstruction (CR)
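The GFE module in Fig. 1 uses offset attention which, following PCT [17], subtracts the self-attention output from the input features — an offset analogous to applying a graph Laplacian (I − A)x with A a normalized attention matrix — before a residual connection. A minimal NumPy sketch under that reading; the normalization details and the learned transform the full module applies to the offset are simplified away:

```python
import numpy as np

def offset_attention(x, Wq, Wk, Wv):
    """Offset attention on point features, PCT-style.

    x: (N, C) features; Wq, Wk, Wv: (C, C) projection matrices.
    The self-attention output is subtracted from the input, giving a
    Laplacian-like offset, which is then added back residually.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = q @ k.T                                    # (N, N) raw scores
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)           # row-normalized weights
    sa = attn @ v                                     # standard self-attention
    offset = x - sa                                   # (I - A)-style offset
    return x + offset                                 # residual connection

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 4))
W = [rng.normal(size=(4, 4)) * 0.1 for _ in range(3)]
y = offset_attention(x, *W)
print(y.shape)  # (5, 4)
```

Since every attention row spans all N points, each output feature mixes in global information about the whole point cloud, which is what lets GFE suppress noise that local neighborhoods alone cannot explain.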
Fig. 5 Details of coordinate reconstruction, mapping the point cloud back from the feature space to the coordinate space
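The CR module borrows the FoldingNet operation: each point feature is duplicated r times and each copy is concatenated with a distinct 2D grid code before being mapped to 3D, which discourages the r generated points from collapsing onto one another. A hedged sketch of that folding step (function names, grid range, and the toy "MLP" are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

def fold(feats, r, mlp):
    """FoldingNet-style coordinate reconstruction.

    feats: (N, C) point features; r: upsampling rate.
    Each feature is repeated r times and concatenated with one of r
    distinct 2D grid codes, so the copies of one point are steered to
    different locations instead of clustering together.
    mlp: callable mapping (N*r, C+2) -> (N*r, 3) coordinates.
    """
    n, c = feats.shape
    g = int(np.ceil(np.sqrt(r)))
    grid = np.stack(np.meshgrid(np.linspace(-0.2, 0.2, g),
                                np.linspace(-0.2, 0.2, g)), -1).reshape(-1, 2)[:r]
    rep = np.repeat(feats, r, axis=0)                  # (N*r, C)
    codes = np.tile(grid, (n, 1))                      # (N*r, 2)
    return mlp(np.concatenate([rep, codes], axis=1))   # (N*r, 3)

# Toy "MLP": a fixed random projection down to 3-D coordinates
rng = np.random.default_rng(2)
W = rng.normal(size=(8 + 2, 3))
out = fold(rng.normal(size=(16, 8)), r=4, mlp=lambda z: z @ W)
print(out.shape)  # (64, 3)
```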
| Network | CD↓ | HD↓ | EMD↓ | P2F(avg)↓ | P2F(std)↓ |
|---|---|---|---|---|---|
| PU-Net[3] | 0.556 | 4.750 | 40.146 | 4.678 | 5.946 |
| MPU[10] | 0.298 | 4.700 | 30.534 | 2.855 | 5.180 |
| PU-GAN[11] | 0.280 | 4.640 | 26.243 | 2.330 | 4.431 |
| PU-GCN[12] | 0.258 | 1.885 | 24.460 | 2.721 | 3.542 |
| Dis-PU[15] | 0.260 | 2.104 | 25.312 | 2.480 | 3.521 |
| SSAS[38] | 0.264 | 2.320 | 25.027 | 2.625 | 3.462 |
| Grad-PU[52] | 0.245 | 2.369 | 23.348 | 1.893 | 2.875 |
| Ours | 0.236 | 2.003 | 21.458 | 2.437 | 3.259 |

Table 1 Quantitative results on the PU-GAN dataset
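For reference, the CD column reports Chamfer distance between the upsampled cloud and the ground truth. A common symmetric definition can be sketched as follows; note that conventions vary (squared vs. unsquared distances, sum vs. mean, dataset-specific scaling), so only the relative ordering in the tables is meaningful:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance (CD) between two point sets.

    p: (N, 3), q: (M, 3). For each point, take the squared distance
    to its nearest neighbor in the other set, and average both ways.
    """
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.zeros((4, 3))
b = np.zeros((5, 3))
print(chamfer_distance(a, b))  # 0.0
```

HD (Hausdorff distance) instead takes the worst-case nearest-neighbor distance, which is why it penalizes outliers more heavily than CD.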
Fig. 6 Qualitative results on the PU-GAN dataset ((a) Input; (b) PU-Net[3]; (c) MPU[10]; (d) PU-GAN[11]; (e) PU-GCN[12]; (f) Dis-PU[15]; (g) SSAS[38]; (h) Grad-PU[52]; (i) Ours; (j) GT)
| Network | CD↓ | HD↓ | EMD↓ | P2F(avg)↓ | P2F(std)↓ |
|---|---|---|---|---|---|
| PU-Net[3] | 1.155 | 15.170 | 91.487 | 4.834 | 6.799 |
| MPU[10] | 0.935 | 13.327 | 77.401 | 3.551 | 5.970 |
| PU-GAN[11] | 0.873 | 12.146 | 68.534 | 3.189 | 5.682 |
| PU-GCN[12] | 0.585 | 7.577 | 55.570 | 2.499 | 4.004 |
| Dis-PU[15] | 0.541 | 8.348 | 53.687 | 2.964 | 5.209 |
| SSAS[38] | 0.613 | 7.451 | 68.970 | 2.474 | 6.088 |
| Grad-PU[52] | 0.403 | 3.743 | 55.487 | 1.480 | 2.468 |
| Ours | 0.413 | 3.184 | 47.452 | 2.364 | 2.413 |

Table 2 Quantitative results on the PU1K dataset
Fig. 7 Qualitative results on the PU1K dataset ((a) Input; (b) PU-Net[3]; (c) MPU[10]; (d) PU-GAN[11]; (e) PU-GCN[12]; (f) Dis-PU[15]; (g) SSAS[38]; (h) Grad-PU[52]; (i) Ours; (j) GT)
| Network | CD↓ | HD↓ | EMD↓ | P2F(avg)↓ | P2F(std)↓ |
|---|---|---|---|---|---|
| Model 1 | 0.827 | 17.622 | 54.275 | 7.080 | 8.666 |
| Model 2 | 0.785 | 16.560 | 55.281 | 6.147 | 8.048 |
| Model 3 | 0.511 | 4.169 | 49.188 | 3.890 | 3.961 |
| Ours | 0.413 | 3.184 | 47.452 | 2.364 | 2.413 |

Table 3 Quantitative results of ablation experiments
Fig. 9 Qualitative results of ablation experiments ((a) Input; (b) Model 1; (c) Model 2; (d) Model 3; (e) Ours; (f) GT)
| Model | K = 10 | K = 15 | K = 20 | K = 25 | K = 30 |
|---|---|---|---|---|---|
| CD↓ | 0.589 | 0.532 | 0.413 | 0.489 | 0.603 |

Table 4 Effect of the neighborhood size K of the dynamic graph
| Network | Params/KB↓ | Time/s↓ |
|---|---|---|
| PU-Net[3] | 814.3 | 0.566 |
| Dis-PU[15] | 1047.0 | 1.604 |
| Grad-PU[52] | 67.1 | 0.384 |
| Ours | 2802.0 | 0.987 |

Table 5 Comparison of parameter count and inference time
[1] | LUO L Q, TANG L L, ZHOU W Y, et al. PU-EVA: an edge-vector based approximation solution for flexible-scale point cloud upsampling[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2022: 16188-16197. |
[2] | LIU Y L, WANG Y M, LIU Y. Refine-PU: a graph convolutional point cloud upsampling network using spatial refinement[C]// 2022 IEEE International Conference on Visual Communications and Image Processing. New York: IEEE Press, 2023: 1-5. |
[3] | YU L Q, LI X Z, FU C W, et al. PU-net: point cloud upsampling network[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 2790-2799. |
[4] | HUANG H, WU S H, GONG M L, et al. Edge-aware point set resampling[J]. ACM Transactions on Graphics, 2013, 32(1): 9:1-9:12. |
[5] | CHARLES R Q, HAO S, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 77-85. |
[6] | LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 105-114. |
[7] | SHI W Z, CABALLERO J, HUSZÁR F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 1874-1883. |
[8] | WU H K, ZHANG J G, HUANG K Q. Point cloud super resolution with adversarial residual graph networks[EB/OL]. [2023-04-19]. https://arxiv.org/abs/1908.02111.pdf. |
[9] | QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 5105-5114. |
[10] | WANG Y F, WU S H, HUANG H, et al. Patch-based progressive 3D point set upsampling[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 5951-5960. |
[11] | LI R H, LI X Z, FU C W, et al. PU-GAN: a point cloud upsampling adversarial network[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2020: 7202-7211. |
[12] | QIAN G C, ABUALSHOUR A, LI G H, et al. PU-GCN: point cloud upsampling using graph convolutional networks[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 11678-11687. |
[13] | SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]// 2015 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2015: 1-9. |
[14] | WANG Y E, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 1-12. |
[15] | LI R H, LI X Z, HENG P A, et al. Point cloud upsampling via disentangled refinement[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 344-353. |
[16] | QIU S, ANWAR S, BARNES N. PU-transformer: point cloud upsampling transformer[M]//Computer Vision - ACCV 2022. Cham: Springer Nature Switzerland, 2023: 326-343. |
[17] | GUO M H, CAI J X, LIU Z N, et al. PCT: point cloud transformer[J]. Computational Visual Media, 2021, 7(2): 187-199. |
[18] | AUBRY M, SCHLICKEWEI U, CREMERS D. The wave kernel signature: a quantum mechanical approach to shape analysis[C]// 2011 IEEE International Conference on Computer Vision Workshops. New York: IEEE Press, 2012: 1626-1633. |
[19] | CHEN D Y, TIAN X P, SHEN Y T, et al. On visual similarity based 3D model retrieval[C]// Computer graphics forum. Oxford, UK: Blackwell Publishing, Inc, 2003, 22(3): 223-232. |
[20] | SU H, MAJI S, KALOGERAKIS E, et al. Multi-view convolutional neural networks for 3D shape recognition[C]// 2015 IEEE International Conference on Computer Vision. New York: IEEE Press, 2016: 945-953. |
[21] | MATURANA D, SCHERER S. VoxNet: a 3D Convolutional Neural Network for real-time object recognition[C]// 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2015: 922-928. |
[22] | LI Y Y, BU R, SUN M C, et al. PointCNN: convolution on X-transformed points[EB/OL]. [2023-04-10]. https://proceedings.neurips.cc/paper_files/paper/2018/file/f5f8590cd58a54e94377e6ae2eded4d9-Paper.pdf. |
[23] | THOMAS H, QI C R, DESCHAUD J E, et al. KPConv: flexible and deformable convolution for point clouds[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2020: 6410-6419. |
[24] | LI J X, CHEN B M, LEE G H. SO-net: self-organizing network for point cloud analysis[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 9397-9406. |
[25] | SHEN Y R, FENG C, YANG Y Q, et al. Mining point cloud local structures by kernel correlation and graph pooling[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 4548-4557. |
[26] | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010. |
[27] | WANG F, JIANG M Q, QIAN C, et al. Residual attention network for image classification[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 6450-6458. |
[28] | ZHAO Y F, HUI L, XIE J. SSPU-net: self-supervised point cloud upsampling via differentiable rendering[C]// The 29th ACM International Conference on Multimedia. New York: ACM, 2021: 2214-2223. |
[29] |
ZHANG Y, ZHAO W H, SUN B, et al. Point cloud upsampling algorithm: a systematic review[J]. Algorithms, 2022, 15(4): 124.
DOI URL |
[30] | YU L Q, LI X Z, FU C W, et al. EC-net: an edge-aware point set consolidation network[C]// European Conference on Computer Vision. Cham: Springer, 2018: 398-414. |
[31] |
YE S Q, CHEN D D, HAN S F, et al. Meta-PU: an arbitrary-scale upsampling network for point cloud[J]. IEEE Transactions on Visualization and Computer Graphics, 2022, 28(9): 3206-3218.
DOI URL |
[32] | HU X C, MU H Y, ZHANG X Y, et al. Meta-SR: a magnification-arbitrary network for super-resolution[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 1575-1584. |
[33] |
LIU H, YUAN H, HOU J H, et al. PUFA-GAN: a frequency-aware generative adversarial network for 3D point cloud upsampling[J]. IEEE Transactions on Image Processing, 2022, 31: 7389-7402.
DOI URL |
[34] | QIAN Y E, HOU J H, KWONG S, et al. PUGeo-net: a geometry-centric network for 3D point cloud upsampling[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 752-769. |
[35] |
QIAN Y, HOU J H, KWONG S, et al. Deep magnification-flexible upsampling over 3D point clouds[J]. IEEE Transactions on Image Processin, 2021, 30: 8354-8367.
DOI URL |
[36] | FENG W Q, LI J, CAI H R, et al. Neural points: point cloud representation with neural fields for arbitrary upsampling[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 18612-18621. |
[37] | ZHOU K Y, DONG M, ARSLANTURK S. “zero-shot” point cloud upsampling[C]// 2022 IEEE International Conference on Multimedia and Expo. New York: IEEE Press, 2022: 1-6. |
[38] | ZHAO W B, LIU X M, ZHONG Z W, et al. Self-supervised arbitrary-scale point clouds upsampling via implicit neural representation[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 1989-1997. |
[39] |
LIU X H, LIU X C, LIU Y S, et al. SPU-net: self-supervised point cloud upsampling by coarse-to-fine reconstruction with self-projection optimization[J]. IEEE Transactions on Image Processing, 2022, 31: 4213-4226.
DOI PMID |
[40] | LONG C, ZHANG W X, LI R H, et al. PC2-PU: patch correlation and point correlation for effective point cloud upsampling[C]// The 30th ACM International Conference on Multimedia. New York: ACM, 2022: 2191-2201. |
[41] |
BAI Y C, WANG X G, JR M H A, et al. BIMS-PU: Bi-directional and multi-scale point cloud upsampling[J]. IEEE Robotics and Automation Letters, 2022, 7(3): 7447-7454.
DOI URL |
[42] | SHARMA R, SCHWANDT T, KUNERT C, et al. Point cloud upsampling and normal estimation using deep learning for robust surface reconstruction[EB/OL]. [2023-04-19]. https://arxiv.org/abs/2102.13391.pdf. |
[43] | ZHANG T, FILIN S. Deep-learning-based point cloud upsampling of natural entities and scenes[J]. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2022, XLIII-B2-2022: 321-327. |
[44] |
LI Z Z, LI G, LI T H, et al. Semantic point cloud upsampling[J]. IEEE Transactions on Multimedia, 2023, 25: 3432-3442.
DOI URL |
[45] | RONNEBERGER O, FISCHER P, BROX T. U-net: convolutional networks for biomedical image segmentation[M]// Lecture Notes in Computer Science. Cham: Springer International Publishing, 2015: 234-241. |
[46] | BRUNA J, ZAREMBA W, SZLAM A, et al. Spectral networks and locally connected networks on graphs[EB/OL]. [2023-04-19]. https://arxiv.org/abs/1312.6203.pdf. |
[47] | YANG Y Q, FENG C, SHEN Y R, et al. FoldingNet: point cloud auto-encoder via deep grid deformation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 206-215. |
[48] | MIRZA M, OSINDERO S. Conditional generative adversarial nets[EB/OL]. [2023-04-19]. https://arxiv.org/abs/1411.1784.pdf. |
[49] | GROUEIX T, FISHER M, KIM V G, et al. A papier-mâché approach to learning 3D surface generation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 216-224. |
[50] | DE DEUGE M, QUADROS A, HUNG C, et al. Unsupervised feature learning for classification of outdoor 3D Scans[EB/OL]. [2023-04-19]. https://www.researchgate.net/publication/288425434_Unsupervised_feature_learning_for_classification_of_outdoor_3D_Scans. |
[51] | CHANG A X, FUNKHOUSER T, GUIBAS L, et al. ShapeNet: an information-rich 3D model repository[EB/OL]. [2023-04-19]. https://arxiv.org/abs/1512.03012.pdf. |
[52] | HE Y, TANG D H, ZHANG Y D, et al. Grad-PU: arbitrary-scale point cloud upsampling via gradient descent with learned distance functions[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 5354-5363. |