Journal of Graphics ›› 2024, Vol. 45 ›› Issue (1): 219-229.DOI: 10.11996/JG.j.2095-302X.2024010219
• Computer Graphics and Virtual Reality •
HAN Yazhen1, YIN Mengxiao1,2, MA Weizhao1, YANG Shigeng1, HU Jinfei1, ZHU Congyang1
Received: 2023-06-29
Accepted: 2023-10-27
Online: 2024-02-29
Published: 2024-02-29
Contact: YIN Mengxiao (1978-), associate professor, Ph.D. Her main research interests cover computer graphics and digital geometry processing.
About author: HAN Yazhen (1997-), master student. His main research interest covers point cloud processing. E-mail: 2013301011@st.gxu.edu.cn
HAN Yazhen, YIN Mengxiao, MA Weizhao, YANG Shigeng, HU Jinfei, ZHU Congyang. DGOA: point cloud upsampling based on dynamic graph and offset attention[J]. Journal of Graphics, 2024, 45(1): 219-229.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2024010219
Fig. 1 DGOA architecture, including three modules: local feature extraction (LFE), global feature extraction (GFE), and coordinate reconstruction (CR)
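The offset attention named in the title follows the mechanism introduced by PCT [17]: the network transforms the *offset* between self-attention features and the input, then adds it back residually. The sketch below is only an illustration of that idea in plain NumPy; the single attention head, the standard scaled softmax (PCT itself uses a different normalisation), and the ReLU-only output transform are simplifying assumptions, not DGOA's actual layers.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def offset_attention(x, wq, wk, wv, w_out):
    """Offset attention over per-point features x of shape (N, C).

    Computes self-attention features, takes their offset from the
    input, transforms the offset, and adds it back to the input.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)  # (N, N)
    f_sa = attn @ v            # self-attention features, (N, C)
    offset = x - f_sa          # the "offset" the mechanism is named for
    return x + np.maximum(offset @ w_out, 0.0)  # residual + ReLU transform
```

Because the residual carries the input through unchanged, the attention branch only has to learn a correction, which tends to stabilise training on unordered point sets.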
Network | CD↓ | HD↓ | EMD↓ | P2F(avg)↓ | P2F(std)↓
---|---|---|---|---|---
PU-Net [3] | 0.556 | 4.750 | 40.146 | 4.678 | 5.946
MPU [10] | 0.298 | 4.700 | 30.534 | 2.855 | 5.180
PU-GAN [11] | 0.280 | 4.640 | 26.243 | 2.330 | 4.431
PU-GCN [12] | 0.258 | 1.885 | 24.460 | 2.721 | 3.542
Dis-PU [15] | 0.260 | 2.104 | 25.312 | 2.480 | 3.521
SSAS [38] | 0.264 | 2.320 | 25.027 | 2.625 | 3.462
Grad-PU [52] | 0.245 | 2.369 | 23.348 | 1.893 | 2.875
Ours | 0.236 | 2.003 | 21.458 | 2.437 | 3.259

Table 1 Quantitative results on PU-GAN dataset
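The CD and HD columns above are the standard Chamfer and Hausdorff point-set distances. A minimal brute-force NumPy sketch of both is given below for illustration; note that published tables typically report scaled values (and some papers use squared distances), so these raw numbers are not directly comparable to the table.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3)."""
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # Average nearest-neighbour distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def hausdorff_distance(p, q):
    """Symmetric Hausdorff distance: worst-case nearest-neighbour error."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The O(NM) pairwise matrix is fine for evaluation-sized patches; large clouds would use a KD-tree instead.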
Fig. 6 Qualitative results on PU-GAN dataset ((a) Input; (b) PU-Net[3]; (c) MPU[10]; (d) PU-GAN[11]; (e) PU-GCN[12]; (f) Dis-PU[15]; (g) SSAS[38]; (h) Grad-PU[52]; (i) Ours; (j) GT)
Network | CD↓ | HD↓ | EMD↓ | P2F(avg)↓ | P2F(std)↓
---|---|---|---|---|---
PU-Net [3] | 1.155 | 15.170 | 91.487 | 4.834 | 6.799
MPU [10] | 0.935 | 13.327 | 77.401 | 3.551 | 5.970
PU-GAN [11] | 0.873 | 12.146 | 68.534 | 3.189 | 5.682
PU-GCN [12] | 0.585 | 7.577 | 55.570 | 2.499 | 4.004
Dis-PU [15] | 0.541 | 8.348 | 53.687 | 2.964 | 5.209
SSAS [38] | 0.613 | 7.451 | 68.970 | 2.474 | 6.088
Grad-PU [52] | 0.403 | 3.743 | 55.487 | 1.480 | 2.468
Ours | 0.413 | 3.184 | 47.452 | 2.364 | 2.413

Table 2 Quantitative results on PU1K dataset
Fig. 7 Qualitative results on PU1K dataset ((a) Input; (b) PU-Net[3]; (c) MPU[10]; (d) PU-GAN[11]; (e) PU-GCN[12]; (f) Dis-PU[15]; (g) SSAS[38]; (h) Grad-PU[52]; (i) Ours; (j) GT)
Network | CD↓ | HD↓ | EMD↓ | P2F(avg)↓ | P2F(std)↓
---|---|---|---|---|---
Model 1 | 0.827 | 17.622 | 54.275 | 7.080 | 8.666
Model 2 | 0.785 | 16.560 | 55.281 | 6.147 | 8.048
Model 3 | 0.511 | 4.169 | 49.188 | 3.890 | 3.961
Ours | 0.413 | 3.184 | 47.452 | 2.364 | 2.413

Table 3 Quantitative results of ablation experiments
Metric | K=10 | K=15 | K=20 | K=25 | K=30
---|---|---|---|---|---
CD↓ | 0.589 | 0.532 | 0.413 | 0.489 | 0.603

Table 4 Effect of K on the dynamic graph
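Table 4 varies K, the neighbourhood size of the k-nearest-neighbour graph that the dynamic-graph module rebuilds in feature space. The sketch below illustrates the general idea in the spirit of DGCNN [14]; the function names and the brute-force distance computation are illustrative assumptions, not DGOA's implementation.

```python
import numpy as np

def knn_graph(feats, k):
    """Indices of the k nearest neighbours of each point, shape (N, k).

    Recomputing this per layer on *learned* features, rather than once
    on raw coordinates, is what makes the graph "dynamic".
    """
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]

def edge_features(feats, idx):
    """EdgeConv-style edge features [x_i, x_j - x_i], shape (N, k, 2C)."""
    center = np.repeat(feats[:, None, :], idx.shape[1], axis=1)  # (N, k, C)
    neigh = feats[idx]                                           # (N, k, C)
    return np.concatenate([center, neigh - center], axis=-1)
```

The table's U-shaped CD curve is intuitive under this construction: too small a K starves each point of context, while too large a K pulls in neighbours from unrelated surface regions.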
Network | Params/KB ↓ | Time/s ↓
---|---|---
PU-Net [3] | 814.3 | 0.566
Dis-PU [15] | 1047.0 | 1.604
Grad-PU [52] | 67.1 | 0.384
Ours | 2802.0 | 0.987

Table 5 Comparison of parameter count and inference time
[1] | LUO L Q, TANG L L, ZHOU W Y, et al. PU-EVA: an edge-vector based approximation solution for flexible-scale point cloud upsampling[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2022: 16188-16197. |
[2] | LIU Y L, WANG Y M, LIU Y. Refine-PU: a graph convolutional point cloud upsampling network using spatial refinement[C]// 2022 IEEE International Conference on Visual Communications and Image Processing. New York: IEEE Press, 2023: 1-5. |
[3] | YU L Q, LI X Z, FU C W, et al. PU-net: point cloud upsampling network[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 2790-2799. |
[4] | HUANG H, WU S H, GONG M L, et al. Edge-aware point set resampling[J]. ACM Transactions on Graphics, 2013, 32(1): 9:1-9:12. |
[5] | CHARLES R Q, HAO S, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 77-85. |
[6] | LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 105-114. |
[7] | SHI W Z, CABALLERO J, HUSZÁR F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 1874-1883. |
[8] | WU H K, ZHANG J G, HUANG K Q. Point cloud super resolution with adversarial residual graph networks[EB/OL]. [2023-04-19]. https://arxiv.org/abs/1908.02111.pdf. |
[9] | QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 5105-5114. |
[10] | WANG Y F, WU S H, HUANG H, et al. Patch-based progressive 3D point set upsampling[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 5951-5960. |
[11] | LI R H, LI X Z, FU C W, et al. PU-GAN: a point cloud upsampling adversarial network[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2020: 7202-7211. |
[12] | QIAN G C, ABUALSHOUR A, LI G H, et al. PU-GCN: point cloud upsampling using graph convolutional networks[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 11678-11687. |
[13] | SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]// 2015 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2015: 1-9. |
[14] | WANG Y E, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 1-12. |
[15] | LI R H, LI X Z, HENG P A, et al. Point cloud upsampling via disentangled refinement[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 344-353. |
[16] | QIU S, ANWAR S, BARNES N. PU-transformer: point cloud upsampling transformer[M]//Computer Vision - ACCV 2022. Cham: Springer Nature Switzerland, 2023: 326-343. |
[17] | GUO M H, CAI J X, LIU Z N, et al. PCT: point cloud transformer[J]. Computational Visual Media, 2021, 7(2): 187-199. |
[18] | AUBRY M, SCHLICKEWEI U, CREMERS D. The wave kernel signature: a quantum mechanical approach to shape analysis[C]// 2011 IEEE International Conference on Computer Vision Workshops. New York: IEEE Press, 2012: 1626-1633. |
[19] | CHEN D Y, TIAN X P, SHEN Y T, et al. On visual similarity based 3D model retrieval[C]// Computer graphics forum. Oxford, UK: Blackwell Publishing, Inc, 2003, 22(3): 223-232. |
[20] | SU H, MAJI S, KALOGERAKIS E, et al. Multi-view convolutional neural networks for 3D shape recognition[C]// 2015 IEEE International Conference on Computer Vision. New York: IEEE Press, 2016: 945-953. |
[21] | MATURANA D, SCHERER S. VoxNet: a 3D Convolutional Neural Network for real-time object recognition[C]// 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2015: 922-928. |
[22] | LI Y Y, BU R, SUN M C, et al. PointCNN: convolution on X-transformed points[EB/OL]. [2023-04-10]. https://proceedings.neurips.cc/paper_files/paper/2018/file/f5f8590cd58a54e94377e6ae2eded4d9-Paper.pdf. |
[23] | THOMAS H, QI C R, DESCHAUD J E, et al. KPConv: flexible and deformable convolution for point clouds[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2020: 6410-6419. |
[24] | LI J X, CHEN B M, LEE G H. SO-net: self-organizing network for point cloud analysis[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 9397-9406. |
[25] | SHEN Y R, FENG C, YANG Y Q, et al. Mining point cloud local structures by kernel correlation and graph pooling[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 4548-4557. |
[26] | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010. |
[27] | WANG F, JIANG M Q, QIAN C, et al. Residual attention network for image classification[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 6450-6458. |
[28] | ZHAO Y F, HUI L, XIE J. SSPU-net: self-supervised point cloud upsampling via differentiable rendering[C]// The 29th ACM International Conference on Multimedia. New York: ACM, 2021: 2214-2223. |
[29] | ZHANG Y, ZHAO W H, SUN B, et al. Point cloud upsampling algorithm: a systematic review[J]. Algorithms, 2022, 15(4): 124. |
[30] | YU L Q, LI X Z, FU C W, et al. EC-net: an edge-aware point set consolidation network[C]// European Conference on Computer Vision. Cham: Springer, 2018: 398-414. |
[31] | YE S Q, CHEN D D, HAN S F, et al. Meta-PU: an arbitrary-scale upsampling network for point cloud[J]. IEEE Transactions on Visualization and Computer Graphics, 2022, 28(9): 3206-3218. |
[32] | HU X C, MU H Y, ZHANG X Y, et al. Meta-SR: a magnification-arbitrary network for super-resolution[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 1575-1584. |
[33] | LIU H, YUAN H, HOU J H, et al. PUFA-GAN: a frequency-aware generative adversarial network for 3D point cloud upsampling[J]. IEEE Transactions on Image Processing, 2022, 31: 7389-7402. |
[34] | QIAN Y E, HOU J H, KWONG S, et al. PUGeo-net: a geometry-centric network for 3D point cloud upsampling[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 752-769. |
[35] | QIAN Y, HOU J H, KWONG S, et al. Deep magnification-flexible upsampling over 3D point clouds[J]. IEEE Transactions on Image Processing, 2021, 30: 8354-8367. |
[36] | FENG W Q, LI J, CAI H R, et al. Neural points: point cloud representation with neural fields for arbitrary upsampling[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 18612-18621. |
[37] | ZHOU K Y, DONG M, ARSLANTURK S. “zero-shot” point cloud upsampling[C]// 2022 IEEE International Conference on Multimedia and Expo. New York: IEEE Press, 2022: 1-6. |
[38] | ZHAO W B, LIU X M, ZHONG Z W, et al. Self-supervised arbitrary-scale point clouds upsampling via implicit neural representation[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 1989-1997. |
[39] | LIU X H, LIU X C, LIU Y S, et al. SPU-net: self-supervised point cloud upsampling by coarse-to-fine reconstruction with self-projection optimization[J]. IEEE Transactions on Image Processing, 2022, 31: 4213-4226. |
[40] | LONG C, ZHANG W X, LI R H, et al. PC2-PU: patch correlation and point correlation for effective point cloud upsampling[C]// The 30th ACM International Conference on Multimedia. New York: ACM, 2022: 2191-2201. |
[41] | BAI Y C, WANG X G, JR M H A, et al. BIMS-PU: bi-directional and multi-scale point cloud upsampling[J]. IEEE Robotics and Automation Letters, 2022, 7(3): 7447-7454. |
[42] | SHARMA R, SCHWANDT T, KUNERT C, et al. Point cloud upsampling and normal estimation using deep learning for robust surface reconstruction[EB/OL]. [2023-04-19]. https://arxiv.org/abs/2102.13391.pdf. |
[43] | ZHANG T, FILIN S. Deep-learning-based point cloud upsampling of natural entities and scenes[J]. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2022, XLIII-B2-2022: 321-327. |
[44] | LI Z Z, LI G, LI T H, et al. Semantic point cloud upsampling[J]. IEEE Transactions on Multimedia, 2023, 25: 3432-3442. |
[45] | RONNEBERGER O, FISCHER P, BROX T. U-net: convolutional networks for biomedical image segmentation[M]// Lecture Notes in Computer Science. Cham: Springer International Publishing, 2015: 234-241. |
[46] | BRUNA J, ZAREMBA W, SZLAM A, et al. Spectral networks and locally connected networks on graphs[EB/OL]. [2023-04-19]. https://arxiv.org/abs/1312.6203.pdf. |
[47] | YANG Y Q, FENG C, SHEN Y R, et al. FoldingNet: point cloud auto-encoder via deep grid deformation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 206-215. |
[48] | MIRZA M, OSINDERO S. Conditional generative adversarial nets[EB/OL]. [2023-04-19]. https://arxiv.org/abs/1411.1784.pdf. |
[49] | GROUEIX T, FISHER M, KIM V G, et al. A papier-mâché approach to learning 3D surface generation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 216-224. |
[50] | DE DEUGE M, QUADROS A, HUNG C, et al. Unsupervised feature learning for classification of outdoor 3D Scans[EB/OL]. [2023-04-19]. https://www.researchgate.net/publication/288425434_Unsupervised_feature_learning_for_classification_of_outdoor_3D_Scans. |
[51] | CHANG A X, FUNKHOUSER T, GUIBAS L, et al. ShapeNet: an information-rich 3D model repository[EB/OL]. [2023-04-19]. https://arxiv.org/abs/1512.03012.pdf. |
[52] | HE Y, TANG D H, ZHANG Y D, et al. Grad-PU: arbitrary-scale point cloud upsampling via gradient descent with learned distance functions[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 5354-5363. |