Journal of Graphics ›› 2025, Vol. 46 ›› Issue (4): 807-817.DOI: 10.11996/JG.j.2095-302X.2025040807
• Computer Graphics and Virtual Reality •
GUO Mingce1,2, HUANG Bei1, CHENG Lechao3, WANG Zhangye1,2
Received: 2024-08-01
Revised: 2025-03-10
Online: 2025-08-30
Published: 2025-08-11
Contact: WANG Zhangye
About author: GUO Mingce (2001-), master student. His main research interests cover computer vision and computer graphics. E-mail: guomingce@zju.edu.cn
GUO Mingce, HUANG Bei, CHENG Lechao, WANG Zhangye. Acceleration method for neural implicit surface reconstruction with joint point cloud priors[J]. Journal of Graphics, 2025, 46(4): 807-817.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2025040807
Fig. 2 Adaptive pixel sampling heatmap ((a) Coarsening coefficient of 1 pixel; (b) Coarsening coefficient of 2 pixels; (c) Coarsening coefficient of 4 pixels; (d) Coarsening coefficient of 8 pixels)
Fig. 3 Adaptive pixel sampling with Dheatmap ((a) Coarsening coefficient of 1 pixel; (b) Coarsening coefficient of 2 pixels; (c) Coarsening coefficient of 4 pixels; (d) Coarsening coefficient of 8 pixels)
Fig. 4 Adaptive pixel sampling with Dfair ((a) Coarsening coefficient of 1 pixel; (b) Coarsening coefficient of 2 pixels; (c) Coarsening coefficient of 4 pixels; (d) Coarsening coefficient of 8 pixels)
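The figures above visualize pixel sampling driven by a coarsened error heatmap. The paper's exact rule is not reproduced here; the following is only a minimal sketch, assuming the heatmap is block-averaged by the coarsening coefficient `c`, cells are drawn with probability proportional to their averaged error, and a uniform jitter inside each chosen cell yields the final pixel. The function name `sample_pixels` and all parameters are illustrative, not the authors' API.

```python
import numpy as np

def sample_pixels(heatmap, c, n_rays, rng=None):
    """Sample pixel coordinates with probability proportional to a
    heatmap coarsened into c-by-c blocks. Illustrative sketch only."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = heatmap.shape
    # Block-average the heatmap into an (h//c, w//c) grid of cells.
    coarse = heatmap[: h // c * c, : w // c * c]
    coarse = coarse.reshape(h // c, c, w // c, c).mean(axis=(1, 3))
    probs = coarse.ravel() / coarse.sum()
    # Draw cells proportionally to their averaged error.
    cells = rng.choice(probs.size, size=n_rays, p=probs)
    # Uniform jitter inside each chosen cell gives the pixel coordinates.
    ys = (cells // (w // c)) * c + rng.integers(0, c, n_rays)
    xs = (cells % (w // c)) * c + rng.integers(0, c, n_rays)
    return np.stack([ys, xs], axis=1)
```

A larger `c` trades per-pixel precision for smoother, lower-variance sampling, which matches the progression shown in panels (a) through (d).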
Table 1 The total number of sampling points participating in training for each method

| Method | Total sampling points |
|---|---|
| NeRF | |
| NeuS | |
| Ours | |
Table 2 Training parameters for our method

| Parameter | Value |
|---|---|
| 4 | |
| Nray | 256 |
| Steppri | 10 000 |
| τR | 0.5 |
Table 3 Sampling parameters and time cost for training each method

| Method | Sunf | Simp | Sppg | Sout | Time |
|---|---|---|---|---|---|
| NeRF | 64 | 64 | - | - | 12 |
| NeuS | 64 | 64 | - | 32 | 17 |
| Ours | 32 | 32 | 16 | 32 | 11 |
Fig. 8 Partial scene reconstruction of point clouds and images ((a) Dtu_scan 24[29]; (b) Dtu_scan 37[29]; (c) Dtu_scan 55[29]; (d) Dtu_scan 63[29]; (e) Truck[32]; (f) Ignatius[32])
Table 4 Peak signal-to-noise ratio of each method on each test scene

| Method | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 | Truck | Ignatius |
|---|---|---|---|---|---|---|---|---|---|---|
| NeRF | 24.49 | 21.44 | 24.80 | 24.37 | 30.50 | 30.85 | 34.58 | 30.59 | 19.02 | 17.07 |
| NeuS | 24.50 | 21.79 | 23.92 | 30.62 | 31.82 | 30.45 | 33.43 | 34.38 | 21.14 | 20.03 |
| Ours | 26.15 | 22.06 | 25.23 | 31.90 | 32.36 | 31.35 | 34.13 | 34.43 | 21.40 | 20.25 |
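The PSNR values reported above follow the standard definition of Horé and Ziou [8]: PSNR = 10·log10(MAX²/MSE) in dB. A minimal sketch, assuming images normalized to a peak value of 1.0 (the function name `psnr` is illustrative):

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a rendering and a reference."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```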
Table 5 Structural similarity of each method on each test scene

| Method | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 | Truck | Ignatius |
|---|---|---|---|---|---|---|---|---|---|---|
| NeRF | 0.7797 | 0.7250 | 0.7453 | 0.8981 | 0.8489 | 0.9096 | 0.9267 | 0.8576 | 0.5563 | 0.3999 |
| NeuS | 0.7893 | 0.7445 | 0.7359 | 0.9409 | 0.8964 | 0.8814 | 0.9005 | 0.9072 | 0.6750 | 0.5602 |
| Ours | 0.8123 | 0.7972 | 0.7846 | 0.9553 | 0.9063 | 0.9006 | 0.9108 | 0.9174 | 0.6811 | 0.5681 |
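The SSIM scores above use the structural similarity index of Wang et al. [31], which is normally computed over sliding Gaussian windows and averaged. As a hedged illustration of the formula itself, the sketch below collapses it to a single global window (one application of the SSIM equation over whole-image statistics); it is a simplification, not the windowed metric used in the paper.

```python
import numpy as np

def ssim_global(x, y, max_val=1.0):
    """Single-window SSIM: the Wang et al. formula applied to
    whole-image means, variances, and covariance."""
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the paper
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```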
Fig. 10 Comparison of surface reconstruction effects of various testing scenarios using different methods ((a) Real images; (b) NeRF[2]; (c) NeuS[26]; (d) Ours)
Table 6 Peak signal-to-noise ratio of each pixel sampling strategy

| Sampling strategy | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| Drandom | 25.57 | 21.45 | 23.76 | 30.25 | 31.15 | 30.25 | 32.66 | 32.62 |
| Dheatmap | 26.06 | 21.89 | 24.55 | 31.79 | 32.33 | 31.29 | 33.91 | 33.45 |
| Dfair | 26.15 | 22.06 | 25.23 | 31.90 | 32.36 | 31.35 | 34.02 | 33.53 |
Table 7 Structural similarity of each pixel sampling strategy

| Sampling strategy | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| Drandom | 0.8046 | 0.7852 | 0.7746 | 0.9487 | 0.9017 | 0.8958 | 0.9078 | 0.9154 |
| Dheatmap | 0.8100 | 0.7943 | 0.7836 | 0.9528 | 0.9035 | 0.8995 | 0.9080 | 0.9163 |
| Dfair | 0.8113 | 0.7932 | 0.7844 | 0.9536 | 0.9063 | 0.9006 | 0.9108 | 0.9174 |
Table 8 Peak signal-to-noise ratio for different numbers of prior optimization steps (Steppri)

| Steppri | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| 0 | 25.83 | 21.68 | 24.82 | 30.22 | 31.72 | 30.98 | 33.79 | 34.02 |
| 10 000 | 26.15 | 22.06 | 25.23 | 31.90 | 32.36 | 31.35 | 34.13 | 34.43 |
| 20 000 | 26.26 | 22.27 | 25.47 | 32.08 | 32.43 | 31.49 | 34.28 | 34.53 |
Table 9 Structural similarity for different numbers of prior optimization steps (Steppri)

| Steppri | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.8105 | 0.8008 | 0.7821 | 0.9527 | 0.9012 | 0.8985 | 0.9068 | 0.9096 |
| 10 000 | 0.8123 | 0.7972 | 0.7846 | 0.9553 | 0.9063 | 0.9006 | 0.9108 | 0.9174 |
| 20 000 | 0.8130 | 0.8092 | 0.7868 | 0.9563 | 0.9092 | 0.9024 | 0.9128 | 0.9183 |
| 50 000 | 0.8125 | 0.8101 | 0.7856 | 0.9555 | 0.9101 | 0.9011 | 0.9137 | 0.9182 |
Table 10 Peak signal-to-noise ratio for different numbers of samples guided by the prior point cloud (Sppg)

| Sppg | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| 8 | 25.34 | 21.93 | 24.95 | 29.92 | 31.25 | 30.43 | 32.93 | 33.23 |
| 12 | 25.52 | 22.74 | 25.10 | 30.76 | 31.80 | 30.87 | 33.76 | 33.83 |
| 16 | 26.15 | 22.06 | 25.23 | 31.90 | 32.36 | 31.35 | 34.13 | 34.43 |
Table 11 Structural similarity for different numbers of samples guided by the prior point cloud (Sppg)

| Sppg | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| 8 | 0.8021 | 0.7921 | 0.7821 | 0.9341 | 0.8923 | 0.8836 | 0.8896 | 0.8972 |
| 12 | 0.8084 | 0.7934 | 0.7834 | 0.9474 | 0.9035 | 0.8904 | 0.8987 | 0.9084 |
| 16 | 0.8123 | 0.7972 | 0.7846 | 0.9553 | 0.9063 | 0.9006 | 0.9108 | 0.9174 |
[1] | MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2021, 65(1): 99-106. |
[2] | YANG M D, CHAO C F, HUANG K S, et al. Image-based 3D scene reconstruction and exploration in augmented reality[J]. Automation in Construction, 2013, 33: 48-60. |
[3] | MA Z L, LIU S L. A review of 3D reconstruction techniques in civil engineering and their applications[J]. Advanced Engineering Informatics, 2018, 37: 163-174. |
[4] | LU Y J, WANG S, FAN S S, et al. Image-based 3D reconstruction for multi-scale civil and infrastructure projects: a review from 2012 to 2022 with new perspective from deep learning methods[J]. Advanced Engineering Informatics, 2024, 59: 102268. |
[5] | WANG P, LIU L J, LIU Y, et al. NeuS: learning neural implicit surfaces by volume rendering for multi-view reconstruction[C]// The 35th International Conference on Neural Information Processing System. Red Hook: Curran Associates Inc., 2021: 2081. |
[6] | LI Z S, MÜLLER T, EVANS A, et al. Neuralangelo: high-fidelity neural surface reconstruction[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 8456-8465. |
[7] | SUN J M, CHEN X, WANG Q Q, et al. Neural 3D reconstruction in the wild[C]// ACM SIGGRAPH 2022 Conference Proceedings. New York: ACM, 2022: 26. |
[8] | HORÉ A, ZIOU D. Image quality metrics:PSNR vs. SSIM[C]// 2010 20th International Conference on Pattern Recognition. New York: IEEE Press, 2010: 2366-2369. |
[9] | DELAUNAY B. Sur la sphere vide[J]. Izv Akad Nauk SSSR, Otdelenie Matematicheskii i Estestvennyka Nauk, 1934, 7: 793-800. |
[10] | LEE D T, SCHACHTER B J. Two algorithms for constructing a Delaunay triangulation[J]. International Journal of Computer & Information Sciences, 1980, 9(3): 219-242. |
[11] | BERNARDINI F, MITTLEMAN J, RUSHMEIER H, et al. The ball-pivoting algorithm for surface reconstruction[J]. IEEE Transactions on Visualization and Computer Graphics, 1999, 5(4): 349-359. |
[12] | KAZHDAN M, BOLITHO M, HOPPE H. Poisson surface reconstruction[C]// The 4th Eurographics Symposium on Geometry Processing. Goslar: Eurographics Association, 2006: 61-70. |
[13] | HOPPE H, DEROSE T, DUCHAMP T, et al. Surface reconstruction from unorganized points[C]// The 19th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1992: 71-78. |
[14] | LORENSEN W E, CLINE H E. Marching cubes: a high resolution 3D surface construction algorithm[C]// The 14th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1987: 163-169. |
[15] | SCHÖNBERGER J L, FRAHM J M. Structure-from-motion revisited[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 4104-4113. |
[16] | SCHÖNBERGER J L, ZHENG E L, FRAHM J M, et al. Pixelwise view selection for unstructured multi-view stereo[C]// The 14th European Conference on Computer Vision. Cham: Springer, 2016: 501-518. |
[17] | MÜLLER T, EVANS A, SCHIED C, et al. Instant neural graphics primitives with a multiresolution hash encoding[J]. ACM Transactions on Graphics, 2022, 41(4): 102. |
[18] | LI R L, TANCIK M, KANAZAWA A. NerfAcc: a general NeRF acceleration toolbox[EB/OL]. [2024-03-01]. https://arxiv.org/abs/2210.04847. |
[19] | SUN C, SUN M, CHEN H T. Direct voxel grid optimization: super-fast convergence for radiance fields reconstruction[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5449-5459. |
[20] | FRIDOVICH-KEIL S, YU A, TANCIK M, et al. Plenoxels: radiance fields without neural networks[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5491-5500. |
[21] | GARBIN S J, KOWALSKI M, JOHNSON M, et al. FastNeRF: high-fidelity neural rendering at 200FPS[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 14326-14335. |
[22] | BARRON J T, MILDENHALL B, TANCIK M, et al. Mip-NeRF: a multiscale representation for anti-aliasing neural radiance fields[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5835-5844. |
[23] | PARK J J, FLORENCE P, STRAUB J, et al. DeepSDF: learning continuous signed distance functions for shape representation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 165-174. |
[24] | YARIV L, KASTEN Y, MORAN D, et al. Multiview neural surface reconstruction by disentangling geometry and appearance[C]// The 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 2492-2502. |
[25] | WILLIAMS F, GOJCIC Z, KHAMIS S, et al. Neural fields as learnable kernels for 3D reconstruction[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 18479-18489. |
[26] | WILLIAMS F, TRAGER M, BRUNA J, et al. Neural splines: fitting 3D surfaces with infinitely-wide neural networks[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 9944-9953. |
[27] | HUANG J H, GOJCIC Z, ATZMON M, et al. Neural kernel surface reconstruction[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 4369-4379. |
[28] | WANG Y M, HAN Q, HABERMANN M, et al. NeuS2: fast learning of neural implicit surfaces for multi-view reconstruction[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 3272-3283. |
[29] | JENSEN R, DAHL A, VOGIATZIS G, et al. Large scale multi-view stereopsis evaluation[C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2014: 406-413. |
[30] | KINGMA D P, BA J. Adam: a method for stochastic optimization[EB/OL]. [2024-04-01]. https://arxiv.org/abs/1412.6980. |
[31] | WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612. |
[32] | KNAPITSCH A, PARK J, ZHOU Q Y, et al. Tanks and temples: benchmarking large-scale scene reconstruction[J]. ACM Transactions on Graphics, 2017, 36(4): 78. |