Journal of Graphics ›› 2025, Vol. 46 ›› Issue (4): 807-817. DOI: 10.11996/JG.j.2095-302X.2025040807
GUO Mingce1,2, HUANG Bei1, CHENG Lechao3, WANG Zhangye1,2
Received: 2024-08-01
Revised: 2025-03-10
Published: 2025-08-30
Online: 2025-08-11
Corresponding author: WANG Zhangye (1965-), male, associate professor, Ph.D. His main research interests cover computer graphics, virtual reality, and infrared simulation. E-mail: zywang@cad.zju.edu.cn
First author: GUO Mingce (2001-), male, master's student. His main research interests cover computer vision and computer graphics. E-mail: guomingce@zju.edu.cn
Supported by:
Abstract: To address the high training time cost of current neural implicit surface reconstruction, a sampling method jointly guided by point cloud priors is proposed, reducing the time cost of model training while preserving surface reconstruction quality. The training of the neural implicit surface reconstruction network is accelerated in three ways. First, random training-pixel sampling is alternated with adaptive training-pixel sampling based on the projection density of the point cloud, which speeds up the optimization of the surface regions to be reconstructed. Second, using the adjacency between the point cloud prior and the sampled pixels, samples along each training ray are concentrated at positions close to the surface, reducing the number of importance samples and their time cost. In addition, the signed distance field network is optimized with a sparse point cloud prior loss, and the point cloud cache is updated at a fixed iteration interval. Comparative experiments on 10 test scenes from the DTU and Tanks-and-Temples datasets show that the method effectively reduces the training time cost of neural implicit surface reconstruction while maintaining reconstruction quality: compared with the NeuS method, training time is reduced by 35%; within the same training time, the peak signal-to-noise ratio (PSNR) of novel-view images predicted by our method is on average 3.1% higher than NeuS, and the structural similarity (SSIM) is on average 3.4% higher.
CLC number:
GUO Mingce, HUANG Bei, CHENG Lechao, WANG Zhangye. Acceleration method for neural implicit surface reconstruction with joint point cloud priors[J]. Journal of Graphics, 2025, 46(4): 807-817.
Fig. 2 Adaptive pixel sampling heatmaps ((a) Coarsening coefficient of 1 pixel unit; (b) Coarsening coefficient of 2 pixel units; (c) Coarsening coefficient of 4 pixel units; (d) Coarsening coefficient of 8 pixel units)
Fig. 3 Dheatmap adaptive pixel sampling ((a) Coarsening coefficient of 1 pixel unit; (b) Coarsening coefficient of 2 pixel units; (c) Coarsening coefficient of 4 pixel units; (d) Coarsening coefficient of 8 pixel units)
Fig. 4 Dfair adaptive pixel sampling ((a) Coarsening coefficient of 1 pixel unit; (b) Coarsening coefficient of 2 pixel units; (c) Coarsening coefficient of 4 pixel units; (d) Coarsening coefficient of 8 pixel units)
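As an illustration of the adaptive pixel sampling visualized in Figs. 2-4, the sketch below builds a projection-density grid from the prior point cloud at a given coarsening coefficient and draws training pixels from it. This is a minimal NumPy sketch under our own assumptions: the helper names and the uniform floor `fair_eps` (our reading of the Dfair strategy compared in Tables 6-7) are illustrative, not the paper's exact procedure.

```python
import numpy as np

def density_heatmap(uv, hw, coarsen=4):
    # Accumulate projected prior points into a coarsened pixel grid.
    # uv: (N, 2) integer pixel coordinates of the point cloud projected
    # into the training view; hw: (H, W) image size, assumed divisible
    # by the coarsening coefficient (the 1/2/4/8 pixel units of Figs. 2-4).
    H, W = hw
    grid = np.zeros((H // coarsen, W // coarsen))
    np.add.at(grid, (uv[:, 1] // coarsen, uv[:, 0] // coarsen), 1.0)
    return grid

def sample_pixels(grid, n_ray, coarsen=4, fair_eps=0.1, rng=None):
    # Draw pixels with probability proportional to projection density,
    # plus a uniform floor so empty cells still get occasional samples.
    rng = rng or np.random.default_rng()
    p = grid.ravel() + fair_eps * max(grid.mean(), 1e-8)
    p /= p.sum()
    cells = rng.choice(p.size, size=n_ray, p=p)
    cy, cx = np.divmod(cells, grid.shape[1])
    # pick a random pixel inside each selected coarse cell
    y = cy * coarsen + rng.integers(0, coarsen, size=n_ray)
    x = cx * coarsen + rng.integers(0, coarsen, size=n_ray)
    return x, y
```

Per the abstract, training iterations would alternate between this density-weighted draw and plain random pixel sampling.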
Table 1 Total number of sampling points participating in training for each method

| Method | Total sampling points |
|---|---|
| NeRF[1] | |
| NeuS[5] | |
| Ours | |
Table 2 Training parameters for our method

| Parameter | Value |
|---|---|
| | 4 |
| Nray | 256 |
| Steppri | 10 000 |
| τR | 0.5 |
Table 3 Sampling parameters and time cost for training each method

| Method | Sunf | Simp | Sppg | Sout | Time |
|---|---|---|---|---|---|
| NeRF[1] | 64 | 64 | - | - | 12 |
| NeuS[5] | 64 | 64 | - | 32 | 17 |
| Ours | 32 | 32 | 16 | 32 | 11 |
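To make the roles of Sunf and Sppg in Table 3 concrete, the following PyTorch sketch concentrates a few extra samples in a narrow band around the depth at which each training ray passes closest to the prior point cloud, on top of a coarse uniform sweep. The band half-width and the way t_surf is obtained (e.g., a nearest-neighbor query against the cached cloud) are assumptions; this is not the paper's exact sampler.

```python
import torch

def prior_guided_ray_samples(t_near, t_far, t_surf,
                             s_unf=32, s_ppg=16, band=0.05):
    # t_near, t_far: (R,) sampling interval of each ray; t_surf: (R,)
    # depth where the ray passes closest to the prior point cloud
    # (computed elsewhere, e.g. by a k-d tree query -- hypothetical);
    # band: half-width of the concentrated interval (a guessed default).
    u = torch.linspace(0.0, 1.0, s_unf, device=t_surf.device)
    t_unf = t_near[:, None] + (t_far - t_near)[:, None] * u[None, :]
    # concentrated samples, uniform inside [t_surf - band, t_surf + band]
    v = torch.rand(t_surf.shape[0], s_ppg, device=t_surf.device)
    t_ppg = t_surf[:, None] - band + 2.0 * band * v
    t_ppg = torch.minimum(torch.maximum(t_ppg, t_near[:, None]), t_far[:, None])
    # merge and sort so volume rendering sees increasing depths per ray
    return torch.sort(torch.cat([t_unf, t_ppg], dim=-1), dim=-1).values
```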
Fig. 8 Point clouds and images reconstructed for part of the scenes ((a) Dtu_scan 24[29]; (b) Dtu_scan 37[29]; (c) Dtu_scan 55[29]; (d) Dtu_scan 63[29]; (e) Truck[32]; (f) Ignatius[32])
Table 4 Peak signal-to-noise ratio (PSNR) of each method on each test scene

| Method | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 | Truck | Ignatius |
|---|---|---|---|---|---|---|---|---|---|---|
| NeRF[1] | 24.49 | 21.44 | 24.80 | 24.37 | 30.50 | 30.85 | 34.58 | 30.59 | 19.02 | 17.07 |
| NeuS[5] | 24.50 | 21.79 | 23.92 | 30.62 | 31.82 | 30.45 | 33.43 | 34.38 | 21.14 | 20.03 |
| Ours | 26.15 | 22.06 | 25.23 | 31.90 | 32.36 | 31.35 | 34.13 | 34.43 | 21.40 | 20.25 |
Table 5 Structural similarity (SSIM) of each method on each test scene

| Method | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 | Truck | Ignatius |
|---|---|---|---|---|---|---|---|---|---|---|
| NeRF[1] | 0.7797 | 0.7250 | 0.7453 | 0.8981 | 0.8489 | 0.9096 | 0.9267 | 0.8576 | 0.5563 | 0.3999 |
| NeuS[5] | 0.7893 | 0.7445 | 0.7359 | 0.9409 | 0.8964 | 0.8814 | 0.9005 | 0.9072 | 0.6750 | 0.5602 |
| Ours | 0.8123 | 0.7972 | 0.7846 | 0.9553 | 0.9063 | 0.9006 | 0.9108 | 0.9174 | 0.6811 | 0.5681 |
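The PSNR and SSIM values in Tables 4-5 follow the standard definitions[8,31]; a minimal evaluation sketch with scikit-image (assuming version 0.19+ for the channel_axis argument) is:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_view(pred, gt):
    # pred, gt: (H, W, 3) float images in [0, 1]; returns PSNR (dB) and SSIM
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```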
Fig. 10 Comparison of surface reconstruction results on the test scenes with different methods ((a) Real images; (b) NeRF[1]; (c) NeuS[5]; (d) Ours)
Table 6 Peak signal-to-noise ratio (PSNR) of each pixel sampling strategy on each scene

| Sampling strategy | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| Drandom | 25.57 | 21.45 | 23.76 | 30.25 | 31.15 | 30.25 | 32.66 | 32.62 |
| Dheatmap | 26.06 | 21.89 | 24.55 | 31.79 | 32.33 | 31.29 | 33.91 | 33.45 |
| Dfair | 26.15 | 22.06 | 25.23 | 31.90 | 32.36 | 31.35 | 34.02 | 33.53 |
Table 7 Structural similarity (SSIM) of each pixel sampling strategy on each scene

| Sampling strategy | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| Drandom | 0.8046 | 0.7852 | 0.7746 | 0.9487 | 0.9017 | 0.8958 | 0.9078 | 0.9154 |
| Dheatmap | 0.8100 | 0.7943 | 0.7836 | 0.9528 | 0.9035 | 0.8995 | 0.9080 | 0.9163 |
| Dfair | 0.8113 | 0.7932 | 0.7844 | 0.9536 | 0.9063 | 0.9006 | 0.9108 | 0.9174 |
Table 8 Peak signal-to-noise ratio (PSNR) for different numbers of prior optimization steps (Steppri) on each scene

| Steppri | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| 0 | 25.83 | 21.68 | 24.82 | 30.22 | 31.72 | 30.98 | 33.79 | 34.02 |
| 10 000 | 26.15 | 22.06 | 25.23 | 31.90 | 32.36 | 31.35 | 34.13 | 34.43 |
| 20 000 | 26.26 | 22.27 | 25.47 | 32.08 | 32.43 | 31.49 | 34.28 | 34.53 |
Table 9 Structural similarity (SSIM) for different numbers of prior optimization steps (Steppri) on each scene

| Steppri | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.8105 | 0.8008 | 0.7821 | 0.9527 | 0.9012 | 0.8985 | 0.9068 | 0.9096 |
| 10 000 | 0.8123 | 0.7972 | 0.7846 | 0.9553 | 0.9063 | 0.9006 | 0.9108 | 0.9174 |
| 20 000 | 0.8130 | 0.8092 | 0.7868 | 0.9563 | 0.9092 | 0.9024 | 0.9128 | 0.9183 |
| 50 000 | 0.8125 | 0.8101 | 0.7856 | 0.9555 | 0.9101 | 0.9011 | 0.9137 | 0.9182 |
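Tables 8-9 vary Steppri, the interval at which the point cloud cache is refreshed while a sparse prior loss keeps the signed distance near zero at the cached points. The sketch below shows one plausible form of that loss and of the periodic refresh; the loss form, the weight lam, and update_point_cache are our assumptions rather than the paper's definitions.

```python
import torch

def sparse_prior_loss(sdf_net, cached_pts, n_sub=1024):
    # Treat prior points as approximate surface samples: push the SDF
    # toward zero on a random subset of the cached point cloud.
    idx = torch.randint(0, cached_pts.shape[0], (n_sub,),
                        device=cached_pts.device)
    return sdf_net(cached_pts[idx]).abs().mean()

# Inside the training loop (Steppri = 10 000 in Table 2):
#   if step > 0 and step % step_pri == 0:
#       cached_pts = update_point_cache(sdf_net)  # hypothetical refresh
#   loss = color_loss + eikonal_loss + lam * sparse_prior_loss(sdf_net, cached_pts)
```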
Table 10 Peak signal-to-noise ratio (PSNR) for different numbers of prior-point-cloud-guided samples (Sppg) on each scene

| Sppg | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| 8 | 25.34 | 21.93 | 24.95 | 29.92 | 31.25 | 30.43 | 32.93 | 33.23 |
| 12 | 25.52 | 22.74 | 25.10 | 30.76 | 31.80 | 30.87 | 33.76 | 33.83 |
| 16 | 26.15 | 22.06 | 25.23 | 31.90 | 32.36 | 31.35 | 34.13 | 34.43 |
Table 11 Structural similarity (SSIM) for different numbers of prior-point-cloud-guided samples (Sppg) on each scene

| Sppg | scan_24 | scan_37 | scan_55 | scan_63 | scan_106 | scan_114 | scan_118 | scan_122 |
|---|---|---|---|---|---|---|---|---|
| 8 | 0.8021 | 0.7921 | 0.7821 | 0.9341 | 0.8923 | 0.8836 | 0.8896 | 0.8972 |
| 12 | 0.8084 | 0.7934 | 0.7834 | 0.9474 | 0.9035 | 0.8904 | 0.8987 | 0.9084 |
| 16 | 0.8123 | 0.7972 | 0.7846 | 0.9553 | 0.9063 | 0.9006 | 0.9108 | 0.9174 |
[1] | MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2021, 65(1): 99-106. |
[2] | YANG M D, CHAO C F, HUANG K S, et al. Image-based 3D scene reconstruction and exploration in augmented reality[J]. Automation in Construction, 2013, 33: 48-60. |
[3] | MA Z L, LIU S L. A review of 3D reconstruction techniques in civil engineering and their applications[J]. Advanced Engineering Informatics, 2018, 37: 163-174. |
[4] | LU Y J, WANG S, FAN S S, et al. Image-based 3D reconstruction for multi-scale civil and infrastructure projects: a review from 2012 to 2022 with new perspective from deep learning methods[J]. Advanced Engineering Informatics, 2024, 59: 102268. |
[5] | WANG P, LIU L J, LIU Y, et al. NeuS: learning neural implicit surfaces by volume rendering for multi-view reconstruction[C]// The 35th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2021: 2081. |
[6] | LI Z S, MÜLLER T, EVANS A, et al. Neuralangelo: high-fidelity neural surface reconstruction[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 8456-8465. |
[7] | SUN J M, CHEN X, WANG Q Q, et al. Neural 3D reconstruction in the wild[C]// ACM SIGGRAPH 2022 Conference Proceedings. New York: ACM, 2022: 26. |
[8] | HORÉ A, ZIOU D. Image quality metrics: PSNR vs. SSIM[C]// 2010 20th International Conference on Pattern Recognition. New York: IEEE Press, 2010: 2366-2369. |
[9] | DELAUNAY B. Sur la sphere vide[J]. Izv Akad Nauk SSSR, Otdelenie Matematicheskii i Estestvennyka Nauk, 1934, 7: 793-800. |
[10] | LEE D T, SCHACHTER B J. Two algorithms for constructing a Delaunay triangulation[J]. International Journal of Computer & Information Sciences, 1980, 9(3): 219-242. |
[11] | BERNARDINI F, MITTLEMAN J, RUSHMEIER H, et al. The ball-pivoting algorithm for surface reconstruction[J]. IEEE Transactions on Visualization and Computer Graphics, 1999, 5(4): 349-359. |
[12] | KAZHDAN M, BOLITHO M, HOPPE H. Poisson surface reconstruction[C]// The 4th Eurographics Symposium on Geometry Processing. Goslar: Eurographics Association, 2006: 61-70. |
[13] | HOPPE H, DEROSE T, DUCHAMP T, et al. Surface reconstruction from unorganized points[C]// The 19th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1992: 71-78. |
[14] | LORENSEN W E, CLINE H E. Marching cubes: a high resolution 3D surface construction algorithm[C]// The 14th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1987: 163-169. |
[15] | SCHÖNBERGER J L, FRAHM J M. Structure-from-motion revisited[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 4104-4113. |
[16] | SCHÖNBERGER J L, ZHENG E L, FRAHM J M, et al. Pixelwise view selection for unstructured multi-view stereo[C]// The 14th European Conference on Computer Vision. Cham: Springer, 2016: 501-518. |
[17] | MÜLLER T, EVANS A, SCHIED C, et al. Instant neural graphics primitives with a multiresolution hash encoding[J]. ACM Transactions on Graphics, 2022, 41(4): 102. |
[18] | LI R L, TANCIK M, KANAZAWA A. NerfAcc: a general NeRF acceleration toolbox[EB/OL]. [2024-03-01]. https://arxiv.org/abs/2210.04847. |
[19] | SUN C, SUN M, CHEN H T. Direct voxel grid optimization: super-fast convergence for radiance fields reconstruction[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5449-5459. |
[20] | FRIDOVICH-KEIL S, YU A, TANCIK M, et al. Plenoxels: radiance fields without neural networks[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5491-5500. |
[21] | GARBIN S J, KOWALSKI M, JOHNSON M, et al. FastNeRF: high-fidelity neural rendering at 200FPS[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 14326-14335. |
[22] | BARRON J T, MILDENHALL B, TANCIK M, et al. Mip-NeRF: a multiscale representation for anti-aliasing neural radiance fields[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5835-5844. |
[23] | PARK J J, FLORENCE P, STRAUB J, et al. DeepSDF: learning continuous signed distance functions for shape representation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 165-174. |
[24] | YARIV L, KASTEN Y, MORAN D, et al. Multiview neural surface reconstruction by disentangling geometry and appearance[C]// The 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 2492-2502. |
[25] | WILLIAMS F, GOJCIC Z, KHAMIS S, et al. Neural fields as learnable kernels for 3D reconstruction[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 18479-18489. |
[26] | WILLIAMS F, TRAGER M, BRUNA J, et al. Neural splines: fitting 3D surfaces with infinitely-wide neural networks[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 9944-9953. |
[27] | HUANG J H, GOJCIC Z, ATZMON M, et al. Neural kernel surface reconstruction[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 4369-4379. |
[28] | WANG Y M, HAN Q, HABERMANN M, et al. NeuS2: fast learning of neural implicit surfaces for multi-view reconstruction[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 3272-3283. |
[29] | JENSEN R, DAHL A, VOGIATZIS G, et al. Large scale multi-view stereopsis evaluation[C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2014: 406-413. |
[30] | KINGMA D P, BA J. Adam: a method for stochastic optimization[EB/OL]. [2024-04-01]. https://arxiv.org/abs/1412.6980. |
[31] | WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612. |
[32] | KNAPITSCH A, PARK J, ZHOU Q Y, et al. Tanks and temples: benchmarking large-scale scene reconstruction[J]. ACM Transactions on Graphics, 2017, 36(4): 78. |