Journal of Graphics ›› 2025, Vol. 46 ›› Issue (1): 179-187.DOI: 10.11996/JG.j.2095-302X.2025010179
• Computer Graphics and Virtual Reality •
Received: 2024-07-10
Accepted: 2024-10-02
Online: 2025-02-28
Published: 2025-02-14
Contact: XU Weiwei
About the first author: XIE Wenxiang (2001-), master student. His main research interests cover neural rendering and reconstruction. E-mail: zju_xwx@zju.edu.cn
XIE Wenxiang, XU Weiwei. Active view selection for radiance fields using surface object points[J]. Journal of Graphics, 2025, 46(1): 179-187.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2025010179
Fig. 1 Algorithm Framework ((a) Train the radiance field model; (b) Acquire the object points from the training rays; (c) Calculate the surface overlap degree for the candidate views; (d) Select k views with the minimum surface overlap)
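Steps (b)-(d) of Fig. 1 amount to a greedy covering loop over the surface object points. Below is a minimal, hypothetical Python sketch of that loop, not the paper's implementation: `visible_mask` stands in for the actual point-to-view visibility test (here a crude viewing-cone check with no occlusion handling), and the surface overlap degree is taken as the fraction of a candidate view's visible surface points already covered by previously observed views.

```python
import numpy as np

def visible_mask(points, cam_pos, cam_dir, fov_deg=60.0):
    # Crude visibility proxy: a surface point counts as seen if it lies
    # inside the camera's viewing cone. `cam_dir` is assumed to be a unit
    # vector; a real implementation would project points and test occlusion.
    v = points - cam_pos
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v @ cam_dir > np.cos(np.radians(fov_deg / 2.0))

def select_views(points, cand_pos, cand_dir, seen, k):
    # Greedy loop for Fig. 1 (c)-(d): repeatedly pick the candidate view
    # whose visible surface points overlap least with the covered set.
    masks = [visible_mask(points, p, d) for p, d in zip(cand_pos, cand_dir)]
    seen = seen.copy()
    chosen = []
    for _ in range(k):
        best, best_overlap = None, np.inf
        for i, m in enumerate(masks):
            if i in chosen or not m.any():
                continue
            overlap = (m & seen).sum() / m.sum()  # surface overlap degree
            if overlap < best_overlap:
                best, best_overlap = i, overlap
        if best is None:
            break
        chosen.append(best)
        seen |= masks[best]  # the new view now covers its visible points
    return chosen

# Toy usage (all values hypothetical): 1000 surface points, 20 candidate
# poses looking at the origin, nothing covered yet; pick the next batch of 4.
rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 3))
cand_pos = rng.normal(size=(20, 3)) * 5
cand_dir = -cand_pos / np.linalg.norm(cand_pos, axis=1, keepdims=True)
seen = np.zeros(len(points), dtype=bool)
print(select_views(points, cand_pos, cand_dir, seen, k=4))
```

In practice `seen` would be initialized from the visibility of the existing training views, so each round of selection extends coverage of the reconstructed surface rather than starting from scratch.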
| Method | PSNR↑ | SSIM↑ | LPIPS↓ |
|---|---|---|---|
| ActiveNeRF + 1 view | 20.79 | 0.836 | 0.214 |
| ActiveNeRF + 4 views | 19.35 | 0.836 | 0.215 |
| Ours + 1 view | 24.67 | 0.875 | 0.167 |
| Ours + 4 views | 25.23 | 0.875 | 0.160 |
Table 1 Comparison of active view selection metrics for different methods in object scenes
| Blender scene | PSNR↑ (ActiveNeRF / Ours) | SSIM↑ (ActiveNeRF / Ours) | LPIPS↓ (ActiveNeRF / Ours) | Active Time/s↓ (ActiveNeRF / Ours) |
|---|---|---|---|---|
| Chair | 22.31 / 28.31 | 0.885 / 0.931 | 0.213 / 0.087 | 271.4 / 7.2 |
| Drums | 17.19 / 19.36 | 0.815 / 0.811 | 0.255 / 0.243 | 277.7 / 7.6 |
| Hotdog | 19.54 / 30.99 | 0.893 / 0.948 | 0.181 / 0.144 | 278.1 / 7.4 |
| Lego | 19.40 / 26.91 | 0.837 / 0.910 | 0.196 / 0.104 | 278.9 / 9.0 |
| Materials | 15.26 / 22.20 | 0.828 / 0.872 | 0.169 / 0.123 | 280.1 / 8.6 |
| Ship | 22.38 / 23.61 | 0.760 / 0.776 | 0.274 / 0.256 | 278.0 / 13.3 |
| Average | 19.35 / 25.23 | 0.836 / 0.875 | 0.215 / 0.160 | 277.4 / 8.9 |
Table 2 Comparison of the next batch of best views selection metrics in object scenes
| Blender scene | PSNR↑ (ActiveNeRF / Ours) | SSIM↑ (ActiveNeRF / Ours) | LPIPS↓ (ActiveNeRF / Ours) | Active Time/s↓ (ActiveNeRF / Ours) |
|---|---|---|---|---|
| Chair | 19.71 / 26.61 | 0.852 / 0.918 | 0.225 / 0.150 | 265.5 / 7.6 |
| Drums | 16.74 / 19.88 | 0.796 / 0.843 | 0.275 / 0.204 | 266.5 / 8.8 |
| Hotdog | 28.83 / 29.06 | 0.935 / 0.937 | 0.130 / 0.144 | 274.9 / 8.5 |
| Lego | 19.67 / 26.73 | 0.837 / 0.907 | 0.204 / 0.109 | 274.4 / 10.4 |
| Materials | 18.43 / 21.78 | 0.851 / 0.864 | 0.165 / 0.137 | 274.4 / 8.2 |
| Ship | 21.33 / 23.98 | 0.743 / 0.780 | 0.287 / 0.255 | 275.2 / 15.4 |
| Average | 20.79 / 24.67 | 0.836 / 0.875 | 0.214 / 0.167 | 271.8 / 9.8 |
Table 3 Comparison of the next best view selection metrics in object scenes
| ScanNet scene | PSNR↑ (ActiveNeRF / Ours) | SSIM↑ (ActiveNeRF / Ours) | LPIPS↓ (ActiveNeRF / Ours) | Active Time/s↓ (ActiveNeRF / Ours) |
|---|---|---|---|---|
| scene0100_00 | 20.91 / 24.03 | 0.719 / 0.747 | 0.520 / 0.499 | 406.4 / 24.9 |
| scene0101_04 | 14.97 / 18.78 | 0.517 / 0.549 | 0.630 / 0.565 | 405.6 / 17.7 |
| scene0105_01 | 18.79 / 22.51 | 0.694 / 0.739 | 0.490 / 0.460 | 462.5 / 36.2 |
| scene0120_00 | 14.43 / 18.35 | 0.617 / 0.672 | 0.636 / 0.568 | 374.8 / 26.1 |
| Average | 17.28 / 20.92 | 0.637 / 0.677 | 0.569 / 0.525 | 412.3 / 26.2 |
Table 4 Comparison of the next batch of best views selection metrics in indoor scenes
[1] MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2021, 65(1): 99-106.
[2] HARTLEY R, ZISSERMAN A. Multiple view geometry in computer vision[M]. 2nd ed. Cambridge: Cambridge University Press, 2003: 310-324.
[3] SHUM H Y, CHAN S C, KANG S B. Image-based rendering[M]. New York: Springer, 2007: 1-5.
[4] KOPANAS G, DRETTAKIS G. Improving NeRF quality by progressive camera placement for free-viewpoint navigation[EB/OL]. [2024-04-12]. https://arxiv.org/abs/2309.00014.
[5] JIANG W, LEI B S, DANIILIDIS K. FisherRF: active view selection and uncertainty quantification for radiance fields using fisher information[EB/OL]. [2024-04-12]. https://arxiv.org/abs/2311.17874.
[6] RAN Y L, ZENG J, HE S B, et al. NeurAR: neural uncertainty for autonomous 3D reconstruction with implicit neural representations[J]. IEEE Robotics and Automation Letters, 2023, 8(2): 1125-1132.
[7] PAN X R, LAI Z H, SONG S J, et al. ActiveNeRF: learning where to see with uncertainty estimation[C]// The 17th European Conference on Computer Vision. Cham: Springer, 2022: 230-246.
[8] SHEN J X, REN R J, RUIZ A, et al. Estimating 3D uncertainty field: quantifying uncertainty for neural radiance fields[C]// 2024 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2024: 2375-2381.
[9] MÜLLER T, EVANS A, SCHIED C, et al. Instant neural graphics primitives with a multiresolution hash encoding[J]. ACM Transactions on Graphics (TOG), 2022, 41(4): 102.
[10] FRIDOVICH-KEIL S, YU A, TANCIK M, et al. Plenoxels: radiance fields without neural networks[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5491-5500.
[11] LEE K, GUPTA S, KIM S, et al. SO-NeRF: active view planning for NeRF using surrogate objectives[EB/OL]. [2023-12-06]. https://arxiv.org/abs/2312.03266.
[12] KENDALL A, GAL Y. What uncertainties do we need in Bayesian deep learning for computer vision?[C]// The 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 5580-5590.
[13] ZHAN H Y, ZHENG J Y, XU Y, et al. ActiveRMAP: radiance field for active mapping and planning[EB/OL]. [2024-04-12]. https://arxiv.org/abs/2211.12656.
[14] SHEN J X, RUIZ A, AGUDO A, et al. Stochastic neural radiance fields: quantifying uncertainty in implicit 3D representations[C]// 2021 International Conference on 3D Vision. New York: IEEE Press, 2021: 972-981.
[15] SHEN J X, AGUDO A, MORENO-NOGUER F, et al. Conditional-flow NeRF: accurate 3D modelling with reliable uncertainty quantification[C]// The 17th European Conference on Computer Vision. Cham: Springer, 2022: 540-557.
[16] SÜNDERHAUF N, ABOU-CHAKRA J, MILLER D. Density-aware NeRF ensembles: quantifying predictive uncertainty in neural radiance fields[C]// 2023 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2023: 9370-9376.
[17] GOLI L, READING C, SELLÁN S, et al. Bayes' rays: uncertainty quantification for neural radiance fields[C]// 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2024: 20061-20070.
[18] SANDSTRÖM E, TA K, VAN GOOL L, et al. UncLe-SLAM: uncertainty learning for dense neural SLAM[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 4539-4550.
[19] KERBL B, KOPANAS G, LEIMKUEHLER T, et al. 3D Gaussian splatting for real-time radiance field rendering[J]. ACM Transactions on Graphics (TOG), 2023, 42(4): 139.
[20] WU J K, LIU L M, TAN Y P, et al. ActRay: online active ray sampling for radiance fields[C]// SIGGRAPH Asia 2023 Conference Papers. New York: ACM, 2023: 97.
[21] WANG Z, SIMONCELLI E P, BOVIK A C. Multiscale structural similarity for image quality assessment[C]// The 37th Asilomar Conference on Signals, Systems & Computers. New York: IEEE Press, 2003: 1398-1402.
[22] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 586-595.
[23] DAI A, CHANG A X, SAVVA M, et al. ScanNet: richly-annotated 3D reconstructions of indoor scenes[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 2432-2443.
[24] ALIEV K A, SEVASTOPOLSKY A, KOLOS M, et al. Neural point-based graphics[C]// The 16th European Conference on Computer Vision. Cham: Springer, 2020: 696-712.