[1] MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2021, 65(1): 99-106.
[2] HARTLEY R, ZISSERMAN A. Multiple view geometry in computer vision[M]. 2nd ed. Cambridge: Cambridge University Press, 2003: 310-324.
[3] SHUM H Y, CHAN S C, KANG S B. Image-based rendering[M]. New York: Springer, 2007: 1-5.
[4] KOPANAS G, DRETTAKIS G. Improving NeRF quality by progressive camera placement for free-viewpoint navigation[EB/OL]. [2024-04-12]. https://arxiv.org/abs/2309.00014.
[5] JIANG W, LEI B S, DANIILIDIS K. FisherRF: active view selection and uncertainty quantification for radiance fields using Fisher information[EB/OL]. [2024-04-12]. https://arxiv.org/abs/2311.17874.
[6] RAN Y L, ZENG J, HE S B, et al. NeurAR: neural uncertainty for autonomous 3D reconstruction with implicit neural representations[J]. IEEE Robotics and Automation Letters, 2023, 8(2): 1125-1132.
[7] PAN X R, LAI Z H, SONG S J, et al. ActiveNeRF: learning where to see with uncertainty estimation[C]// The 17th European Conference on Computer Vision. Cham: Springer, 2022: 230-246.
[8] SHEN J X, REN R J, RUIZ A, et al. Estimating 3D uncertainty field: quantifying uncertainty for neural radiance fields[C]// 2024 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2024: 2375-2381.
[9] MÜLLER T, EVANS A, SCHIED C, et al. Instant neural graphics primitives with a multiresolution hash encoding[J]. ACM Transactions on Graphics (TOG), 2022, 41(4): 102.
[10] FRIDOVICH-KEIL S, YU A, TANCIK M, et al. Plenoxels: radiance fields without neural networks[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5491-5500.
[11] LEE K, GUPTA S, KIM S, et al. SO-NeRF: active view planning for NeRF using surrogate objectives[EB/OL]. [2023-12-06]. https://arxiv.org/abs/2312.03266.
[12] KENDALL A, GAL Y. What uncertainties do we need in Bayesian deep learning for computer vision?[C]// The 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 5580-5590.
[13] ZHAN H Y, ZHENG J Y, XU Y, et al. ActiveRMAP: radiance field for active mapping and planning[EB/OL]. [2024-04-12]. https://arxiv.org/abs/2211.12656.
[14] SHEN J X, RUIZ A, AGUDO A, et al. Stochastic neural radiance fields: quantifying uncertainty in implicit 3D representations[C]// 2021 International Conference on 3D Vision. New York: IEEE Press, 2021: 972-981.
[15] SHEN J X, AGUDO A, MORENO-NOGUER F, et al. Conditional-flow NeRF: accurate 3D modelling with reliable uncertainty quantification[C]// The 17th European Conference on Computer Vision. Cham: Springer, 2022: 540-557.
[16] SÜNDERHAUF N, ABOU-CHAKRA J, MILLER D. Density-aware NeRF ensembles: quantifying predictive uncertainty in neural radiance fields[C]// 2023 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2023: 9370-9376.
[17] GOLI L, READING C, SELLÁN S, et al. Bayes' rays: uncertainty quantification for neural radiance fields[C]// 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2024: 20061-20070.
[18] SANDSTRÖM E, TA K, VAN GOOL L, et al. UncLe-SLAM: uncertainty learning for dense neural SLAM[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 4539-4550.
[19] KERBL B, KOPANAS G, LEIMKUEHLER T, et al. 3D Gaussian splatting for real-time radiance field rendering[J]. ACM Transactions on Graphics (TOG), 2023, 42(4): 139.
[20] WU J K, LIU L M, TAN Y P, et al. ActRay: online active ray sampling for radiance fields[C]// SIGGRAPH Asia 2023 Conference Papers. New York: ACM, 2023: 97.
[21] WANG Z, SIMONCELLI E P, BOVIK A C. Multiscale structural similarity for image quality assessment[C]// The 37th Asilomar Conference on Signals, Systems & Computers. New York: IEEE Press, 2003: 1398-1402.
[22] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 586-595.
[23] DAI A, CHANG A X, SAVVA M, et al. ScanNet: richly-annotated 3D reconstructions of indoor scenes[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 2432-2443.
[24] ALIEV K A, SEVASTOPOLSKY A, KOLOS M, et al. Neural point-based graphics[C]// The 16th European Conference on Computer Vision. Cham: Springer, 2020: 696-712.