[1] MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2021, 65(1): 99-106.
[2] SCHÖNBERGER J L, FRAHM J M. Structure-from-motion revisited[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 4104-4113.
[3] GOESELE M, CURLESS B, SEITZ S M. Multi-view stereo revisited[C]// 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2006: 2402-2409.
[4] CHANG Y, GAI M. A review on neural radiance fields based view synthesis[J]. Journal of Graphics, 2021, 42(3): 376-384 (in Chinese).
[5] DONG X T, MA X, PAN C W, et al. A review of neural radiance fields for outdoor large scenes[J]. Journal of Graphics, 2024, 45(4): 631-649 (in Chinese).
[6] MÜLLER T, EVANS A, SCHIED C, et al. Instant neural graphics primitives with a multiresolution hash encoding[J]. ACM Transactions on Graphics, 2022, 41(4): 1-15.
[7] BAJCSY R, ALOIMONOS Y, TSOTSOS J K. Revisiting active perception[J]. Autonomous Robots, 2018, 42(2): 177-196.
[8] LIU M, SHI Y F, ZHENG L T, et al. Recurrent 3D attentional networks for end-to-end active object recognition[J]. Computational Visual Media, 2019, 5(1): 91-104.
[9] ISLER S, SABZEVARI R, DELMERICO J, et al. An information gain formulation for active volumetric 3D reconstruction[C]// 2016 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2016: 3477-3484.
[10] BIRCHER A, KAMEL M, ALEXIS K, et al. Receding horizon “next-best-view” planner for 3D exploration[C]// 2016 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2016: 1462-1468.
[11] ZAENKER T, SMITT C, MCCOOL C, et al. Viewpoint planning for fruit size and position estimation[C]// 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2021: 3271-3277.
[12] ZENG R, ZHAO W, LIU Y J. PC-NBV: a point cloud based deep network for efficient next best view planning[C]// 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2020: 7050-7057.
[13] SONG S, JO S. Surface-based exploration for autonomous 3D modeling[C]// 2018 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2018: 4319-4326.
[14] WU Q Y, MANOCHA D, WANG J, et al. NeoNav: improving the generalization of visual navigation via generating next expected observations[C]// The Thirty-Fourth AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2020: 10001-10008.
[15] PAN X R, LAI Z H, SONG S J, et al. ActiveNeRF: learning where to see with uncertainty estimation[C]// The 17th European Conference on Computer Vision. Cham: Springer, 2022: 230-246.
[16] JIN L R, CHEN X Y L, RÜCKIN J, et al. NeU-NBV: next best view planning using uncertainty estimation in image-based neural rendering[C]// 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2023: 11305-11312.
[17] KAJIYA J T, VON HERZEN B P. Ray tracing volume densities[J]. ACM SIGGRAPH Computer Graphics, 1984, 18(3): 165-174.
[18] SHEN J X, RUIZ A, AGUDO A, et al. Stochastic neural radiance fields: quantifying uncertainty in implicit 3D representations[C]// 2021 International Conference on 3D Vision. New York: IEEE Press, 2021: 972-981.
[19] MARTIN-BRUALLA R, RADWAN N, SAJJADI M S M, et al. NeRF in the wild: neural radiance fields for unconstrained photo collections[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 7210-7219.
[20] RAN Y L, ZENG J, HE S B, et al. NeurAR: neural uncertainty for autonomous 3D reconstruction with implicit neural representations[J]. IEEE Robotics and Automation Letters, 2023, 8(2): 1125-1132.
[21] LEE S, CHEN L, WANG J H, et al. Uncertainty guided policy for active robotic 3D reconstruction using neural radiance fields[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 12070-12077.
[22] ZHAN H Y, ZHENG J Y, XU Y, et al. ActiveRMAP: radiance field for active mapping and planning[EB/OL]. (2022) [2024-06-23]. https://ar5iv.labs.arxiv.org/html/2211.12656.
[23] SITZMANN V, ZOLLHÖFER M, WETZSTEIN G. Scene representation networks: continuous 3D-structure-aware neural scene representations[C]// The 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 101.
[24] MILDENHALL B, SRINIVASAN P P, ORTIZ-CAYON R, et al. Local light field fusion: practical view synthesis with prescriptive sampling guidelines[J]. ACM Transactions on Graphics, 2019, 38(4): 1-14.