Journal of Graphics ›› 2024, Vol. 45 ›› Issue (4): 631-649.DOI: 10.11996/JG.j.2095-302X.2024040631
• Review •
DONG Xiangtao1, MA Xin1, PAN Chengwei2, LU Peng1
Received: 2023-11-22
Accepted: 2024-02-03
Online: 2024-08-31
Published: 2024-09-02
Contact: LU Peng
About author: First author: DONG Xiangtao (1998-), master's student. His main research interests cover computer vision and 3D reconstruction. E-mail: dxt185@bupt.edu.cn
DONG Xiangtao, MA Xin, PAN Chengwei, LU Peng. A review of neural radiance fields for outdoor large scenes[J]. Journal of Graphics, 2024, 45(4): 631-649.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2024040631
Challenge | Representative works | Approach
---|---|---
Unbounded scenes | Refs. [50,80] | Inverted-sphere parameterization
 | Refs. [81-82] | Nonlinear scene parameterization
 | Ref. [83] | Perspective warping
Large-scene modeling | Refs. [1,50,84] | Scene decomposition
 | Ref. [85] | Grid-based representation
 | Refs. [2,86] | Depth supervision
Scene appearance and dynamic objects | Refs. [87-88] | Appearance modeling
 | Refs. [44-46] | Dynamic-object modeling
Model generalization | Refs. [28-29,47] | Convolutional neural networks
 | Refs. [48-49] | Epipolar information
Table 1 Summary of NeRF research work in outdoor large scenes
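The "nonlinear scene parameterization" row in Table 1 refers to contraction functions that warp an unbounded scene into a bounded domain, as in Mip-NeRF 360 [81]. A minimal NumPy sketch of that contraction (illustrative only, not the paper's implementation; the `1e-12` floor is an added safeguard against division by zero at the origin):

```python
import numpy as np

def contract(x):
    # Identity inside the unit ball; points outside are pulled
    # smoothly into a ball of radius 2, so arbitrarily distant
    # background geometry still fits in a bounded volume.
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    safe = np.maximum(n, 1e-12)  # avoid divide-by-zero at the origin
    return np.where(n <= 1.0, x, (2.0 - 1.0 / safe) * (x / safe))
```

Because the contracted radius is `2 - 1/||x||`, every point in space lands strictly inside radius 2, which is what lets a fixed-size grid or MLP cover an unbounded outdoor scene.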
Method | PSNR↑ | SSIM↑ | LPIPS↓
---|---|---|---|
NeRF++ | 23.47 | 0.603 | 0.499 |
Mip-NeRF360 | 27.01 | 0.766 | 0.295 |
Mip-NeRF360(short) | 22.04 | 0.537 | 0.586 |
MERF | 23.19 | 0.616 | 0.343 |
F2-NeRF | 26.32 | 0.779 | 0.276 |
Table 2 Performance of three classes of methods for the unbounded-scene problem on an outdoor dataset[82-83]
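The metrics reported in Tables 2-6 are standard image-quality measures: PSNR and SSIM (higher is better) and LPIPS (lower is better). As a reference for how the simplest of these is computed, a minimal PSNR sketch (assuming images normalized to [0,1]; `max_val` is a parameter introduced here for illustration):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    # Peak signal-to-noise ratio in decibels: log-scaled ratio of the
    # maximum possible pixel value to the mean squared error.
    mse = np.mean((np.asarray(pred) - np.asarray(gt)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A 1 dB gap (e.g. Mega-NeRF's 20.93 vs. Switch-NeRF's 21.54 in Table 3) corresponds to roughly a 21% reduction in mean squared error.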
Method | PSNR↑ | SSIM↑ | LPIPS↓
---|---|---|---|
Mega-NeRF | 20.93 | 0.547 | 0.504 |
Switch-NeRF | 21.54 | 0.579 | 0.474 |
Table 3 Performance comparison of Mega-NeRF and Switch-NeRF on aerial building datasets[84]
Method | PSNR↑ | SSIM↑ | LPIPS↓
---|---|---|---|
Mega-NeRF | 23.42 | 0.537 | 0.618 |
Switch-NeRF | 23.62 | 0.541 | 0.609 |
Table 4 Performance comparison of Mega-NeRF and Switch-NeRF on aerial campus datasets[84]
Method | PSNR↑ | SSIM↑ | LPIPS↓
---|---|---|---|
S-NeRF | 23.60 | 0.743 | 0.422 |
Urban-NeRF | 17.80 | 0.494 | 0.701 |
Table 5 Comparison of the performance of S-NeRF and Urban-NeRF on the Waymo dataset[86]
Method | PSNR↑ | SSIM↑ | LPIPS↓
---|---|---|---|
PixelNeRF | 19.31 | 0.789 | 0.671 |
IBRNet | 26.04 | 0.917 | 0.190 |
MVSNeRF | 26.63 | 0.931 | 0.168 |
EVE-NeRF | 27.80 | 0.937 | 0.149 |
PBNR | 28.50 | 0.932 | 0.167 |
Table 6 Comparison of performance of five generalization methods on the DTU dataset[48-49]
[1] | TANCIK M, CASSER V, YAN X C, et al. Block-NeRF: scalable large scene neural view synthesis[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 8238-8248. |
[2] | REMATAS K, LIU A, SRINIVASAN P, et al. Urban radiance fields[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 12922-12932. |
[3] | LI Z P, LI L, ZHU J K. READ: large-scale neural scene rendering for autonomous driving[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(2): 1522-1529. |
[4] | ZHU Z H, PENG S Y, LARSSON V, et al. NICE-SLAM: neural implicit scalable encoding for SLAM[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 12776-12786. |
[5] | MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[M]//Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 405-421. |
[6] | CHEN Y B, LIU S F, WANG X L. Learning continuous image representation with local implicit image function[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 8624-8634. |
[7] | XU X Q, WANG Z Y, SHI H. UltraSR: spatial encoding is a missing key for implicit image function-based arbitrary-scale super-resolution[EB/OL]. [2023-09-10]. https://arxiv.org/abs/2103.12716. |
[8] | SKOROKHODOV I, IGNATYEV S, ELHOSEINY M. Adversarial generation of continuous images[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 10748-10759. |
[9] | SHAHAM T R, GHARBI M, ZHANG R, et al. Spatially-adaptive pixelwise networks for fast image translation[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 14877-14886. |
[10] | CHEN Z Q, ZHANG H. Learning implicit fields for generative shape modeling[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 5932-5941. |
[11] | MESCHEDER L, OECHSLE M, NIEMEYER M, et al. Occupancy networks: learning 3D reconstruction in function space[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 4455-4465. |
[12] | PARK J J, FLORENCE P, STRAUB J, et al. DeepSDF: learning continuous signed distance functions for shape representation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 165-174. |
[13] | SUBAKAN C, RAVANELLI M, CORNELL S, et al. Attention is all you need in speech separation[C]// ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing. New York: IEEE Press, 2021: 21-25. |
[14] | BARRON J T, MILDENHALL B, TANCIK M, et al. Mip-NeRF: a multiscale representation for anti-aliasing neural radiance fields[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5835-5844. |
[15] | VERBIN D, HEDMAN P, MILDENHALL B, et al. Ref-NeRF: structured view-dependent appearance for neural radiance fields[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5481-5490. |
[16] | PARK K, SINHA U, HEDMAN P, et al. HyperNeRF: a higher-dimensional representation for topologically varying neural radiance fields[J]. ACM Transactions on Graphics, 2021, 40(6): 238:1-238:12. |
[17] | MILDENHALL B, HEDMAN P, MARTIN-BRUALLA R, et al. NeRF in the dark: high dynamic range view synthesis from noisy raw images[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 16169-16178. |
[18] | LIU L J, GU J T, LIN K Z, et al. Neural sparse voxel fields[EB/OL]. [2023-09-10]. https://arxiv.org/abs/2007.11571. |
[19] | LINDELL D B, MARTEL J N P, WETZSTEIN G. AutoInt: automatic integration for fast neural volume rendering[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 14551-14560. |
[20] | MÜLLER T, EVANS A, SCHIED C, et al. Instant neural graphics primitives with a multiresolution hash encoding[J]. ACM Transactions on Graphics, 2022, 41(4): 1-15. |
[21] | HEDMAN P, SRINIVASAN P P, MILDENHALL B, et al. Baking neural radiance fields for real-time view synthesis[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5855-5864. |
[22] | YU A, LI R L, TANCIK M, et al. PlenOctrees for real-time rendering of neural radiance fields[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5732-5741. |
[23] | GARBIN S J, KOWALSKI M, JOHNSON M, et al. FastNeRF: high-fidelity neural rendering at 200FPS[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 14326-14335. |
[24] | REISER C, PENG S Y, LIAO Y Y, et al. KiloNeRF: speeding up neural radiance fields with thousands of tiny MLPs[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 14315-14325. |
[25] | FRIDOVICH-KEIL S, YU A, TANCIK M, et al. Plenoxels: radiance fields without neural networks[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5491-5500. |
[26] | SUN C, SUN M, CHEN H T. Direct voxel grid optimization: super-fast convergence for radiance fields reconstruction[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5449-5459. |
[27] | CHEN A P, XU Z X, GEIGER A, et al. TensoRF: tensorial radiance fields[C]// The 17th European Conference on Computer Vision. Cham: Springer, 2022: 333-350. |
[28] | CHEN A P, XU Z X, ZHAO F Q, et al. MVSNeRF: fast generalizable radiance field reconstruction from multi-view stereo[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 14104-14113. |
[29] | YU A, YE V, TANCIK M, et al. pixelNeRF: neural radiance fields from one or few images[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 4576-4585. |
[30] | LIU Y, PENG S D, LIU L J, et al. Neural rays for occlusion-aware image-based rendering[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 7814-7823. |
[31] | JAIN A, TANCIK M, ABBEEL P. Putting NeRF on a diet: semantically consistent few-shot view synthesis[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5865-5874. |
[32] | DENG K L, LIU A, ZHU J Y, et al. Depth-supervised NeRF: fewer views and faster training for free[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 12872-12881. |
[33] | NIEMEYER M, GEIGER A. GIRAFFE: representing scenes as compositional generative neural feature fields[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 11448-11459. |
[34] | SCHWARZ K, LIAO Y Y, NIEMEYER M, et al. GRAF: generative radiance fields for 3D-aware image synthesis[EB/OL]. [2023-09-10]. https://arxiv.org/abs/2007.02442. |
[35] | MENG Q, CHEN A P, LUO H M, et al. GNeRF: GAN-based neural radiance field without posed camera[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 6331-6341. |
[36] | KOSIOREK A R, STRATHMANN H, ZORAN D, et al. NeRF-VAE: a geometry aware 3D scene generative model[EB/OL]. [2023-09-10]. http://arxiv.org/abs/2104.00587. |
[37] | LIU S, ZHANG X M, ZHANG Z T, et al. Editing conditional radiance fields[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5753-5763. |
[38] | WANG C, CHAI M L, HE M M, et al. CLIP-NeRF: text-and-image driven manipulation of neural radiance fields[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 3825-3834. |
[39] | WANG R B, ZHANG S, HUANG P, et al. Semantic is enough: only semantic information for NeRF reconstruction[C]// 2023 IEEE International Conference on Unmanned Systems. New York: IEEE Press, 2023: 906-912. |
[40] | KUNDU A, GENOVA K, YIN X Q, et al. Panoptic neural fields: a semantic object-aware neural scene representation[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 12861-12871. |
[41] | SUCAR E, LIU S K, ORTIZ J, et al. iMAP: implicit mapping and positioning in real-time[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 6209-6218. |
[42] | LIN C H, MA W C, TORRALBA A, et al. BARF: bundle-adjusting neural radiance fields[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5721-5731. |
[43] | JEONG Y, AHN S, CHOY C, et al. Self-calibrating neural radiance fields[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5826-5834. |
[44] | PUMAROLA A, CORONA E, PONS-MOLL G, et al. D-NeRF: neural radiance fields for dynamic scenes[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 10313-10322. |
[45] | FANG J M, YI T R, WANG X G, et al. Fast dynamic radiance fields with time-aware neural voxels[EB/OL]. [2023-09-10]. https://arxiv.org/pdf/2205.15285.pdf. |
[46] | TURKI H, ZHANG J Y, FERRONI F, et al. SUDS: scalable urban dynamic scenes[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 12375-12385. |
[47] | WANG Q Q, WANG Z C, GENOVA K, et al. IBRNet: learning multi-view image-based rendering[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 4688-4697. |
[48] | SUHAIL M, ESTEVES C, SIGAL L, et al. Generalizable patch-based neural rendering[EB/OL]. [2023-09-10]. http://arxiv.org/abs/2207.10662. |
[49] | MIN Z Y, LUO Y W, YANG W, et al. Entangled view-epipolar information aggregation for generalizable neural radiance fields[EB/OL]. [2023-09-10]. https://arxiv.org/abs/2311.11845. |
[50] | TURKI H, RAMANAN D, SATYANARAYANAN M. Mega-NeRF: scalable construction of large-scale NeRFs for virtual fly- throughs[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 12912-12921. |
[51] | XIANGLI Y B, XU L N, PAN X G, et al. BungeeNeRF: progressive neural radiance field for extreme multi-scale scene rendering[C]// The 17th European Conference on Computer Vision. Cham: Springer, 2022: 106-122. |
[52] | JANG W, AGAPITO L. CodeNeRF: disentangled neural radiance fields for object categories[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 12929-12938. |
[53] | KANIA K, YI K M, KOWALSKI M, et al. CoNeRF: controllable neural radiance fields[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 18602-18611. |
[54] | XIE C, PARK K, MARTIN-BRUALLA R, et al. FiG-NeRF: figure-ground neural radiance fields for 3D object category modelling[C]// 2021 International Conference on 3D Vision (3DV). New York: IEEE Press, 2021: 962-971. |
[55] | MA L, LI X Y, LIAO J, et al. Deblur-NeRF: neural radiance fields from blurry images[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 12851-12860. |
[56] | HUANG X, ZHANG Q, FENG Y, et al. HDR-NeRF: high dynamic range neural radiance fields[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 18377-18387. |
[57] | LI Z S, MÜLLER T, EVANS A, et al. Neuralangelo: high-fidelity neural surface reconstruction[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 8456-8465. |
[58] | AZINOVIĆ D, MARTIN-BRUALLA R, GOLDMAN D B, et al. Neural RGB-D surface reconstruction[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 6280-6291. |
[59] | OECHSLE M, PENG S Y, GEIGER A. UNISURF: unifying neural implicit surfaces and radiance fields for multi-view reconstruction[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5569-5579. |
[60] | PARK K, SINHA U, BARRON J T, et al. Nerfies: deformable neural radiance fields[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5845-5854. |
[61] | HONG Y, PENG B, XIAO H Y, et al. HeadNeRF: a realtime NeRF-based parametric head model[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 20342-20352. |
[62] | PENG S D, ZHANG Y Q, XU Y H, et al. Neural body: implicit neural representations with structured latent codes for novel view synthesis of dynamic humans[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 9050-9059. |
[63] | WENG C, CURLESS B, SRINIVASAN P P, et al. HumanNeRF: free-viewpoint rendering of moving people from monocular video[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 16189-16199. |
[64] | SHAO R Z, ZHANG H W, ZHANG H, et al. DoubleField: bridging the neural surface and radiance fields for high-fidelity human reconstruction and rendering[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 15851-15861. |
[65] | ZHI Y H, QIAN S H, YAN X H, et al. Dual-space NeRF: learning animatable avatars and scene lighting in separate spaces[C]// 2022 International Conference on 3D Vision. New York: IEEE Press, 2022: 1-10. |
[66] | GAO K, GAO Y N, HE H J, et al. NeRF: neural radiance field in 3D vision, a comprehensive review[EB/OL]. [2023-09-10]. http://arxiv.org/abs/2210.00379. |
[67] | TEWARI A, FRIED O, THIES J, et al. Advances in neural rendering[C]// SIGGRAPH '21: Special Interest Group on Computer Graphics and Interactive Techniques. New York: ACM, 2021: 1-320. |
[68] | MITTAL A. Neural radiance fields: past, present, and future[EB/OL]. [2023-09-10]. https://arxiv.org/abs/2304.10050v1. |
[69] | CHENG H, WANG S, LI M, et al. A review of neural radiance field for autonomous driving scene[J]. Journal of Graphics, 2023, 44(6): 1091-1103 (in Chinese). |
[70] | CHANG Y, GAI M. A review on neural radiance fields based view synthesis[J]. Journal of Graphics, 2021, 42(3): 376-384 (in Chinese). |
[71] | MILDENHALL B, SRINIVASAN P P, ORTIZ-CAYON R, et al. Local light field fusion: practical view synthesis with prescriptive sampling guidelines[J]. ACM Transactions on Graphics, 2019, 38(4): 29:1-29:14. |
[72] | DAI A, CHANG A X, SAVVA M, et al. ScanNet: richly-annotated 3D reconstructions of indoor scenes[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 2432-2443. |
[73] | JENSEN R, DAHL A, VOGIATZIS G, et al. Large scale multi-view stereopsis evaluation[C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2014: 406-413. |
[74] | KNAPITSCH A, PARK J, ZHOU Q Y, et al. Tanks and temples: benchmarking large-scale scene reconstruction[J]. ACM Transactions on Graphics, 2017, 36(4): 78:1-78:13. |
[75] | GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset[J]. International Journal of Robotics Research, 2013, 32(11): 1231-1237. |
[76] | SUN P, KRETZSCHMAR H, DOTIWALLA X, et al. Scalability in perception for autonomous driving: waymo open dataset[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 2443-2451. |
[77] | CAESAR H, BANKITI V, LANG A H, et al. nuScenes: a multimodal dataset for autonomous driving[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 11618-11628. |
[78] | LI Y X, JIANG L H, XU L N, et al. MatrixCity: a large-scale city dataset for city-scale neural rendering and beyond[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 3182-3192. |
[79] | LU C S, YIN F K, CHEN X, et al. A large-scale outdoor multi-modal dataset and benchmark for novel view synthesis and implicit scene reconstruction[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 7523-7533. |
[80] | ZHANG K, RIEGLER G, SNAVELY N, et al. NeRF++: analyzing and improving neural radiance fields[EB/OL]. [2023-09-10]. http://arxiv.org/abs/2010.07492. |
[81] | BARRON J T, MILDENHALL B, VERBIN D, et al. Mip-NeRF 360: unbounded anti-aliased neural radiance fields[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5460-5469. |
[82] | REISER C, SZELISKI R, VERBIN D, et al. MERF: memory-efficient radiance fields for real-time view synthesis in unbounded scenes[J]. ACM Transactions on Graphics, 2023, 42(4): 89. |
[83] | WANG P, LIU Y, CHEN Z X, et al. F2-NeRF: fast neural radiance field training with free camera trajectories[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 4150-4159. |
[84] | MI Z X, XU D. Switch-NeRF: learning scene decomposition with mixture of experts for large-scale neural radiance fields[EB/OL]. [2023-09-10]. https://openreview.net/pdf?id=PQ2zoIZqvm. |
[85] | XU L N, XIANGLI Y B, PENG S D, et al. Grid-guided neural radiance fields for large urban scenes[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 8296-8306. |
[86] | XIE Z Y, ZHANG J G, LI W Y, et al. S-NeRF: neural radiance fields for street views[EB/OL]. [2023-09-10]. http://arxiv.org/abs/2303.00749. |
[87] | MARTIN-BRUALLA R, RADWAN N, SAJJADI M S M, et al. NeRF in the wild: neural radiance fields for unconstrained photo collections[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 7206-7215. |
[88] | RUDNEV V, ELGHARIB M, SMITH W, et al. NeRF for outdoor scene relighting[C]// European Conference on Computer Vision. Cham: Springer, 2022: 615-631. |
[89] | KARNEWAR A, RITSCHEL T, WANG O, et al. ReLU fields: the little non-linearity that could[C]// SIGGRAPH '22: Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings. New York: ACM, 2022: 27:1-27:9. |
[90] | SEHGAL S, SINGH H, AGARWAL M, et al. Data analysis using principal component analysis[C]// 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems. New York: IEEE Press, 2014: 45-48. |
[91] | KERBL B, KOPANAS G, LEIMKUEHLER T, et al. 3D Gaussian splatting for real-time radiance field rendering[J]. ACM Transactions on Graphics, 2023, 42(4): 139:1-139:14. |
[92] | PARK J, JOO K, HU Z, et al. Non-local spatial propagation network for depth completion[C]// The 16th European Conference on Computer Vision. Cham: Springer, 2020: 120-136. |