[1] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]// The 27th International Conference on Neural Information Processing Systems. New York: ACM, 2014: 2672-2680.
[2] SUN J X, WANG X, ZHANG Y, et al. FENeRF: face editing in neural radiance fields[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 7672-7682.
[3] MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[EB/OL]. [2024-01-19]. http://arxiv.org/abs/2003.08934.
[4] WANG S C, DUAN Y Q, DING H H, et al. Learning transferable human-object interaction detector with natural language supervision[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 939-948.
[5] SAHARIA C, CHAN W, SAXENA S, et al. Photorealistic text-to-image diffusion models with deep language understanding[C]// The 36th International Conference on Neural Information Processing Systems. New York: ACM, 2022: 36479-36494.
[6] BROOKS T, HOLYNSKI A, EFROS A A. InstructPix2Pix: learning to follow image editing instructions[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 18392-18402.
[7] KAMATA H, SAKUMA Y, HAYAKAWA A, et al. Instruct 3D-to-3D: text instruction guided 3D-to-3D conversion[EB/OL]. [2024-01-19]. http://arxiv.org/abs/2303.15780.
[8] POOLE B, JAIN A, BARRON J T, et al. DreamFusion: text-to-3D using 2D diffusion[EB/OL]. [2024-01-19]. http://arxiv.org/abs/2209.14988.
[9] HAQUE A, TANCIK M, EFROS A A, et al. Instruct-NeRF2NeRF: editing 3D scenes with instructions[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 19740-19750.
[10] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 586-595.
[11] 范腾, 杨浩, 尹稳, 等. 基于神经辐射场的多尺度视图合成研究[J]. 图学学报, 2023, 44(6): 1140-1148.
FAN T, YANG H, YIN W, et al. Multi-scale view synthesis based on neural radiance field[J]. Journal of Graphics, 2023, 44(6): 1140-1148 (in Chinese).
[12] TANCIK M, WEBER E, NG E, et al. Nerfstudio: a modular framework for neural radiance field development[C]// SIGGRAPH '23: ACM SIGGRAPH 2023 Conference Proceedings. New York: ACM, 2023: 1-12.
[13] BARRON J T, MILDENHALL B, VERBIN D, et al. Mip-NeRF 360: unbounded anti-aliased neural radiance fields[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5470-5479.
[14] 岳明宇, 高希峰, 毕重科. 三维建筑模型的低模网格生成[J]. 图学学报, 2023, 44(4): 764-774.
YUE M Y, GAO X F, BI C K. 3D low-poly mesh generation for building models[J]. Journal of Graphics, 2023, 44(4): 764-774 (in Chinese).
[15] KAWAR B, ZADA S, LANG O, et al. Imagic: text-based real image editing with diffusion models[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 6007-6017.
[16] WINATA G I, MADOTTO A, LIN Z J, et al. Language models are few-shot multilingual learners[EB/OL]. [2024-01-19]. http://arxiv.org/abs/2109.07684.
[17] TAKAGI Y, NISHIMOTO S. High-resolution image reconstruction with latent diffusion models from human brain activity[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 14453-14463.
[18] BAO C, ZHANG Y, YANG B, et al. SINE: semantic-driven image-based NeRF editing with prior-guided editing field[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 20919-20929.
[19] 王吉, 王森, 蒋智文, 等. 基于深度条件扩散模型的零样本文本驱动虚拟人生成方法[J]. 图学学报, 2023, 44(6): 1218-1226.
WANG J, WANG S, JIANG Z W, et al. Zero-shot text-driven avatar generation based on depth-conditioned diffusion model[J]. Journal of Graphics, 2023, 44(6): 1218-1226 (in Chinese).
[20] WANG Z Y, LU C, WANG Y K, et al. ProlificDreamer: high-fidelity and diverse text-to-3D generation with variational score distillation[EB/OL]. [2024-01-19]. http://arxiv.org/abs/2305.16213.
[21] BHAT S F, BIRKL R, WOFK D, et al. ZoeDepth: zero-shot transfer by combining relative and metric depth[EB/OL]. [2024-01-19]. http://arxiv.org/abs/2302.12288.
[22] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[23] WANG C, JIANG R X, CHAI M L, et al. NeRF-Art: text-driven neural radiance fields stylization[EB/OL]. [2024-01-19]. https://arxiv.org/abs/2212.08070.