[1] 郭全中, 张金熠. ChatGPT的技术特征与应用前景[J]. 中国传媒科技, 2023(1): 159-160.
GUO Q Z, ZHANG J Y. Technical characteristics and application prospect of ChatGPT[J]. Media Science and Technology of China, 2023(1): 159-160 (in Chinese).

[2] 王树义, 张庆薇. ChatGPT给科研工作者带来的机遇与挑战[J]. 图书馆论坛, 2023, 43(3): 109-118.
WANG S Y, ZHANG Q W. ChatGPT's opportunities and challenges for researchers[J]. Library Tribune, 2023, 43(3): 109-118 (in Chinese).

[3] 姜奇平. ChatGPT大火下的冷思考[J]. 互联网周刊, 2023(4): 6.
JIANG Q P. Cold thinking amid the ChatGPT craze[J]. China Internet Week, 2023(4): 6 (in Chinese).

[4] 兰顺正. 在享受ChatGPT的便利时, 也要看到其挑战[J]. 世界知识, 2023(6): 72-73.
LAN S Z. While enjoying the convenience of ChatGPT, we should also see its challenges[J]. World Affairs, 2023(6): 72-73 (in Chinese).

[5] 李砚祖. 传统工艺美术的当代性与地域性: 再谈传统工艺美术的保护与发展[J]. 南京艺术学院学报: 美术与设计版, 2008(1): 5-9.
LI Y Z. Contemporary and regional characteristics of traditional arts and crafts: on the protection and development of traditional arts and crafts[J]. Journal of Nanjing Arts Institute: Fine Arts & Design, 2008(1): 5-9 (in Chinese).

[6] 李砚祖. 物质与非物质: 传统工艺美术的保护与发展[J]. 文艺研究, 2006(12): 106-117, 168.
LI Y Z. Material and immaterial: protection and development of traditional arts and crafts[J]. Literature & Art Studies, 2006(12): 106-117, 168 (in Chinese).

[7] 李砚祖. 传统工艺美术的再发现[J]. 美术观察, 2007(7): 16-17.
LI Y Z. Rediscovery of traditional arts and crafts[J]. Art Observation, 2007(7): 16-17 (in Chinese).

[8] 徐艺乙. 当下传统工艺美术的问题与思考[J]. 贵州社会科学, 2014(3): 29-33.
XU Y Y. Problems and reflections on contemporary traditional arts and crafts[J]. Guizhou Social Sciences, 2014(3): 29-33 (in Chinese).

[9] GU G, KO B, GO S, et al. Towards light-weight and real-time line segment detection[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36(1): 726-734.

[10] 金峰. ChatGPT火爆“出圈”为AI发展添薪助力[J]. 通信世界, 2023(3): 5.
JIN F. ChatGPT's viral popularity adds fuel to AI development[J]. Communications World, 2023(3): 5 (in Chinese).

[11] SINGER U, POLYAK A, HAYES T, et al. Make-A-Video: text-to-video generation without text-video data[EB/OL]. [2022-12-20]. https://arxiv.org/abs/2209.14792.

[12] HONG W Y, DING M, ZHENG W D, et al. CogVideo: large-scale pretraining for text-to-video generation via transformers[EB/OL]. [2022-12-13]. https://arxiv.org/abs/2205.15868.

[13] FU T J, LI L J, GAN Z, et al. VIOLET: end-to-end video-language transformers with masked visual-token modeling[EB/OL]. [2022-12-23]. https://arxiv.org/abs/2111.12681.

[14] VILLEGAS R, BABAEIZADEH M, KINDERMANS P J, et al. Phenaki: variable length video generation from open domain textual description[EB/OL]. [2022-12-13]. https://arxiv.org/abs/2210.02399.

[15] BROOKS T, HOLYNSKI A, EFROS A A. InstructPix2Pix: learning to follow image editing instructions[EB/OL]. [2022-12-23]. https://arxiv.org/abs/2211.09800.

[16] BAO H B, WANG W H, DONG L, et al. VLMo: unified vision-language pre-training with mixture-of-modality-experts[EB/OL]. [2022-12-23]. https://arxiv.org/abs/2111.02358.

[17] LI J N, LI D X, XIONG C M, et al. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models[EB/OL]. [2022-12-23]. https://arxiv.org/abs/2201.12086.

[18] RAMESH A, PAVLOV M, GOH G, et al. Zero-shot text-to-image generation[EB/OL]. [2022-12-13]. https://arxiv.org/abs/2102.12092.

[19] KADING C, RODNER E, FREYTAG A, et al. Fine-tuning deep neural networks in continuous learning scenarios[EB/OL]. [2022-12-13]. https://pub.inf-cv.uni-jena.de/pdf/Kaeding16_FDN.pdf.

[20] 朱若琳, 蓝善祯, 朱紫星. 视觉-语言多模态预训练模型前沿进展[J]. 中国传媒大学学报: 自然科学版, 2023, 30(1): 66-74.
ZHU R L, LAN S Z, ZHU Z X. A survey on vision-language multimodality pre-training[J]. Journal of Communication University of China: Science and Technology, 2023, 30(1): 66-74 (in Chinese).