
Journal of Graphics ›› 2024, Vol. 45 ›› Issue (3): 539-547. DOI: 10.11996/JG.j.2095-302X.2024030539

• Computer Graphics and Virtual Reality •


Unsupervised clothing animation prediction for different styles

SHI Min1, ZHUO Xinru1, SUN Bilian1, HAN Guoqing1, ZHU Dengming2

  1. School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
    2. Prospective Research Laboratory, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
  • Received: 2023-08-22 Accepted: 2023-12-10 Published: 2024-06-30 Online: 2024-06-11
  • First author: SHI Min (1975-), female, associate professor, Ph.D. Her main research interests cover computer graphics, virtual reality, etc. E-mail: shi_min@ncepu.edu.cn
  • Corresponding author: ZHU Dengming (1973-), male, associate research fellow, Ph.D. His main research interests cover virtual reality, computer graphics, etc. E-mail: mdzhu@ict.ac.cn
  • Supported by:
    National Natural Science Foundation of China (61972379)


Abstract:

Clothing animation generation for virtual characters is a key technology in 3D animation, and clothing deformation, as its core component, has long been a research focus in this field. Most existing clothing deformation methods are developed for a single clothing style and must be retrained whenever the style changes, which is time-consuming and computationally costly. In addition, most current methods train their networks in a supervised manner, requiring extensive data preparation and incurring high training costs. To address these challenges, an unsupervised clothing animation generation method applicable to different styles was proposed. First, a learnable style feature representation was introduced to learn a probabilistic distribution model of the style-constrained motion latent space. Second, an unsupervised, style-constrained clothing deformation prediction network was built on an encoder-decoder architecture, and a Transformer encoder-decoder layer was further incorporated to extract temporal motion features. Finally, multi-style animation generation experiments were conducted, and the proposed method was compared with existing methods in terms of visual quality and quantitative metrics. Experimental results demonstrated that, compared with existing methods, the proposed method can generate visually plausible clothing animations with adjustable styles, showing clear advantages in prediction accuracy and penetration loss.
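The abstract describes the network only at a high level. For orientation, the sketch below shows one plausible reading of that pipeline in PyTorch: a learnable per-style embedding conditions the motion input, Transformer layers extract temporal motion features, a VAE-style reparameterization models the style-constrained latent distribution, and a decoder regresses per-vertex garment displacements. The class name, all layer sizes, the number of styles, and the encoder-only Transformer are illustrative assumptions rather than the authors' exact architecture, and the unsupervised training objectives are omitted.

```python
# Minimal sketch (illustrative only): style-conditioned encoder-decoder
# with Transformer temporal layers and a VAE-style latent distribution.
import torch
import torch.nn as nn

class StyleConditionedDeformationNet(nn.Module):  # hypothetical name
    def __init__(self, pose_dim=72, style_dim=8, latent_dim=128,
                 num_styles=10, num_verts=4424, num_heads=4, num_layers=2):
        super().__init__()
        # Learnable style feature representation: one embedding per garment style.
        self.style_table = nn.Embedding(num_styles, style_dim)
        # Project each pose frame, concatenated with its style code, to a token.
        self.motion_proj = nn.Linear(pose_dim + style_dim, latent_dim)
        # Transformer layers extract temporal motion features across frames.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=num_heads, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Style-constrained latent distribution (VAE-style reparameterization).
        self.to_mu = nn.Linear(latent_dim, latent_dim)
        self.to_logvar = nn.Linear(latent_dim, latent_dim)
        # Decoder regresses per-vertex garment displacements for each frame.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_verts * 3))

    def forward(self, pose_seq, style_id):
        # pose_seq: (B, T, pose_dim) body-pose sequence; style_id: (B,) indices.
        B, T, _ = pose_seq.shape
        style = self.style_table(style_id).unsqueeze(1).expand(B, T, -1)
        tokens = self.motion_proj(torch.cat([pose_seq, style], dim=-1))
        h = self.temporal_encoder(tokens)                     # (B, T, latent_dim)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sample latent
        disp = self.decoder(z).view(B, T, -1, 3)              # per-vertex offsets
        return disp, mu, logvar

# Usage: predict deformations for two sequences with different garment styles.
# net = StyleConditionedDeformationNet()
# disp, mu, logvar = net(torch.randn(2, 16, 72), torch.tensor([0, 3]))
```

Because training is unsupervised, supervision from simulated ground-truth garments would be replaced by objectives evaluated on the predictions themselves; the penetration metric reported in the abstract is consistent with such a setup, but the paper's actual loss terms are not reproduced here.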

Key words: clothing animation, unsupervised, computer graphics, clothing deformation, Transformer

CLC number: