Journal of Graphics ›› 2024, Vol. 45 ›› Issue (3): 539-547.DOI: 10.11996/JG.j.2095-302X.2024030539

• Computer Graphics and Virtual Reality •

Unsupervised clothing animation prediction for different styles

SHI Min1, ZHUO Xinru1, SUN Bilian1, HAN Guoqing1, ZHU Dengming2

  1. School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
    2. Prospective Research Laboratory, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
  • Received: 2023-08-22 Accepted: 2023-12-10 Online: 2024-06-30 Published: 2024-06-11
  • Contact: ZHU Dengming (1973-), associate researcher, Ph.D. His main research interests cover virtual reality, graphics, etc. E-mail: mdzhu@ict.ac.cn
  • About author:

    SHI Min (1975-), associate professor, Ph.D. Her main research interests cover graphics, virtual reality, etc. E-mail: shi_min@ncepu.edu.cn

  • Supported by:
    National Natural Science Foundation of China(61972379)

Abstract:

Clothing animation generation is a key technology in 3D animation, and clothing deformation, as its core, has long been a research focus in this field. Most existing clothing deformation methods target a single clothing style and require retraining whenever the style changes, which consumes time and increases computational cost. Moreover, most current methods train their networks in a supervised manner, incurring substantial data-preparation and training expenses. To address these challenges, an unsupervised clothing animation generation method applicable to different styles was proposed. Firstly, a learnable style feature representation was introduced to capture a probabilistic distribution model of the style-constrained motion latent space. Secondly, an unsupervised clothing deformation prediction network with style constraints was built on an encoder-decoder architecture. Furthermore, Transformer encoder-decoder layers were incorporated to extract temporal motion features. Finally, animation generation experiments on multiple styles compared the proposed method with existing methods in terms of visual quality and quantitative metrics. Experimental results demonstrated that the proposed method can generate visually plausible clothing animations with adjustable styles, outperforming existing methods in prediction accuracy while reducing penetration loss.
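The architecture described in the abstract (a learnable style feature conditioning an encoder-decoder network with Transformer layers for temporal motion features) can be illustrated with a minimal sketch. This is not the authors' code: the module names, feature dimensions, vertex count, and the use of `nn.Embedding` for the style representation are all assumptions introduced for illustration.

```python
# Hypothetical sketch of a style-conditioned Transformer encoder-decoder
# that maps a body-motion sequence to per-frame garment vertex offsets.
# All dimensions and names are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class StyleConditionedDeformNet(nn.Module):
    def __init__(self, motion_dim=72, num_styles=4, style_dim=16,
                 d_model=128, num_verts=1000):
        super().__init__()
        # Learnable style features: one embedding vector per clothing style,
        # optimized jointly with the network so styles stay adjustable.
        self.style_emb = nn.Embedding(num_styles, style_dim)
        self.in_proj = nn.Linear(motion_dim + style_dim, d_model)
        # Transformer encoder-decoder layers extract temporal motion features.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        # Decode each frame's latent code into garment vertex displacements.
        self.out_proj = nn.Linear(d_model, num_verts * 3)

    def forward(self, motion, style_id):
        # motion: (batch, frames, motion_dim); style_id: (batch,)
        b, t, _ = motion.shape
        style = self.style_emb(style_id)             # (b, style_dim)
        style = style.unsqueeze(1).expand(b, t, -1)  # broadcast over frames
        x = self.in_proj(torch.cat([motion, style], dim=-1))
        h = self.transformer(x, x)                   # temporal features
        return self.out_proj(h).view(b, t, -1, 3)    # per-vertex offsets

net = StyleConditionedDeformNet()
motion = torch.randn(2, 8, 72)                       # 2 sequences, 8 frames
offsets = net(motion, torch.tensor([0, 3]))          # two different styles
print(offsets.shape)                                 # (2, 8, 1000, 3)
```

In an unsupervised setting, such a network would be trained not against ground-truth simulated meshes but with physically motivated losses (e.g. stretching, bending, and body-penetration terms), consistent with the abstract's emphasis on avoiding supervised data preparation.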

Key words: clothing animation, unsupervised, computer graphics, clothing deformation, Transformer

CLC Number: