
Journal of Graphics ›› 2025, Vol. 46 ›› Issue (4): 837-846. DOI: 10.11996/JG.j.2095-302X.2025040837

• Computer Graphics and Virtual Reality •

Adaptive two-hand reconstruction network for monocular visible light environments

LIAO Guoqiong1,2, HUANG Longjie1, LI Qingxin2, GU Yong3, LI Haibo1,4

  1. Modern Industry School of Virtual Reality (VR), Jiangxi University of Finance and Economics, Nanchang Jiangxi 330032, China
    2. School of Information Management and Math, Jiangxi University of Finance and Economics, Nanchang Jiangxi 330013, China
    3. School of Software and Internet of Things Engineering, Jiangxi University of Finance and Economics, Nanchang Jiangxi 330013, China
    4. KTH Royal Institute of Technology, Stockholm SE-100 44, Sweden
  • Received:2024-10-12 Revised:2025-02-18 Online:2025-08-30 Published:2025-08-11
  • About author:

    LIAO Guoqiong (1969-), professor, Ph.D. His main research interests include human-computer interaction. E-mail: liaoguoqiong@163.com

  • Supported by:
    Graduate Innovation Special Project in Jiangxi Province(YC2024-S392)

Abstract:

Accurate hand mesh reconstruction is crucial for a natural human-computer interaction experience, yet the task remains highly challenging due to hand occlusion, the difficulty of collecting hand interaction data outdoors, and interference from complex lighting environments. Most existing work achieves good results in laboratory settings and other low-interference environments, but reconstruction performance in complex lighting scenes remains poor. To address these problems, an adaptive two-hand reconstruction network was proposed for monocular visible light environments. By introducing single-hand detection boxes and using a 2D complex-lighting-scene dataset for weak supervision, the model generalizes to complex lighting scenarios. The designed hand feature interaction module effectively established long-range dependencies between the left-hand and right-hand features, alleviating the lack of hand interaction information within a single-hand detection box. The designed adaptive fusion strategy effectively integrated the interaction features with the single-hand features, enhancing the robustness of the model. Experimental results demonstrated that the proposed method achieved the best results on the HIC dataset, which comprises multiple complex lighting scenarios.
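To make the two mechanisms the abstract names more concrete, the following is a minimal NumPy sketch of the general pattern behind them: single-head cross-attention letting one hand's feature tokens attend to the other hand's (a standard way to establish long-range cross-hand dependencies), followed by a sigmoid-gated fusion that adaptively weighs interaction features against single-hand features. The paper's actual module design, dimensions, and fusion weights are not given here, so every function name and shape below is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context):
    """One hand's tokens (query) attend to the other hand's tokens (context).

    query:   (Nq, d) single-hand features, e.g. the left hand
    context: (Nc, d) features of the other hand
    returns: (Nq, d) interaction features for the query hand
    """
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)      # (Nq, Nc) similarity
    return softmax(scores, axis=-1) @ context    # weighted sum over context

def adaptive_fuse(f_single, f_inter, w, b):
    """Gated fusion: a learned sigmoid gate blends interaction and
    single-hand features per channel (gate near 1 trusts interaction)."""
    joint = np.concatenate([f_single, f_inter], axis=-1)   # (N, 2d)
    gate = 1.0 / (1.0 + np.exp(-(joint @ w + b)))          # (N, d) in (0, 1)
    return gate * f_inter + (1.0 - gate) * f_single

# Toy shapes: 16 tokens per hand, 64-dim features (illustrative only).
rng = np.random.default_rng(0)
left = rng.standard_normal((16, 64))
right = rng.standard_normal((16, 64))

left_inter = cross_attention(left, right)   # left attends to right
w = rng.standard_normal((128, 64)) * 0.01   # stand-in for learned weights
b = np.zeros(64)
left_fused = adaptive_fuse(left, left_inter, w, b)
```

In a trained network the gate parameters `w` and `b` would be learned end to end, letting the model fall back on single-hand features when the cross-hand cues are unreliable (e.g. under heavy occlusion), which is one plausible reading of the "adaptive" fusion described above.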

Key words: complex lighting, hand mesh, two-hand interaction, weak supervision, feature fusion
