
Journal of Graphics (图学学报)

• Virtual Reality / Augmented Reality •

  • Supported by:
    National Key Laboratory Foundation project (SYFD160051807)

On Multi-Somatosensory Driven Method for Virtual Interactive Operation Training of Astronaut

  1. Astronaut Centre of China, Beijing 100094, China
  • Online: 2018-08-31  Published: 2018-08-21


Abstract: To address the problem of natural human-computer interaction in the virtual training of astronauts, a multi-somatosensory data-fusion driven method is proposed based on posture/gesture recognition and human motion characteristics. Combining the strength of the Kinect device, which can recognize the full-body posture, with that of the LeapMotion device, which can accurately identify hand gestures, a judgment-based data transfer method is put forward: hand joints are recognized, and the related data are processed and calculated, on the basis of whole-body joint recognition, and the two channels are then combined by multi-channel somatosensory fusion. Experiments were carried out, and the results show that, by using LeapMotion and Kinect jointly to recognize the hand, more precise gesture recognition can be added on top of full-body somatosensory recognition whenever the gesture falls within LeapMotion's recognition range. The method thus successfully combines human posture recognition with precise gesture recognition and can be applied to natural human-computer interaction in astronaut virtual training.
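The judgment-based fusion described in the abstract can be sketched as follows. This is an illustrative example only, not the paper's implementation: all class names, joint labels, and the confidence threshold are hypothetical, standing in for the actual Kinect and LeapMotion SDK data structures.

```python
# Sketch of judgment-based data transfer: start from the Kinect full-body
# skeleton and, when the hand is reliably inside LeapMotion's recognition
# range, extend/override the hand joints with the more precise data.
# All names here are hypothetical placeholders for the real SDK types.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class BodyFrame:
    """Full-body joint positions from Kinect."""
    joints: Dict[str, Vec3]

@dataclass
class HandFrame:
    """Precise hand-joint positions from LeapMotion."""
    joints: Dict[str, Vec3]
    confidence: float  # low when the hand leaves the sensor's range

def fuse(body: BodyFrame, hand: Optional[HandFrame],
         min_confidence: float = 0.5) -> Dict[str, Vec3]:
    """Multi-channel fusion: Kinect skeleton as the base, LeapMotion hand
    joints merged in only when the hand is within recognition range."""
    fused = dict(body.joints)
    if hand is not None and hand.confidence >= min_confidence:
        # Judgment passed: the hand is in LeapMotion's range,
        # so take its precise finger joints.
        fused.update(hand.joints)
    return fused

# Example: Kinect provides a coarse wrist joint; LeapMotion refines
# the fingertip whenever the hand is tracked with high confidence.
body = BodyFrame(joints={"wrist_r": (0.4, 1.1, 0.3)})
hand = HandFrame(joints={"index_tip_r": (0.45, 1.15, 0.28)}, confidence=0.9)
print(sorted(fuse(body, hand)))
```

When the hand leaves the LeapMotion range (low confidence, or no hand frame at all), the driver falls back to the Kinect skeleton alone, so the virtual body remains continuously driven.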

Key words: astronaut, virtual training, somatosensory recognition, data fusion, interaction