
Journal of Graphics


Pose-guided scene-preserving person video generation algorithm

  

  1. (School of Electrical Engineering and Automation, Anhui University, Hefei, Anhui 230601, China)
  • Online: 2020-08-31    Published: 2020-08-22
  • Supported by:
    National Natural Science Foundation of China (61572029); Anhui Outstanding Youth Fund (1908085J25)

Abstract: Person video generation learns feature representations of human body structure and motion,
and maps these representations to generated character video frames. To address two shortcomings of
existing person video generation algorithms, namely the unwanted alteration of the background
environment and the low accuracy of human pose estimation, a pose-guided scene-preserving person
video generation algorithm was proposed. First, appropriate source and target videos were selected,
and video frames containing the segmented character appearance, rather than the raw source frames,
served as the network input. Then, a GAN-based motion transfer model replaced the character in the
source video with the target character while keeping the motion consistent. Finally, Poisson image
editing was used to fuse the character appearance with the source background, which offered the
following advantages: (a) removing anomalous border pixels; (b) blending the character naturally into
the source scene; and (c) avoiding changes to the background environment and the overall image style.
By using the segmented foreground person image instead of the source video frame, the proposed
algorithm reduced background interference and improved the accuracy of pose estimation, thereby
preserving the scene during motion transfer and producing artistic and authentic person videos.
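
As a rough illustration of the final fusion step, the sketch below uses OpenCV's seamlessClone, an
implementation of Poisson image editing, to paste a generated person frame back into the source
background. The file names, the mask source, and the placement are illustrative assumptions for this
example and do not reproduce the authors' implementation.

```python
# Minimal sketch of the scene-preserving fusion step, assuming OpenCV and NumPy.
import cv2
import numpy as np

# Generated target-person frame (foreground to paste) and the original
# source-video frame whose background should be preserved.
generated_person = cv2.imread("generated_person_frame.png")
source_frame = cv2.imread("source_background_frame.png")

# Binary mask of the person region, e.g. produced by the segmentation step
# mentioned in the abstract (loaded here as a grayscale image and thresholded).
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Place the blended person at the centroid of the mask so the pose stays
# aligned with the source frame.
ys, xs = np.nonzero(mask)
center = (int(xs.mean()), int(ys.mean()))

# Poisson (seamless) cloning suppresses anomalous border pixels and blends the
# character into the source scene without altering the background or the
# overall image style.
fused = cv2.seamlessClone(generated_person, source_frame, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("fused_frame.png", fused)
```

In a full pipeline this blending would be applied per frame after the GAN-based motion transfer,
so the only pixels that change in each source frame are those inside the person mask.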

Key words: person video generation, pose estimation, motion transfer, generative adversarial networks, image processing