
Journal of Graphics ›› 2021, Vol. 42 ›› Issue (2): 190-197. DOI: 10.11996/JG.j.2095-302X.2021020190

• Image Processing and Computer Vision •

Style transfer algorithm for salient region preservation 


  1. The College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 200234, China; 2. Shanghai Engineering Research Center of Intelligent Education and Big Data, Shanghai Normal University, Shanghai 200234, China; 3. The Research Base of Online Education for Shanghai Middle and Primary Schools, Shanghai 200234, China; 4. School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; 5. School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
  • Online: 2021-04-30  Published: 2021-04-30
  • Supported by:
    National Natural Science Foundation of China (61775139, 62072126, 61772164, 61872242)

Abstract: Style transfer based on neural networks has become an active research topic in both academia and industry in recent years. Existing methods can apply different styles to a given content image to generate a stylized image, greatly enhancing visual effects and conversion efficiency. However, these methods focus on learning low-level image features, which easily causes the stylized image to lose the semantic information of the content image. To address this, an improved scheme was proposed to match the salient region of the stylized image with that of the content image. A saliency detection network was added to generate saliency maps of the composite image and the content image, and the loss between the two saliency maps was computed during training, so that the composite image maintains a salient region consistent with that of the content image, which helps improve the stylized result. Experiments show that the stylized images generated by the proposed style transfer model not only produce better visual effects but also retain the semantic information of the content image. Keeping salient regions undistorted is an important prerequisite for generating visually pleasing images, especially for content images with prominent salient regions.
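The saliency-preserving objective described in the abstract, a loss between the saliency maps of the content image and the composite image, added on top of the usual content and style losses, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the mean-squared-error form of the saliency term, and the loss weights are all assumptions.

```python
import numpy as np

def saliency_loss(sal_content, sal_composite):
    """Mean-squared error between the saliency map of the content image
    and that of the composite (stylized) image. Both maps are assumed to
    be arrays of the same shape with values in [0, 1]."""
    return float(np.mean((sal_content - sal_composite) ** 2))

def total_loss(content_loss, style_loss, sal_content, sal_composite,
               alpha=1.0, beta=10.0, gamma=5.0):
    """Combined training objective: content + style + saliency terms.
    The weights alpha, beta, gamma are illustrative placeholders, not
    values reported by the paper."""
    return (alpha * content_loss
            + beta * style_loss
            + gamma * saliency_loss(sal_content, sal_composite))
```

In a full training loop, `sal_content` and `sal_composite` would come from a fixed saliency detection network applied to the content image and the current output of the transfer network, so minimizing the combined loss pushes the composite image toward a salient region consistent with the content image.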

Key words: style transfer, image transformation, salient region preservation, convolutional neural network, saliency detection
