
Journal of Graphics ›› 2023, Vol. 44 ›› Issue (2): 260-270.DOI: 10.11996/JG.j.2095-302X.2023020260


Saliency detection-guided image data augmentation

ZENG Wu1, ZHU Heng-liang1, XING Shu-li1, LIN Jiang-hong1, MAO Guo-jun1,2

  1. School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou, Fujian 350118, China
    2. Fujian Key Laboratory of Big Data Mining and Applications, Fuzhou, Fujian 350118, China
  • Received: 2022-06-02  Accepted: 2022-08-21  Online: 2023-04-30  Published: 2023-05-01
  • Contact: MAO Guo-jun (1966-), professor, Ph.D. His main research interests cover data mining, big data, and distributed computing. E-mail: 19662090@fjut.edu.cn
  • About author: ZENG Wu (1997-), master student. His main research interests cover image data augmentation and few-shot learning. E-mail: 2201905122@smail.fjut.edu.cn
  • Supported by:
    National Natural Science Foundation of China(61773415);National Key Research and Development Project(2019YFD0900805)

Abstract:

Most data augmentation methods select cropped regions largely at random, and they tend to overemphasize the feature-salient regions of an image while neglecting the learning of its less discriminative regions. To strengthen the learning of these less discriminative regions, the SaliencyOut and SaliencyCutMix methods were proposed. Specifically, SaliencyOut first employed saliency detection to generate a saliency map of the original image, then located a feature-salient area in the saliency map and removed the pixels in this area. SaliencyCutMix, in turn, removed the cropped area of the original image and replaced it with the corresponding area of a patch image. By occluding or replacing some feature-salient areas of the image, the model was guided to learn other features of the target object. In addition, to address the loss of too many salient feature regions when the cropping area is large, an adaptive scaling factor was incorporated into the selection of the cropping boundary, dynamically adjusting the size of the cropping boundary according to its initial size. Experimental results on four datasets showed that the proposed methods could significantly improve the classification performance and anti-interference ability of the model, surpassing most state-of-the-art methods. In particular, on the Mini-ImageNet dataset with the ResNet-34 network, SaliencyCutMix improved Top-1 accuracy by 1.18% over CutMix.
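To make the described procedure concrete, the following is a minimal sketch in Python/NumPy of the idea outlined in the abstract: a crop box is placed around the saliency peak, then either erased (SaliencyOut-style) or replaced with the corresponding region of a second image (SaliencyCutMix-style), with a scaling factor that shrinks large boxes. The helper name `salient_bbox`, the mean-fill value, and the exact scaling formula are assumptions for illustration; the paper's precise formulations are not given in the abstract, and any off-the-shelf saliency detector can supply the saliency map.

```python
import numpy as np

def salient_bbox(saliency, lam, min_scale=0.5):
    """Pick a crop box centred on the saliency peak.

    `saliency` is an HxW saliency map; `lam` controls the nominal crop
    area, as in CutMix. The adaptive scaling factor (hypothetical
    parameterisation) shrinks the box when the initial box would be
    large, so that not all salient evidence is removed.
    """
    h, w = saliency.shape
    cut_ratio = np.sqrt(1.0 - lam)                 # nominal side ratio, CutMix-style
    scale = 1.0 - (1.0 - min_scale) * cut_ratio    # larger boxes are shrunk more
    cut_h, cut_w = int(h * cut_ratio * scale), int(w * cut_ratio * scale)

    # centre the box on the most salient pixel
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    y1, y2 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x1, x2 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    return y1, y2, x1, x2

def saliency_out(img, saliency, lam):
    """SaliencyOut-style: erase the salient crop (filled here with the image mean)."""
    y1, y2, x1, x2 = salient_bbox(saliency, lam)
    out = img.copy()
    out[y1:y2, x1:x2] = img.mean(axis=(0, 1))
    return out

def saliency_cutmix(img, patch_img, saliency, lam):
    """SaliencyCutMix-style: replace the salient crop with the same region
    of a second ("patch") image; return the adjusted label weight."""
    y1, y2, x1, x2 = salient_bbox(saliency, lam)
    out = img.copy()
    out[y1:y2, x1:x2] = patch_img[y1:y2, x1:x2]
    # recompute lambda from the actual pasted area, as CutMix does
    lam_adj = 1.0 - ((y2 - y1) * (x2 - x1)) / float(img.shape[0] * img.shape[1])
    return out, lam_adj
```

In training, `lam` would typically be drawn from a Beta distribution per batch, and for SaliencyCutMix the loss would mix the two image labels weighted by `lam_adj`, mirroring standard CutMix practice.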

Key words: data augmentation, image classification, deep learning, saliency detection, image mixing
