Journal of Graphics ›› 2023, Vol. 44 ›› Issue (4): 739-746.DOI: 10.11996/JG.j.2095-302X.2023040739

Image feature matching based on repeatability and specificity constraints

GUO Yin-hong, WANG Li-chun, LI Shuang

  1. Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
  • Received: 2022-11-28 Accepted: 2023-04-06 Online: 2023-08-31 Published: 2023-08-16
  • Contact: WANG Li-chun (1975-), professor, Ph.D. Her main research interests cover computer vision and human-computer interaction, etc. E-mail: wanglc@bjut.edu.cn
  • About author:

    GUO Yin-hong (1997-), master's student. His main research interest covers computer vision. E-mail: gyh20200216@163.com

  • Supported by:
    Science and Technology Innovation 2030 - "New Generation of Artificial Intelligence" Major Project (2021ZD0111902); National Natural Science Foundation of China (U21B2038); National Natural Science Foundation of China (61876012); National Natural Science Foundation of China (62172022); Foundation for China University Industry-University Research Innovation (2021JQR023)

Abstract:

Image feature matching determines whether a pair of pixels match by comparing their distance in the feature space. How to learn robust pixel features is therefore one of the primary concerns in deep-learning-based image feature matching. In addition, the learning of pixel feature representations is affected by the quality of the source images. To learn more robust pixel feature representations, the proposed method improved the image feature matching network LoFTR. For the coarse-granularity feature reconstruction branch, a specificity constraint was defined to maximize the feature distance between pixels within the same image, making different pixels strongly distinguishable. A repeatability constraint was defined to minimize the feature distance between matched pixels from different images, making matched pixels across images strongly similar and thus enhancing matching accuracy. Additionally, an image reconstruction layer was incorporated into the decoding phase of the backbone, and an image reconstruction loss was defined to constrain the encoder to learn more robust feature representations. Experimental results on the indoor dataset ScanNet and the outdoor dataset MegaDepth demonstrate the effectiveness of the proposed method. Furthermore, experiments on images of varying quality verify that the proposed method adapts better to image feature matching when the source images differ in quality.
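The two constraints described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names (`specificity_loss`, `repeatability_loss`), the use of plain Euclidean distance, and the simple sign convention (negating the within-image distance so that minimizing the loss maximizes the distance) are all assumptions made for illustration; the actual losses in the method may be weighted or normalized differently.

```python
import numpy as np

def pairwise_dist(F):
    """Euclidean distance matrix between all pixel features F of shape (N, D)."""
    sq = np.sum(F ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * F @ F.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding

def specificity_loss(F):
    """Specificity: maximize feature distance between different pixels of the
    SAME image, i.e. minimize the negated mean off-diagonal distance."""
    D = pairwise_dist(F)
    n = D.shape[0]
    off_diag = D[~np.eye(n, dtype=bool)]
    return -off_diag.mean()

def repeatability_loss(FA, FB, matches):
    """Repeatability: minimize feature distance between MATCHED pixels from
    two different images. `matches` is an (M, 2) array of index pairs."""
    i, j = matches[:, 0], matches[:, 1]
    return np.linalg.norm(FA[i] - FB[j], axis=1).mean()
```

Under this reading, training drives matched pixels toward identical features (repeatability loss toward zero) while pushing distinct pixels within an image apart (specificity loss toward more negative values).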

Key words: deep learning, image feature matching, repeatability, specificity, image reconstruction loss