
Journal of Graphics, 2023, Vol. 44, Issue (5): 861-867. DOI: 10.11996/JG.j.2095-302X.2023050861

• Image Processing and Computer Vision •

Reference-based Transformer texture transfer for depth image super-resolution reconstruction

YANG Chen-cheng1, DONG Xiu-cheng1,2, HOU Bing1, ZHANG Dang-cheng1, XIANG Xian-ming1, FENG Qi-ming1

  1. School of Electrical Engineering and Electronic Information, Xihua University, Chengdu, Sichuan 611730, China
  2. Jinjiang College, Sichuan University, Meishan, Sichuan 620860, China
  • Received: 2023-01-31 Accepted: 2023-05-08 Online: 2023-10-31 Published: 2023-10-31
  • Contact: DONG Xiu-cheng (1963-), professor, M.S. His main research interests include intelligent information processing and computer vision. E-mail: dxc136@163.com
  • About author: YANG Chen-cheng (1998-), master's student. Her main research interests include image processing and deep learning. E-mail: yangchencheng2017@163.com
  • Supported by:
    National Natural Science Foundation of China (11872069); Central Government Funds for Guiding Local Scientific and Technological Development of Sichuan Province (2021ZYD0034); Siwei Hi-tech-Xihua University Industry-University-Research Joint Laboratory (2016-YF04-00044-JH)

Abstract:

Depth images contain scene depth information and exhibit strong robustness to variations in color and lighting, making them widely used in fields such as stereo vision. However, due to the limited performance of depth sensors and the relatively complex imaging environment, it is difficult to directly obtain high-quality, high-resolution depth images. To address the problem of unclear edge details in reconstructed depth images, a reference-based Transformer texture transfer method for depth image super-resolution reconstruction was proposed. For feature patches of the preprocessed low-resolution depth image (LR_D) and the reference image (Ref), similarity was computed using the normalized inner product; a Transformer was incorporated to estimate the confidence of the most similar positions, and an attention mechanism was employed to transfer the corresponding textures. Finally, the transferred textures were fused with the low-resolution depth image features to improve the clarity of image details and further refine the reconstruction. The experimental results demonstrated that, compared with other methods, the proposed method achieved higher structural similarity (SSIM) values, and that both subjective visual quality and objective evaluation metrics were significantly improved, indicating good reconstruction performance.
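The following is a minimal PyTorch sketch (not the authors' released code) of the patch-similarity and texture-transfer step described above. It assumes feature maps lrd_feat and ref_feat have already been extracted from LR_D and Ref by a shared backbone; the function name texture_transfer, the 3x3 patch size, and the residual fusion at the end are illustrative assumptions.

import torch
import torch.nn.functional as F

def texture_transfer(lrd_feat, ref_feat, patch_size=3):
    # lrd_feat: (B, C, H, W) LR_D feature map; ref_feat: (B, C, Hr, Wr) Ref feature map,
    # both assumed to come from the same feature extractor so that channels match.
    B, C, H, W = lrd_feat.shape
    pad = patch_size // 2
    # Unfold both feature maps into overlapping patches: (B, C*k*k, num_positions)
    q = F.unfold(lrd_feat, kernel_size=patch_size, padding=pad)
    k = F.unfold(ref_feat, kernel_size=patch_size, padding=pad)
    # Normalized inner product (cosine similarity) between every LR_D patch and every Ref patch
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    relevance = torch.bmm(q.transpose(1, 2), k)          # (B, H*W, Hr*Wr)
    # Hard attention: index of the most similar Ref patch per LR_D position,
    # with its similarity value kept as a per-position confidence
    confidence, index = relevance.max(dim=2)              # both (B, H*W)
    # Gather the selected Ref patches and fold them back into a feature map
    v = F.unfold(ref_feat, kernel_size=patch_size, padding=pad)
    v_selected = torch.gather(v, 2, index.unsqueeze(1).expand(-1, v.size(1), -1))
    transferred = F.fold(v_selected, output_size=(H, W), kernel_size=patch_size, padding=pad)
    # Divide out the overlap counts introduced by fold so patches are averaged, not summed
    overlap = F.fold(torch.ones_like(v_selected), output_size=(H, W), kernel_size=patch_size, padding=pad)
    transferred = transferred / overlap
    # Soft attention: fuse the transferred texture with the LR_D features, weighted by confidence
    confidence = confidence.view(B, 1, H, W)
    return lrd_feat + confidence * transferred

In this reading, the argmax index plays the role of hard attention (which Ref patch to transfer) and the retained similarity value plays the role of soft attention (how much to trust it); both are derived from the same normalized inner product over feature patches.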

Key words: deep learning, super-resolution reconstruction, depth image, Transformer, attention mechanism

CLC number: