Journal of Graphics ›› 2023, Vol. 44 ›› Issue (5): 861-867. DOI: 10.11996/JG.j.2095-302X.2023050861
• Image Processing and Computer Vision •
Reference-based transformer texture migration for depth image super-resolution reconstruction

YANG Chen-cheng1, DONG Xiu-cheng1,2, HOU Bing1, ZHANG Dang-cheng1, XIANG Xian-ming1, FENG Qi-ming1
Received: 2023-01-31
Accepted: 2023-05-08
Online: 2023-10-31
Published: 2023-10-31
Contact: DONG Xiu-cheng (1963-), professor, master. His main research interests cover intelligent information processing, computer vision, etc.
About author: YANG Chen-cheng (1998-), master student. Her main research interests cover image processing and deep learning. E-mail: yangchencheng2017@163.com
YANG Chen-cheng, DONG Xiu-cheng, HOU Bing, ZHANG Dang-cheng, XIANG Xian-ming, FENG Qi-ming. Reference-based transformer texture migration for depth image super-resolution reconstruction[J]. Journal of Graphics, 2023, 44(5): 861-867.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2023050861
| Item | Configuration |
|---|---|
| Operating system | Ubuntu 20.04 |
| Programming language | Python 3.7.9 |
| Deep learning framework | PyTorch 1.8.0 |
| CPU | Intel Xeon Gold 6226 |
| GPU | NVIDIA RTX 3090 |
| CUDA | CUDA 11.1 |
| IDE | PyCharm 2021.2.1 |
Table 1 Experimental software and hardware configuration
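For readers reproducing the setup, the following is a minimal sketch (not from the paper) that checks whether a local Python/PyTorch/CUDA installation matches the configuration listed in Table 1; the expected values in the comments simply mirror the table.

```python
# Environment check against Table 1 (Python 3.7.x, PyTorch 1.8.0, CUDA 11.1, RTX 3090).
import sys
import torch

print("Python:", sys.version.split()[0])            # expected 3.7.x
print("PyTorch:", torch.__version__)                 # expected 1.8.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)       # expected 11.1
    print("GPU:", torch.cuda.get_device_name(0))     # expected NVIDIA RTX 3090
```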
Fig. 5 Comparison of reconstruction effects of different algorithms on Mid3 when the up-sampling factor is 4 ((a) High resolution depth images; (b) Bicubic interpolation results; (c) SRCNN algorithm results; (d) ESPCN algorithm results; (e) Algorithm results in this article; (f) Partial enlarged view of figure (a); (g) Partial enlarged view of figure (b); (h) Partial enlarged view of figure (c); (i) Partial enlarged view of figure (d); (j) Partial enlarged view of figure (e))
Fig. 6 Comparison of reconstruction effects of different algorithms on Mid3 when the up-sampling factor is 2 ((a) High resolution depth images; (b) Bicubic interpolation results; (c) SRCNN algorithm results; (d) ESPCN algorithm results; (e) Algorithm results in this article; (f) Partial enlarged view of figure (a); (g) Partial enlarged view of figure (b); (h) Partial enlarged view of figure (c); (i) Partial enlarged view of figure (d); (j) Partial enlarged view of figure (e))
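The bicubic baseline shown in panels (b) and (g) of Figs. 5 and 6 corresponds to standard bicubic interpolation. The sketch below shows one way to generate such a baseline with PyTorch; the file names and the ×4 scale factor are illustrative assumptions, not the paper's actual data paths or pipeline.

```python
# Hedged sketch: bicubic up-sampling of a low-resolution depth map with PyTorch.
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

def bicubic_upsample(lr_depth: np.ndarray, scale: int) -> np.ndarray:
    """Up-sample a single-channel depth map by `scale` using bicubic interpolation."""
    x = torch.from_numpy(lr_depth).float().unsqueeze(0).unsqueeze(0)   # (1, 1, H, W)
    y = F.interpolate(x, scale_factor=scale, mode="bicubic", align_corners=False)
    return y.squeeze().clamp(0, 255).numpy()

# File names below are hypothetical placeholders.
lr = np.array(Image.open("mid3_art_lr.png").convert("L"), dtype=np.float32)
sr = bicubic_upsample(lr, scale=4)                                      # x4 as in Fig. 5
Image.fromarray(sr.astype(np.uint8)).save("mid3_art_bicubic_x4.png")
```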
| Algorithm | ×2 Art | ×2 Books | ×2 Moebius | ×3 Art | ×3 Books | ×3 Moebius | ×4 Art | ×4 Books | ×4 Moebius |
|---|---|---|---|---|---|---|---|---|---|
| Bicubic | 0.9898 | 0.9961 | 0.9973 | 0.9768 | 0.9934 | 0.9939 | 0.9674 | 0.9907 | 0.9892 |
| SRCNN | 0.9898 | 0.9961 | 0.9976 | 0.9810 | 0.9939 | 0.9936 | 0.9727 | 0.9911 | 0.9894 |
| ESPCN | 0.9899 | 0.9961 | 0.9977 | 0.9759 | 0.9940 | 0.9943 | 0.9740 | 0.9912 | 0.9897 |
| Ours | 0.9998 | 0.9999 | 0.9999 | 0.9995 | 0.9997 | 0.9996 | 0.9977 | 0.9995 | 0.9993 |
Table 2 Quantitative analysis of reconstruction results of different algorithms on Mid3 (SSIM)
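The scores in Table 2 are per-image SSIM values between each reconstruction and the high-resolution ground truth. A minimal sketch of how such scores can be computed with scikit-image follows; the 8-bit gray-scale assumption, the data range, and the file names are assumptions rather than the paper's exact evaluation code.

```python
# Hedged sketch: SSIM between a ground-truth depth map and a reconstructed one.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def ssim_score(hr_path: str, sr_path: str) -> float:
    """SSIM of two single-channel 8-bit images loaded from disk."""
    hr = np.array(Image.open(hr_path).convert("L"), dtype=np.float64)
    sr = np.array(Image.open(sr_path).convert("L"), dtype=np.float64)
    return structural_similarity(hr, sr, data_range=255)

# Placeholder file names, e.g. the Art scene at x4.
print(ssim_score("art_hr.png", "art_bicubic_x4.png"))
```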
[1] | HORNÁCEK M, RHEMANN C, GELAUTZ M, et al. Depth super resolution by rigid body self-similarity in 3D[C]// 2013 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2013: 1123-1130. |
[2] | LEI J J, LI L L, YUE H J, et al. Depth map super-resolution considering view synthesis quality[J]. IEEE Transactions on Image Processing, 2017, 26(4): 1732-1745. |
[3] | XIE J, FERIS R S, SUN M T. Edge-guided single depth image super resolution[C]// 2014 IEEE International Conference on Image Processing. New York: IEEE Press, 2016: 3773-3777. |
[4] | LIU W, CHEN X G, YANG J, et al. Robust color guided depth map restoration[J]. IEEE Transactions on Image Processing, 2017, 26(1): 315-327. |
[5] | LI T, DONG X C, ZHANG X H. A survey of super-resolution reconstruction technologies for depth image[J]. Journal of Xihua University: Natural Science Edition, 2020, 39(4): 45-53. (in Chinese) |
[6] | MAC AODHA O, CAMPBELL N D F, NAIR A, et al. Patch based synthesis for single depth image super-resolution[C]// European Conference on Computer Vision. Heidelberg: Springer, 2012: 71-84. |
[7] | DONG C, LOY C C, HE K M, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307. |
[8] | LI Y J, HUANG J B, AHUJA N, et al. Deep joint image filtering[C]// European Conference on Computer Vision. Cham: Springer International Publishing, 2016: 154-169. |
[9] | HUI T W, LOY C C, TANG X O. Depth map super-resolution by deep multi-scale guidance[C]// European Conference on Computer Vision. Cham: Springer International Publishing, 2016: 353-369. |
[10] | ZHAN Y L, CHI J, YE Y N, et al. Super resolution image reconstruction based on image similarity and feature combination[J]. Journal of Computer-Aided Design & Computer Graphics, 2019, 31(6): 1018-1029. |
[11] | ZHENG H T, JI M Q, WANG H Q, et al. CrossNet: an end-to-end reference-based super resolution network using cross-scale warping[C]// European Conference on Computer Vision. Cham: Springer International Publishing, 2018: 88-104. |
[12] | YANG F Z, YANG H, FU J L, et al. Learning texture transformer network for image super-resolution[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 5790-5799. |
[13] | SUN B L, YE X C, LI B P, et al. Learning scene structure guidance via cross-task knowledge transfer for single depth super-resolution[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 7788-7797. |
[14] | ZHENG H T, JI M Q, HAN L, et al. Learning scene structure guidance via cross-task knowledge transfer for single depth super-resolution[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 7788-7797. |
[15] | YUE H J, SUN X Y, YANG J Y, et al. Landmark image super-resolution by retrieving web images[J]. IEEE Transactions on Image Processing, 2013, 22(12): 4865-4878. |
[16] | ZHU Y, ZHANG Y N, YUILLE A L. Single image super-resolution using deformable patches[C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2014: 2917-2924. |
[17] | XIA B, TIAN Y P, HANG Y C, et al. Coarse-to-fine embedded PatchMatch and multi-scale dynamic aggregation for reference-based super-resolution[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36(3): 2768-2776. |
[18] | ZHANG Z F, WANG Z W, LIN Z, et al. Image super-resolution by neural texture transfer[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 7974-7983. |
[19] | GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[EB/OL]. [2022-12-06]. https://arxiv.org/abs/1704.00028. |
[20] | JOHNSON J, ALAHI A, LI F F. Perceptual losses for real-time style transfer and super-resolution[C]// European Conference on Computer Vision. Cham: Springer International Publishing, 2016: 694-711. |
[21] | LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 105-114. |
[22] | SAJJADI M S M, SCHÖLKOPF B, HIRSCH M. EnhanceNet: single image super-resolution through automated texture synthesis[C]// 2017 IEEE International Conference on Computer Vision. New York: IEEE Press, 2017: 4501-4510. |
[23] | ZHANG Z F, WANG Z W, LIN Z, et al. Image super-resolution by neural texture transfer[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 7974-7983. |
[24] | FAN P P, DONG X C, LI T, et al. Super-resolution reconstruction of depth map based on non-local means constraint[J]. Journal of Computer-Aided Design & Computer Graphics, 2020, 32(10): 1671-1678. (in Chinese) |
[25] | LOWE D G. Object recognition from local scale-invariant features[C]// The 7th IEEE International Conference on Computer Vision. New York: IEEE Press, 2002: 1150-1157. |
[26] | SHI W Z, CABALLERO J, HUSZÁR F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 1874-1883. |