Journal of Graphics ›› 2023, Vol. 44 ›› Issue (6): 1202-1211.DOI: 10.11996/JG.j.2095-302X.2023061202
ZHANG Chi1,2, ZHANG Xiao-juan1,2, ZHAO Yang3, YANG Fan1,2

Received: 2023-06-20
Accepted: 2023-09-22
Online: 2023-12-31
Published: 2023-12-17
Contact: ZHANG Xiao-juan (1968-), professor, master. Her main research interests cover the digital protection of intangible cultural heritage and computer vision.
About author: ZHANG Chi (1996-), master student. His main research interests cover the digital protection of intangible cultural heritage and computer vision. E-mail: 756629946@qq.com
ZHANG Chi, ZHANG Xiao-juan, ZHAO Yang, YANG Fan. Palette-based semi-interactive low-light Thangka images enhancement[J]. Journal of Graphics, 2023, 44(6): 1202-1211.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2023061202
| Method | ExDark NIQE | ExDark PIQE | LIME NIQE | LIME PIQE | Unreferenced Thangka NIQE | Unreferenced Thangka PIQE |
|---|---|---|---|---|---|---|
| KinD | 10.8359 | 11.9663 | 16.2568 | 16.5831 | 22.5686 | 30.2185 |
| EnlightenGAN | 10.3201 | 10.3656 | 15.2398 | 10.4945 | 20.7609 | 16.6252 |
| Zero-DCE | 11.3222 | 12.7895 | 13.4074 | 11.6407 | 19.4014 | 12.3691 |
| RRDNet | 9.9615 | 11.7506 | 13.7637 | 10.7962 | 18.3516 | 12.3377 |
| RUAS | 9.6835 | 13.3708 | 12.7190 | 14.0556 | 18.5332 | 17.2511 |
| SCI | 9.2367 | 13.0425 | 13.0278 | 12.6379 | 18.9601 | 11.9386 |
| RCUNet | 9.6637 | 13.4681 | 12.4573 | 11.5470 | 17.5695 | 9.1496 |

Table 1 NIQE and PIQE results of different methods on the ExDark, LIME, and unreferenced Thangka datasets
| Method | PSNR (dB) | SSIM |
|---|---|---|
| KinD | 16.1974 | 0.8403 |
| EnlightenGAN | 17.5855 | 0.8655 |
| Zero-DCE | 18.2382 | 0.8520 |
| RRDNet | 17.8925 | 0.8464 |
| RUAS | 15.6795 | 0.7457 |
| SCI | 16.6691 | 0.8435 |
| RCUNet | 17.8134 | 0.8430 |
| P-RCUNet | 19.7813 | 0.8616 |

Table 2 PSNR and SSIM results of different methods on the referenced Thangka dataset
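The full-reference metrics in Table 2 are standard: PSNR measures pixel-wise fidelity in decibels, and SSIM measures structural similarity in [0, 1]. The sketch below is a minimal illustration of how such numbers are computed, not the authors' evaluation code; in particular, the SSIM here is a simplified global (single-window) variant, whereas the usual SSIM averages over local Gaussian windows.

```python
# Minimal full-reference metrics: PSNR and a global (single-window) SSIM.
# Inputs are flat lists of pixel intensities in [0, max_val].
import math

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Global SSIM; the standard metric averages this over local windows."""
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

# Example: a uniform offset of 16 gray levels gives MSE = 256.
dark = [0, 32, 64, 96]
bright = [v + 16 for v in dark]
print(round(psnr(dark, bright), 2))  # 10*log10(255^2/256), about 24.05 dB
print(ssim_global(dark, dark))       # identical signals give SSIM = 1.0
```

Higher is better for both metrics, which is why P-RCUNet's 19.78 dB PSNR in Table 2 marks the strongest fidelity to the reference images.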
Fig. 6 Visual comparison results of referenced Thangka datasets ((a) Input; (b) KinD; (c) EnlightenGAN; (d) Zero-DCE; (e) RRDNet; (f) RUAS; (g) SCI; (h) RCUNet; (i) P-RCUNet)
Fig. 7 Example images of semi-interactive color correction based on color palettes ((a) Input; (b) Ground Truth; (c) RCUNet and corresponding color palette; (d) P-RCUNet and corresponding color palette)
Fig. 8 Visualization results after splicing (local) ((a) Input; (b) KinD; (c) EnlightenGAN; (d) Zero-DCE; (e) RRDNet; (f) RUAS; (g) SCI; (h) RCUNet; (i) P-RCUNet)
| Method | Group 1 | Group 2 | Group 3 | Weighted average |
|---|---|---|---|---|
| KinD | 3.22 | 3.19 | 3.12 | 3.151 |
| EnlightenGAN | 3.78 | 3.82 | 3.92 | 3.876 |
| Zero-DCE | 3.10 | 2.98 | 3.05 | 3.034 |
| RRDNet | 3.82 | 3.91 | 4.04 | 3.979 |
| RUAS | 3.31 | 3.27 | 3.19 | 3.226 |
| SCI | 3.24 | 3.16 | 3.17 | 3.174 |
| RCUNet | 3.82 | 3.99 | 3.98 | 3.967 |
| P-RCUNet | 3.92 | 4.06 | 4.11 | 4.076 |

Table 3 Subjective evaluation of Thangka low-light enhancement results
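The weighted averages in Table 3 are consistent with weights of 0.1, 0.3, and 0.6 for experiment groups 1-3; these weights are inferred from the numbers, not stated in this excerpt. A quick check:

```python
# Reproduce the "Weighted average" column of Table 3 from the three
# experiment-group scores. The weights are an inference that fits every
# row of the table, not a value given in the source.
WEIGHTS = (0.1, 0.3, 0.6)

scores = {
    "KinD":         (3.22, 3.19, 3.12),
    "EnlightenGAN": (3.78, 3.82, 3.92),
    "Zero-DCE":     (3.10, 2.98, 3.05),
    "RRDNet":       (3.82, 3.91, 4.04),
    "RUAS":         (3.31, 3.27, 3.19),
    "SCI":          (3.24, 3.16, 3.17),
    "RCUNet":       (3.82, 3.99, 3.98),
    "P-RCUNet":     (3.92, 4.06, 4.11),
}

def weighted_avg(groups, weights=WEIGHTS):
    """Weighted mean of the group scores, rounded as in Table 3."""
    return round(sum(w * s for w, s in zip(weights, groups)), 3)

for name, g in scores.items():
    print(f"{name}: {weighted_avg(g)}")
```

Under these weights every row of Table 3 is reproduced exactly, e.g. KinD gives 0.1*3.22 + 0.3*3.19 + 0.6*3.12 = 3.151.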
| Method | NIQE | PIQE |
|---|---|---|
| RCUNet | 17.5695 | 9.1496 |
| w/o CBAM | 21.2035 | 10.5739 |
| w/o Lexp | 21.4900 | 11.7506 |
| w/o Ltv | 21.2531 | 10.8173 |
| w/o Lspa | 20.3587 | 9.8957 |
| w/o Lcolor | 20.9974 | 10.2822 |

Table 4 Ablation results on the unreferenced Thangka dataset
| Method | PSNR (dB) | SSIM |
|---|---|---|
| RCUNet | 17.8134 | 0.8430 |
| w/o CBAM | 15.6324 | 0.7607 |
| w/o Lexp | 14.8577 | 0.7259 |
| w/o Ltv | 15.7097 | 0.7600 |
| w/o Lspa | 14.9148 | 0.7406 |
| w/o Lcolor | 15.9207 | 0.7711 |

Table 5 Ablation results on the referenced Thangka dataset
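Tables 4 and 5 ablate four loss terms (Lexp, Ltv, Lspa, Lcolor) alongside the CBAM attention module. In zero-reference enhancers of this family (e.g. Zero-DCE), such terms are typically combined as a weighted sum; the weights below are hypothetical, since this excerpt does not state them:

```latex
L_{total} = \lambda_{exp} L_{exp} + \lambda_{tv} L_{tv}
          + \lambda_{spa} L_{spa} + \lambda_{color} L_{color}
```

Removing any single term degrades both the no-reference (Table 4) and full-reference (Table 5) scores, with the exposure loss Lexp contributing the largest drop.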
[1] LAND E H. An alternative technique for the computation of the designator in the retinex theory of color vision[J]. Proceedings of the National Academy of Sciences of the United States of America, 1986, 83(10): 3078-3080.
[2] LORE K G, AKINTAYO A, SARKAR S. LLNet: a deep autoencoder approach to natural low-light image enhancement[J]. Pattern Recognition, 2017, 61: 650-662.
[3] JIANG Y F, GONG X Y, LIU D, et al. EnlightenGAN: deep light enhancement without paired supervision[J]. IEEE Transactions on Image Processing, 2021, 30: 2340-2349.
[4] GUO C L, LI C Y, GUO J C, et al. Zero-reference deep curve estimation for low-light image enhancement[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 1777-1786.
[5] WEI C, WANG W J, YANG W H, et al. Deep retinex decomposition for low-light enhancement[EB/OL]. [2023-02-13]. https://arxiv.org/abs/1808.04560.pdf.
[6] ZHANG Y H, ZHANG J W, GUO X J. Kindling the darkness: a practical low-light image enhancer[C]// The 27th ACM International Conference on Multimedia. New York: ACM, 2019: 1632-1640.
[7] ZHU A Q, ZHANG L, SHEN Y, et al. Zero-shot restoration of underexposed images via robust retinex decomposition[C]// 2020 IEEE International Conference on Multimedia and Expo. New York: IEEE Press, 2020: 1-6.
[8] LIU R S, MA L, ZHANG J A, et al. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 10556-10565.
[9] MA L, MA T Y, LIU R S, et al. Toward fast, flexible, and robust low-light image enhancement[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5627-5636.
[10] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[M]// Computer Vision - ECCV 2018. Cham: Springer International Publishing, 2018: 3-19.
[11] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[M]// Lecture Notes in Computer Science. Cham: Springer International Publishing, 2015: 234-241.
[12] LI M D, LIU J Y, YANG W H, et al. Structure-revealing low-light image enhancement via robust retinex model[J]. IEEE Transactions on Image Processing, 2018, 27(6): 2828-2841.
[13] CHANG H W, FRIED O, LIU Y M, et al. Palette-based photo recoloring[J]. ACM Transactions on Graphics, 2015, 34(4): 139:1-139:11.
[14] LI C Y, GUO C L, HAN L H, et al. Low-light image and video enhancement using deep learning: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(12): 9396-9416.
[15] LEE C, LEE C, KIM C S. Contrast enhancement based on layered difference representation of 2D histograms[J]. IEEE Transactions on Image Processing, 2013, 22(12): 5372-5384.
[16] LOH Y P, CHAN C S. Getting to know low-light images with the exclusively dark dataset[J]. Computer Vision and Image Understanding, 2019, 178: 30-42.
[17] GUO X J, LI Y, LING H B. LIME: low-light image enhancement via illumination map estimation[J]. IEEE Transactions on Image Processing, 2017, 26(2): 982-993.
[18] MITTAL A, SOUNDARARAJAN R, BOVIK A C. Making a “completely blind” image quality analyzer[J]. IEEE Signal Processing Letters, 2013, 20(3): 209-212.
[19] VENKATANATH N, PRANEETH D, BH M C, et al. Blind image quality evaluation using perception based features[C]// The 21st National Conference on Communications. New York: IEEE Press, 2015: 1-6.