Journal of Graphics ›› 2025, Vol. 46 ›› Issue (2): 300-311. DOI: 10.11996/JG.j.2095-302X.2025020300
• Image Processing and Computer Vision •
Received: 2024-07-29
Accepted: 2024-12-04
Online: 2025-04-30
Published: 2025-04-24
Contact: LIU Liqun
About author: PAN Shuyan (2002-), master student. His main research interests include deep learning and digital image processing. E-mail: pansy@st.gsau.edu.cn
PAN Shuyan, LIU Liqun. MSFAFuse: SAR and optical image fusion model based on multi-scale feature information and attention mechanism[J]. Journal of Graphics, 2025, 46(2): 300-311.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2025020300
Fig. 3 Feature enhancement modules for salient information in heterogeneous images ((a) Structural feature enhancement module based on the Laplacian of Gaussian operator; (b) Salient region feature enhancement module)
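Panel (a) of Fig. 3 is built around a Laplacian of Gaussian (LoG) filter that emphasizes structural edges in the feature maps before fusion. The snippet below is only a minimal sketch of that idea: the 5×5 kernel size, the depthwise filtering, and the residual-style merge are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn.functional as F

def log_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a Laplacian-of-Gaussian (LoG) kernel of shape (1, 1, size, size)."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx ** 2 + yy ** 2
    log = ((r2 - 2 * sigma ** 2) / sigma ** 4) * torch.exp(-r2 / (2 * sigma ** 2))
    log = log - log.mean()                  # zero-mean so flat regions respond with 0
    return log.view(1, 1, size, size)

def structural_enhance(feat: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Add a LoG edge response back onto each channel (hypothetical merge rule)."""
    b, c, h, w = feat.shape
    k = log_kernel().to(feat.device).repeat(c, 1, 1, 1)   # one kernel per channel
    edges = F.conv2d(feat, k, padding=2, groups=c)        # depthwise LoG filtering
    return feat + alpha * edges                           # residual-style enhancement

x = torch.randn(1, 64, 64, 64)   # dummy SAR/optical feature map
print(structural_enhance(x).shape)   # torch.Size([1, 64, 64, 64])
```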
Table 1 Dataset introduction

| Dataset | SAR source | Optical source | Resolution/m | Size/pixel | Total pairs | Training pairs | Test pairs |
|---|---|---|---|---|---|---|---|
| OS dataset | Chinese Gaofen-3 C-band sensor, spotlight mode | Google Earth platform | 1 | 256×256 | 10 692 | 10 640 | 52 |
| QXS-SAROPT | Gaofen-3 SAR imagery | Google Earth multispectral imagery | 1 | 256×256 | 20 000 | 19 984 | 16 |
Table 2 Results of different fusion methods

| Dataset | Metric | RFN-Nest | NestFuse | CrossFuse 1 | U2Fusion | CrossFuse 2 | MSFAFuse (Ours) |
|---|---|---|---|---|---|---|---|
| OS dataset | SD | 31.313 | 31.664 | 32.611 | 35.250 | 29.349 | 40.077 |
| | EN | 6.914 | 6.865 | 6.931 | 7.029 | 6.723 | 7.141 |
| | MI | 1.684 | 2.282 | 1.737 | 1.754 | 2.473 | 5.944 |
| | AG | 9.583 | 11.170 | 12.587 | 11.716 | 5.154 | 8.424 |
| | VIF | 0.338 | 0.401 | 0.352 | 0.340 | 0.497 | 1.042 |
| | SSIM | 0.495 | 0.467 | 0.443 | 0.504 | 0.504 | 0.590 |
| | MS-SSIM | 0.629 | 0.600 | 0.561 | 0.632 | 0.609 | 0.606 |
| | FMI_pixel | 0.791 | 0.817 | 0.810 | 0.797 | 0.796 | 0.849 |
| | FMI_dct | 0.408 | 0.454 | 0.475 | 0.388 | 0.274 | 0.494 |
| | FMI_w | 0.406 | 0.468 | 0.475 | 0.420 | 0.307 | 0.505 |
| QXS-SAROPT | SD | 29.054 | 30.474 | 33.768 | 33.454 | 33.073 | 42.750 |
| | EN | 6.500 | 6.537 | 6.779 | 6.647 | 6.663 | 6.907 |
| | MI | 2.066 | 2.355 | 2.402 | 2.142 | 2.250 | 6.656 |
| | AG | 8.444 | 8.337 | 9.701 | 8.957 | 10.867 | 10.335 |
| | VIF | 0.375 | 0.410 | 0.418 | 0.358 | 0.424 | 1.025 |
| | SSIM | 0.480 | 0.501 | 0.513 | 0.491 | 0.479 | 0.589 |
| | MS-SSIM | 0.562 | 0.572 | 0.555 | 0.582 | 0.537 | 0.598 |
| | FMI_pixel | 0.814 | 0.819 | 0.821 | 0.808 | 0.801 | 0.866 |
| | FMI_dct | 0.388 | 0.411 | 0.402 | 0.355 | 0.331 | 0.535 |
| | FMI_w | 0.406 | 0.429 | 0.415 | 0.391 | 0.341 | 0.538 |
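Table 2 reports standard image-fusion metrics. The sketch below computes three of them (SD, EN, AG) using their common textbook definitions as a reference point; the exact normalizations used in the paper, and the remaining metrics (MI, VIF, SSIM, MS-SSIM, FMI), are not reproduced here.

```python
import numpy as np

def std_dev(img: np.ndarray) -> float:
    """SD: standard deviation of the fused image (contrast)."""
    return float(img.astype(np.float64).std())

def entropy(img: np.ndarray) -> float:
    """EN: Shannon entropy of the 8-bit grey-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(img: np.ndarray) -> float:
    """AG: mean magnitude of horizontal/vertical intensity differences (sharpness)."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2).mean())

fused = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in fused image
print(std_dev(fused), entropy(fused), average_gradient(fused))
```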
Fig. 8 Fusion results of different methods on dataset 1 ((a) SAR; (b) Visible light; (c) RFN-Nest[24]; (d) NestFuse[25]; (e) CrossFuse 1[26]; (f) U2Fusion[27]; (g) CrossFuse 2[28]; (h) MSFAFuse (Ours))
Table 3 Results of different downsampling methods

| Dataset | SD | EN | MI | AG | VIF | SSIM | MS-SSIM | Downsampling |
|---|---|---|---|---|---|---|---|---|
| OS dataset | 30.690 | 6.789 | 2.522 | 9.070 | 0.595 | 0.545 | 0.580 | Max pooling |
| | 40.077 | 7.141 | 5.944 | 8.424 | 1.042 | 0.590 | 0.606 | RFD |
| QXS-SAROPT | 31.731 | 6.588 | 2.706 | 11.041 | 0.548 | 0.446 | 0.495 | Max pooling |
| | 42.750 | 6.907 | 6.656 | 10.335 | 1.025 | 0.589 | 0.598 | RFD |
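Table 3 contrasts plain max pooling with the robust feature downsampling (RFD) module of ref. [18]. The sketch below only illustrates the interface that gets swapped in the encoder; `SimpleLearnedDown` is a hypothetical strided-convolution stand-in for illustration, not the actual RFD block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaxPoolDown(nn.Module):
    """Baseline: 2x max pooling (the 'Max pooling' rows in Table 3)."""
    def forward(self, x):
        return F.max_pool2d(x, kernel_size=2, stride=2)

class SimpleLearnedDown(nn.Module):
    """Learned 2x downsampling as a stand-in; the real RFD block in [18] is more elaborate."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.conv(x)))

x = torch.randn(1, 64, 256, 256)
print(MaxPoolDown()(x).shape, SimpleLearnedDown(64)(x).shape)   # both halve H and W
```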
Table 4 Ablation experiment of the feature enhancement module

| Dataset | Feature enhancement module | SD | EN | MI | AG | VIF | SSIM | MS-SSIM |
|---|---|---|---|---|---|---|---|---|
| OS dataset | Without | 35.445 | 7.039 | 2.737 | 9.022 | 0.471 | 0.542 | 0.627 |
| | With | 40.077 | 7.141 | 5.944 | 8.424 | 1.042 | 0.590 | 0.606 |
| QXS-SAROPT | Without | 39.430 | 6.997 | 4.019 | 10.087 | 0.606 | 0.510 | 0.551 |
| | With | 42.750 | 6.907 | 6.656 | 10.335 | 1.025 | 0.589 | 0.598 |
Table 5 Analysis of different fusion strategies

| Dataset | Fusion strategy | SD | EN | MI | AG | VIF | SSIM | MS-SSIM |
|---|---|---|---|---|---|---|---|---|
| OS dataset | Add | 31.693 | 6.863 | 0.721 | 11.206 | 0.051 | 0.048 | 0.064 |
| | Avg | 31.689 | 6.858 | 0.726 | 11.203 | 0.051 | 0.048 | 0.064 |
| | Max | 40.882 | 7.050 | 1.161 | 10.892 | 0.068 | 0.054 | 0.065 |
| | L1-Norm | 32.831 | 6.957 | 0.726 | 9.863 | 0.051 | 0.050 | 0.067 |
| | Concat | 37.382 | 7.054 | 4.662 | 8.304 | 0.949 | 0.588 | 0.609 |
| | Add_Conv | 37.865 | 7.071 | 4.678 | 8.301 | 0.953 | 0.590 | 0.614 |
| | no_EAA | 39.295 | 7.072 | 6.060 | 8.107 | 1.036 | 0.590 | 0.607 |
| | L2-Norm | 37.569 | 7.070 | 3.917 | 8.003 | 0.770 | 0.589 | 0.626 |
| | Our fusion module | 40.077 | 7.141 | 5.944 | 8.424 | 1.042 | 0.590 | 0.606 |
| QXS-SAROPT | Add | 30.464 | 6.544 | 1.701 | 8.341 | 0.249 | 0.266 | 0.312 |
| | Avg | 30.460 | 6.534 | 1.702 | 8.337 | 0.249 | 0.266 | 0.312 |
| | Max | 43.056 | 6.888 | 3.943 | 10.567 | 0.371 | 0.315 | 0.338 |
| | L1-Norm | 39.684 | 6.883 | 2.307 | 9.881 | 0.347 | 0.297 | 0.342 |
| | Concat | 39.801 | 6.843 | 4.344 | 10.511 | 0.872 | 0.577 | 0.597 |
| | Add_Conv | 41.046 | 6.876 | 5.427 | 10.208 | 0.978 | 0.585 | 0.599 |
| | no_EAA | 43.477 | 6.925 | 6.234 | 10.508 | 1.019 | 0.584 | 0.598 |
| | L2-Norm | 42.438 | 6.897 | 6.334 | 10.244 | 1.023 | 0.587 | 0.599 |
| | Our fusion module | 42.750 | 6.907 | 6.656 | 10.335 | 1.025 | 0.589 | 0.598 |
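The baselines in Table 5 (Add, Avg, Max, L1-Norm, Concat, Add_Conv) are common element-wise fusion rules. A minimal sketch of how such rules are typically written follows; these are assumed formulations for context, not the paper's code, and the proposed attention-based fusion module itself is not shown.

```python
import torch
import torch.nn as nn

def fuse(sar: torch.Tensor, opt: torch.Tensor, mode: str, conv=None) -> torch.Tensor:
    """Element-wise fusion baselines of the kind compared in Table 5."""
    if mode == "add":
        return sar + opt
    if mode == "avg":
        return (sar + opt) / 2
    if mode == "max":
        return torch.maximum(sar, opt)
    if mode == "l1":                                   # L1-norm activity weighting
        w_sar = sar.abs().sum(dim=1, keepdim=True)
        w_opt = opt.abs().sum(dim=1, keepdim=True)
        w = w_sar / (w_sar + w_opt + 1e-8)
        return w * sar + (1 - w) * opt
    if mode == "concat":                               # channel concat + 1x1 conv reduction
        return conv(torch.cat([sar, opt], dim=1))
    raise ValueError(mode)

sar = torch.randn(1, 64, 64, 64)
opt = torch.randn(1, 64, 64, 64)
reduce_conv = nn.Conv2d(128, 64, kernel_size=1)
for m in ["add", "avg", "max", "l1"]:
    print(m, fuse(sar, opt, m).shape)
print("concat", fuse(sar, opt, "concat", reduce_conv).shape)
```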
Table 6 Ablation experiment of the gradient loss

| Dataset | Gradient loss | SD | EN | MI | AG | VIF | SSIM | MS-SSIM |
|---|---|---|---|---|---|---|---|---|
| OS dataset | Without | 39.011 | 7.089 | 6.612 | 8.059 | 1.050 | 0.588 | 0.604 |
| | With | 40.077 | 7.141 | 5.944 | 8.424 | 1.042 | 0.590 | 0.606 |
| QXS-SAROPT | Without | 42.146 | 6.916 | 5.886 | 10.168 | 0.990 | 0.566 | 0.591 |
| | With | 42.750 | 6.907 | 6.656 | 10.335 | 1.025 | 0.589 | 0.598 |
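Table 6 ablates a gradient loss term in training. As a hedged sketch, one common formulation is a Sobel-based loss that pushes the fused image toward the stronger of the two source gradients; the paper's exact loss may differ from the version below.

```python
import torch
import torch.nn.functional as F

def sobel_grad(x: torch.Tensor) -> torch.Tensor:
    """Per-pixel gradient magnitude via Sobel filters, applied channel-wise."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = x.shape[1]
    gx = F.conv2d(x, kx.repeat(c, 1, 1, 1).to(x.device), padding=1, groups=c)
    gy = F.conv2d(x, ky.repeat(c, 1, 1, 1).to(x.device), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def gradient_loss(fused, sar, opt):
    """Encourage the fused image to keep the stronger of the two source gradients."""
    target = torch.maximum(sobel_grad(sar), sobel_grad(opt))
    return F.l1_loss(sobel_grad(fused), target)

f = torch.rand(1, 1, 256, 256)
s = torch.rand(1, 1, 256, 256)
o = torch.rand(1, 1, 256, 256)
print(gradient_loss(f, s, o).item())
```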
Table 7 Model efficiency analysis

| Method | Parameters/M | Time/s |
|---|---|---|
| RFN-Nest | 30.096 | 6.03 |
| NestFuse | 10.930 | 2.47 |
| CrossFuse 1 | 23.310 | 7.46 |
| U2Fusion | 0.660 | 4.78 |
| CrossFuse 2 | 1.360 | 5.33 |
| MSFAFuse (Ours) | 9.930 | 9.06 |
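The "Parameters/M" and "Time/s" columns of Table 7 can be reproduced for any PyTorch model with bookkeeping like the following sketch; the toy model, input size, and number of timing runs are placeholders, and the paper's measurement protocol may differ.

```python
import time
import torch
import torch.nn as nn

def count_parameters_m(model: nn.Module) -> float:
    """Trainable parameters in millions (the 'Parameters/M' column)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

@torch.no_grad()
def average_inference_time(model: nn.Module, shape=(1, 1, 256, 256), runs: int = 10) -> float:
    """Mean wall-clock seconds per forward pass on a dummy 256x256 input."""
    model.eval()
    x = torch.randn(shape)
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    return (time.perf_counter() - start) / runs

toy = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
print(f"{count_parameters_m(toy):.3f} M, {average_inference_time(toy):.4f} s")
```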
[1] | SHAKYA A, BISWAS M, PAL M. Fusion and classification of SAR and optical data using multi-image color components with differential gradients[J]. Remote Sensing, 2023, 15(1): 274. |
[2] | SHENG J J, YANG X Z, DONG Z Y, et al. Fusion of SAR and visible images based on NSST-IHS and sparse representation[J]. Journal of Graphics, 2018, 39(2): 201-208 (in Chinese). |
[3] | JI F, LI Z R, CHANG X, et al. Remote sensing image fusion method based on PCA and NSCT transform[J]. Journal of Graphics, 2017, 38(2): 247-252 (in Chinese). |
[4] | ZHANG H, SHEN H F, YUAN Q Q, et al. Multispectral and SAR image fusion based on Laplacian pyramid and sparse representation[J]. Remote Sensing, 2022, 14(4): 870. |
[5] | LI W S, XIAO X Y, XIAO P H, et al. Change detection in multitemporal SAR images based on slow feature analysis combined with improving image fusion strategy[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 3008-3023. |
[6] | YE Y X, ZHANG J C, ZHOU L, et al. Optical and SAR image fusion based on complementary feature decomposition and visual saliency features[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5205315. |
[7] | HUANG D S, TANG Y L, WANG Q S. An image fusion method of SAR and multispectral images based on non-subsampled shearlet transform and activity measure[J]. Sensors, 2022, 22(18): 7055. |
[8] | LI X C, JING D, LI Y C, et al. Multi-band and polarization SAR images colorization fusion[J]. Remote Sensing, 2022, 14(16): 4022. |
[9] | LUO J H, ZHOU F, YANG J, et al. DAFCNN: a dual-channel feature extraction and attention feature fusion convolution neural network for SAR image and MS image fusion[J]. Remote Sensing, 2023, 15(12): 3091. |
[10] | CHU B C, CHEN J Y, CHEN J, et al. SDCAFNet: a deep convolutional neural network for land-cover semantic segmentation with the fusion of PolSAR and optical images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 8928-8942. |
[11] | KANG W C, XIANG Y M, WANG F, et al. CFNet: a cross fusion network for joint land cover classification using optical and SAR images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 1562-1574. |
[12] | WANG Y Y, ZHANG W G, CHEN W D, et al. MFFnet: multimodal feature fusion network for synthetic aperture radar and optical image land cover classification[J]. Remote Sensing, 2024, 16(13): 2459. |
[13] | XIA Y, HE W, HUANG Q, et al. SOSSF: landsat-8 image synthesis on the blending of Sentinel-1 and MODIS data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5401619. |
[14] | SUN Y C, YAN K J, LI W Z. CycleGAN-based SAR-optical image fusion for target recognition[J]. Remote Sensing, 2023, 15(23): 5569. |
[15] | SONG B Z, LIU P, LI J, et al. MLFF-GAN: a multilevel feature fusion with GAN for spatiotemporal remote sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 4410816. |
[16] | SU L J, SUI Y X, YUAN Y. An unmixing-based multi-attention GAN for unsupervised hyperspectral and multispectral image fusion[J]. Remote Sensing, 2023, 15(4): 936. |
[17] | WEI J, ZOU H X, SUN L, et al. CFRWD-GAN for SAR-to-optical image translation[J]. Remote Sensing, 2023, 15(10): 2547. |
[18] | LU W, CHEN S B, TANG J, et al. A robust feature downsampling module for remote-sensing visual tasks[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 4404312. |
[19] | SHAKER A, MAAZ M, RASHEED H, et al. SwiftFormer: efficient additive attention for transformer-based real-time mobile vision applications[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 17379-17390. |
[20] | YANG D P, PENG B, AL-HUDA Z, et al. An overview of edge and object contour detection[J]. Neurocomputing, 2022, 488: 470-493. |
[21] | LI H, WU X J, KITTLER J. RFN-Nest: an end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86. |
[22] | LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: common objects in context[C]// The 13th European Conference on Computer Vision. Cham: Springer, 2014: 740-755. |
[23] | XIANG Y M, TAO R S, WANG F, et al. Automatic registration of optical and SAR images via improved phase congruency model[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13: 5847-5861. |
[24] | HUANG M Y, XU Y, QIAN L X, et al. The QXS-SAROPT dataset for deep learning in SAR-optical data fusion[EB/OL]. [2024-05-29]. https://arxiv.org/abs/2103.08259. |
[25] | LI H, WU X J, DURRANI T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656. |
[26] | LI H, WU X J. CrossFuse: a novel cross attention mechanism based infrared and visible image fusion approach[J]. Information Fusion, 2024, 103: 102147. |
[27] | XU H, MA J Y, JIANG J J, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518. |
[28] | WANG Z S, SHAO W Y, CHEN Y L, et al. A cross-scale iterative attentional adversarial fusion network for infrared and visible images[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(8): 3677-3688. |