Journal of Graphics, 2026, Vol. 47, Issue 1: 99-110. DOI: 10.11996/JG.j.2095-302X.2026010099
• Image Processing and Computer Vision •
YANG Biao, WANG Xue, GUAN Zheng, LONG Ping
Received: 2025-06-16; Accepted: 2025-08-18; Online: 2026-02-28; Published: 2026-03-16
Contact: GUAN Zheng
YANG Biao, WANG Xue, GUAN Zheng, LONG Ping. BSD-YOLO: a small target vehicle detection method based on dynamic sparse attention and adaptive detection head[J]. Journal of Graphics, 2026, 47(1): 99-110.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2026010099
| Top-k | Precision | Recall | mAP@0.50 | mAP@0.50:0.95 | GFLOPs |
|---|---|---|---|---|---|
| 1 | 0.56324 | 0.61981 | 0.59937 | 0.41607 | 11.1 |
| 2 | 0.62338 | 0.59359 | 0.60203 | 0.44574 | 11.2 |
| 3 | 0.66923 | 0.57161 | 0.62359 | 0.44892 | 11.3 |
| 4 | 0.67339 | 0.61342 | 0.64258 | 0.44412 | 11.3 |
| 5 | 0.65476 | 0.62412 | 0.63184 | 0.47034 | 11.4 |
| 6 | 0.63235 | 0.62591 | 0.63618 | 0.45768 | 11.5 |
| 7 | 0.61529 | 0.62839 | 0.63324 | 0.45765 | 11.5 |
| 8 | 0.61595 | 0.65682 | 0.64806 | 0.46826 | 11.6 |
Table 1 Sensitivity analysis of k value to ReBiAttention
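The Top-k value swept in Table 1 controls how many key-value regions each query region may attend to in the bi-level routing attention that ReBiAttention builds on (BiFormer [8]). The following is a minimal NumPy sketch of that top-k region routing, not the paper's implementation; all names, shapes, and the mean-pooled region descriptors are illustrative assumptions.

```python
import numpy as np

def topk_routing_attention(q, k, v, num_regions, topk):
    """Toy bi-level routing attention: each query region attends only to
    tokens inside its top-k most similar key regions.
    q, k, v: (N, d) token arrays; N must be divisible by num_regions."""
    N, d = q.shape
    r = N // num_regions  # tokens per region
    # Region-level descriptors via mean pooling (one choice among several)
    qr = q.reshape(num_regions, r, d).mean(axis=1)      # (R, d)
    kr = k.reshape(num_regions, r, d).mean(axis=1)      # (R, d)
    affinity = qr @ kr.T                                # (R, R) region affinity
    routed = np.argsort(-affinity, axis=1)[:, :topk]    # top-k regions per region
    out = np.zeros_like(q)
    for i in range(num_regions):
        # Gather the tokens belonging to the routed key regions
        idx = np.concatenate([np.arange(j * r, (j + 1) * r) for j in routed[i]])
        qi = q[i * r:(i + 1) * r]                       # (r, d) queries of region i
        scores = qi @ k[idx].T / np.sqrt(d)             # (r, topk * r)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)               # softmax over routed tokens
        out[i * r:(i + 1) * r] = w @ v[idx]
    return out
```

Raising `topk` widens each region's receptive field toward full attention, which matches the GFLOPs column of Table 1 growing monotonically with k.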
| No. | Parameter | Setting |
|---|---|---|
| 1 | epochs | 300 |
| 2 | Batch | 8 |
| 3 | imgsz | 640 |
| 4 | workers | 4 |
| 5 | optimizer | SGD |
| 6 | close_mosaic | 0 |
| 7 | patience | 50 |
| 8 | warmup_epochs | 3.0 |
| 9 | warmup_momentum | 0.8 |
| 10 | lr0 | 0.01 |
| 11 | lrf | 0.01 |
| 12 | mosaic | 1.0 |
| 13 | weight_decay | 0.0005 |
Table 2 Training parameter settings
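The hyper-parameter names in Table 2 correspond directly to training arguments of the Ultralytics YOLOv8 framework [25]. A hypothetical launch script could pass them as follows; the model/dataset YAML filenames are placeholders, not values given in the paper.

```python
# Training settings from Table 2, keyed by their Ultralytics argument names.
train_args = {
    "epochs": 300, "batch": 8, "imgsz": 640, "workers": 4,
    "optimizer": "SGD", "close_mosaic": 0, "patience": 50,
    "warmup_epochs": 3.0, "warmup_momentum": 0.8,
    "lr0": 0.01, "lrf": 0.01, "mosaic": 1.0, "weight_decay": 0.0005,
}

# Sketch of a launch (requires `pip install ultralytics`; filenames assumed):
# from ultralytics import YOLO
# model = YOLO("yolov8n.yaml")
# model.train(data="ua-detrac.yaml", **train_args)
```

Note that `lrf` is the final learning-rate fraction, so `lr0=0.01, lrf=0.01` schedules the learning rate from 0.01 down to 0.0001.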
| Method | Precision | Recall | mAP@0.50 | mAP@0.50:0.95 | GFLOPs | Params/M |
|---|---|---|---|---|---|---|
| YOLOv8n | 0.60881 | 0.59817 | 0.58904 | 0.43127 | 8.1 | 3.00 |
| +C2f-ReBiAttention | 0.63609 | 0.57685 | 0.59203 | 0.39998 | 8.1 | 2.95 |
| +C3-ReBiAttention | 0.63966 | 0.48680 | 0.57148 | 0.41749 | 8.0 | 2.93 |
| +CPN-ReBiAttention | 0.64637 | 0.56350 | 0.59816 | 0.42437 | 8.0 | 2.93 |
| +CSC-ReBiAttention | 0.64202 | 0.56664 | 0.58440 | 0.40942 | 8.0 | 2.94 |
| +ReBiAttention | 0.61595 | 0.65682 | 0.64806 | 0.46826 | 11.6 | 3.45 |
| Ours | 0.69620 | 0.61502 | 0.66159 | 0.46831 | 7.9 | 2.87 |
Table 3 Comparison of the results of adding different ReBiAttention modules
| Method | Precision | Recall | mAP@0.50 | mAP@0.50:0.95 | Params/M |
|---|---|---|---|---|---|
| YOLOv8n | 0.60881 | 0.59817 | 0.58904 | 0.43127 | 3.00 |
| YOLOv12 | 0.71372 | 0.51493 | 0.60374 | 0.43387 | 2.51 |
| YOLOv6 | 0.57075 | 0.54344 | 0.55663 | 0.41399 | 4.23 |
| YOLOv5 | 0.64875 | 0.56117 | 0.58845 | 0.40621 | 2.50 |
| YOLOv3 | 0.67922 | 0.57832 | 0.60795 | 0.45378 | 103.66 |
| YOLOv8n+ReBiAttention | 0.61595 | 0.65682 | 0.64806 | 0.46826 | 3.45 |
| YOLOv8n+SN | 0.61318 | 0.59930 | 0.60196 | 0.42642 | 2.80 |
| YOLOv8+Dyhead-small | 0.73532 | 0.59065 | 0.64883 | 0.47572 | 2.51 |
| Ours | 0.69620 | 0.61502 | 0.66159 | 0.46831 | 2.87 |
Table 4 Comparison of network structures
| Method | Precision | Recall | mAP@0.50 | mAP@0.50:0.95 | Params/M |
|---|---|---|---|---|---|
| YOLOv8n | 0.60881 | 0.59817 | 0.58904 | 0.43127 | 3.00 |
| +AFPN[31] | 0.58092 | 0.52042 | 0.54167 | 0.39188 | 2.11 |
| +AFPN-small | 0.63040 | 0.57621 | 0.58882 | 0.43125 | 3.67 |
| +AFPN-large | 0.63776 | 0.49245 | 0.54828 | 0.38495 | 2.11 |
| +ASFFHead | 0.63627 | 0.56777 | 0.58566 | 0.41671 | 4.38 |
| +Dyhead-base | 0.74031 | 0.53293 | 0.59911 | 0.42929 | 4.75 |
| +Dyhead-large | 0.65017 | 0.57904 | 0.60489 | 0.42023 | 13.10 |
| +Dyhead-small | 0.73532 | 0.59065 | 0.64883 | 0.47572 | 2.51 |
| Ours | 0.69620 | 0.61502 | 0.66159 | 0.46831 | 2.87 |
Table 5 Comparison of various types of detection heads
| Method | ReBiAttention | SN | Dyhead | ShapeIoU | Precision | Recall | mAP@0.50 | mAP@0.50:0.95 | Params/M |
|---|---|---|---|---|---|---|---|---|---|
| YOLOv8n | | | | | 0.60881 | 0.59817 | 0.58904 | 0.43127 | 3.00 |
| A | √ | | | | 0.61595 | 0.65682 | 0.64806 | 0.46826 | 3.45 |
| B | | √ | | | 0.61318 | 0.59930 | 0.60196 | 0.42642 | 2.80 |
| C | | | √ | | 0.73532 | 0.59065 | 0.64883 | 0.47572 | 2.51 |
| D | √ | √ | | | 0.68019 | 0.60161 | 0.63458 | 0.44655 | 3.24 |
| E | √ | | √ | | 0.73735 | 0.56193 | 0.62249 | 0.44788 | 3.01 |
| F | | √ | √ | | 0.66311 | 0.57228 | 0.62618 | 0.44579 | 2.40 |
| G | √ | √ | √ | | 0.71068 | 0.57031 | 0.63686 | 0.45635 | 2.85 |
| Ours | √ | √ | √ | √ | 0.69620 | 0.61502 | 0.66159 | 0.46831 | 2.87 |
Table 6 Ablation experiments
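The ShapeIoU column in Table 6 refers to the bounding-box regression loss swapped in for the final model. Shape-aware IoU losses extend the plain IoU overlap term with shape- and scale-dependent penalties; the sketch below shows only that base IoU quantity (the ShapeIoU-specific weighting is not reproduced here), with a toy box format assumed as (x1, y1, x2, y2).

```python
def iou(box_a, box_b):
    """Plain IoU between two (x1, y1, x2, y2) boxes. Shape-IoU-style
    regression losses start from this overlap and add penalties that
    depend on the boxes' shape and scale."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the intersection rectangle (clamped at zero)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Two unit-overlap boxes: intersection 1, union 4 + 4 - 1 = 7
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.142857...
```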
| [1] | HUO J Y, SU H R, WU Z Y, et al. Road traffic small target vehicle detection algorithm based on improved YOLOv8[J]. Computer Engineering, 2025, 51(1): 246-257 (in Chinese). |
| [2] | NAVIA-VAZQUEZ A, GUTIERREZ-GONZALEZ D, PARRADO-HERNÁNDEZ E, et al. Distributed support vector machines[J]. IEEE Transactions on Neural Networks, 2006, 17(4): 1091-1097. |
| [3] | ARREOLA L, GUDIÑO G, FLORES G. Object recognition and tracking using Haar-like features cascade classifiers: application to a quad-rotor UAV[C]// 2022 8th International Conference on Control, Decision and Information Technologies. New York: IEEE Press, 2022: 45-50. |
| [4] | DU Q Y. Research on aerial traffic small target detection algorithm in UAV based on improved YOLOv8[D]. Dalian: Dalian Jiaotong University, 2025 (in Chinese). |
| [5] | JU M R, LUO H B, WANG Z B, et al. Improved YOLO V3 algorithm and its application in small target detection[J]. Acta Optica Sinica, 2019, 39(7): 0715004 (in Chinese). |
| [6] | PU Z Y, LUO S Y. Object detection method in complex traffic scenarios[J]. Information and Control, 2025, 54(4): 632-643 (in Chinese). |
| [7] | SUN X H, GUAN Z, WANG X. Vision transformer for fusing infrared and visible images in groups[J]. Journal of Image and Graphics, 2023, 28(1): 166-178 (in Chinese). |
| [8] | ZHU L, WANG X J, KE Z H, et al. BiFormer: vision transformer with bi-level routing attention[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 10323-10333. |
| [9] | HUANG C Q, XU H Y, ZHANG X L, et al. BGR-YOLO: an improved object detection algorithm under traffic scenarios based on YOLOv8[EB/OL]. (2025-04-08) [2025-05-29]. https://link.cnki.net/urlid/43.1258.TP.20250408.1455.002 (in Chinese). |
| [10] | LIU Y L, ZHANG Z L, FENG J N. UAV-YOLO-based lightweight object detection algorithm for UAV aerial images[J]. Modern Electronics Technique, 2025, 48(15): 51-56 (in Chinese). |
| [11] | HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. [2025-04-16]. https://arxiv.org/abs/1704.04861. |
| [12] | LUO Y H, CAO X, ZHANG J T, et al. CE-FPN: enhancing channel information for object detection[J]. Multimedia Tools and Applications, 2022, 81(21): 30685-30704. |
| [13] | LI H L, LI J, WEI H B, et al. Slim-neck by GSConv: a lightweight-design for real-time detector architectures[J]. Journal of Real-Time Image Processing, 2024, 21(3): 62. |
| [14] | ZHENG W, TANG W L, JIANG L, et al. SE-SSD: self-ensembling single-stage object detector from point cloud[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 14489-14498. |
| [15] | MIAO T, ZENG H C, YANG W, et al. An improved lightweight RetinaNet for ship detection in SAR images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 4667-4679. |
| [16] | MAITY M, BANERJEE S, CHAUDHURI S S. Faster R-CNN and YOLO based vehicle detection: a survey[C]// The 5th International Conference on Computing Methodologies and Communication. New York: IEEE Press, 2021: 1442-1447. |
| [17] | CHAI B S, NIE X, ZHOU Q F, et al. Enhanced cascade R-CNN for multiscale object detection in dense scenes from SAR images[J]. IEEE Sensors Journal, 2024, 24(12): 20143-20153. |
| [18] | REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 779-788. |
| [19] | REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 6517-6525. |
| [20] | LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 936-944. |
| [21] | BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[EB/OL]. [2025-04-16]. https://arxiv.org/abs/2004.10934. |
| [22] | NELSON J, SOLAWETZ J. YOLOv5 is here: state-of-the-art object detection at 140 FPS[EB/OL]. (2020-06-10) [2025-04-16]. https://blog.roboflow.com/yolov5-is-here/. |
| [23] | LI C Y, LI L L, JIANG H L, et al. YOLOv6: a single-stage object detection framework for industrial applications[EB/OL]. [2025-04-16]. https://arxiv.org/abs/2209.02976. |
| [24] | WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[EB/OL]. [2025-04-16]. https://arxiv.org/pdf/2207.02696.pdf. |
| [25] | Ultralytics. YOLOv8 (8.0)[EB/OL]. [2025-04-16]. https://github.com/ultralytics/ultralytics. |
| [26] | YANG J H, LI H, DU Y Y, et al. A lightweight object detection algorithm based on improved YOLOv5s[J]. Electronics Optics & Control, 2023, 30(2): 24-30 (in Chinese). |
| [27] | YU B Y, LI Z X, CAO Y, et al. YOLO-MPAM: efficient real-time neural networks based on multi-channel feature fusion[J]. Expert Systems with Applications, 2024, 252: 124282. |
| [28] | DAI X Y, CHEN Y P, XIAO B, et al. Dynamic head: unifying object detection heads with attentions[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 7369-7378. |
| [29] | TIAN Y J, YE Q X, DOERMANN D. YOLOv12: attention-centric real-time object detectors[EB/OL]. [2025-04-16]. https://arxiv.org/abs/2502.12524. |
| [30] | WEN L Y, DU D W, CAI Z W, et al. UA-DETRAC: a new benchmark and protocol for multi-object detection and tracking[J]. Computer Vision and Image Understanding, 2020, 193: 102907. |
| [31] | YANG G Y, LEI J, ZHU Z K, et al. AFPN: asymptotic feature pyramid network for object detection[C]// 2023 IEEE International Conference on Systems, Man, and Cybernetics. New York: IEEE Press, 2023: 2184-2189. |