[1] ZENG Z C, XU Y, WANG J Y, et al. A water surface target detection algorithm based on SOE-YOLO lightweight network[EB/OL]. [2024-04-25]. http://kns.cnki.net/kcms/detail/10.1034.T.20240417.1457.002.html (in Chinese).
[2] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2014: 580-587.
[3] GIRSHICK R. Fast R-CNN[C]// 2015 IEEE International Conference on Computer Vision. New York: IEEE Press, 2015: 1440-1448.
[4] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[5] KANG D, BENIPAL S S, GOPAL D L, et al. Hybrid pixel-level concrete crack segmentation and quantification across complex backgrounds using deep learning[J]. Automation in Construction, 2020, 118: 103291.
[6] YAMAGUCHI T, MIZUTANI T. Quantitative road crack evaluation by a U-Net architecture using smartphone images and Lidar data[J]. Computer-Aided Civil and Infrastructure Engineering, 2024, 39(7): 963-982.
[7] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 779-788.
[8] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 6517-6525.
[9] REDMON J, FARHADI A. YOLOv3: an incremental improvement[EB/OL]. [2024-04-25]. http://arxiv.org/abs/1804.02767.
[10] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[EB/OL]. [2024-04-25]. http://arxiv.org/abs/2004.10934.
[11] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector[M]// Computer Vision-ECCV 2016. Cham: Springer International Publishing, 2016: 21-37.
[12] WANG N N, SHANG L H, SONG X T. A transformer-optimized deep learning network for road damage detection and tracking[J]. Sensors, 2023, 23(17): 7395.
[13] XIANG W N, WANG H C, XU Y, et al. Road disease detection algorithm based on YOLOv5s-DSG[J]. Journal of Real-Time Image Processing, 2023, 20(3): 56.
[14] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 7464-7475.
[15] CUI K B, JIAO J Y. Steel surface defect detection algorithm based on MCB-FAH-YOLOv8[J]. Journal of Graphics, 2024, 45(1): 112-125 (in Chinese).
[16] DAI J F, QI H Z, XIONG Y W, et al. Deformable convolutional networks[C]// 2017 IEEE International Conference on Computer Vision. New York: IEEE Press, 2017: 764-773.
[17] ZHU X Z, HU H, LIN S, et al. Deformable ConvNets V2: more deformable, better results[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 9300-9308.
[18] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[EB/OL]. [2024-01-12]. https://arxiv.org/abs/1706.03762.
[19] HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. [2024-01-12]. http://arxiv.org/abs/1704.04861.
[20] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 7132-7141.
[21] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[M]// Computer Vision-ECCV 2018. Cham: Springer International Publishing, 2018: 3-19.