Journal of Graphics ›› 2024, Vol. 45 ›› Issue (4): 659-669.DOI: 10.11996/JG.j.2095-302X.2024040659
• Image Processing and Computer Vision •
ZHANG Xinyu1,2, ZHANG Jiayi1,2,3, GAO Xin2,3
Received: 2024-03-08
Accepted: 2024-05-08
Online: 2024-08-31
Published: 2024-09-02
Contact: GAO Xin
About author: ZHANG Xinyu (1998-), master student. His main research interest covers surgical navigation. E-mail: 798091761@qq.com
ZHANG Xinyu, ZHANG Jiayi, GAO Xin. ASC-Net: fast segmentation network for surgical instruments and organs in laparoscopic video[J]. Journal of Graphics, 2024, 45(4): 659-669.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2024040659
Fig. 2 Attention Perceptron Block ((a) Overall architecture; (b) Architecture of the multi-head channel attention; (c) Architecture of the multilayer conv perceptron)
Fig. 3 Spatial Channel Block ((a) Overall architecture; (b) Architecture of the atrous spatial paralleling block; (c) Architecture of the channel fusion block)
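Fig. 2 names a multi-head channel attention (MHCA) module but the page reproduces only the figure caption, not the formulation. The sketch below is therefore a generic multi-head channel-attention pattern only (split channels into heads, describe each head by global average pooling, reweight channels with a softmax), not the paper's exact design; the function name, shapes, and the random stand-in for learned projection weights are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_channel_attention(x, heads=4, rng=None):
    """Generic multi-head channel attention sketch.

    x: feature map of shape (C, H, W); C must be divisible by `heads`.
    Each head pools its channel slice to a descriptor, projects it
    (here with a random stand-in for learned weights), and uses a
    softmax over channels to reweight the slice.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    C, H, W = x.shape
    assert C % heads == 0, "channel count must split evenly across heads"
    ch = C // heads
    out = np.empty_like(x)
    for h in range(heads):
        seg = x[h * ch:(h + 1) * ch]           # (ch, H, W) slice for this head
        desc = seg.mean(axis=(1, 2))           # global average pooling -> (ch,)
        W_proj = rng.standard_normal((ch, ch)) * 0.1  # stand-in for a learned layer
        attn = softmax(W_proj @ desc)          # channel weights, sum to 1 per head
        out[h * ch:(h + 1) * ch] = seg * attn[:, None, None]
    return out
```

In a trained network the projection would be a learned linear layer (and the paper's MHCA may differ in how heads interact); this sketch only illustrates the head-wise pooling-and-reweighting structure the caption suggests.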
| Model | ASPB | CFB | mDice/% | mIoU/% |
|---|---|---|---|---|
| Baseline | × | × | 48.86 | 39.86 |
| Baseline | √ | × | 70.12 | 65.47 |
| Baseline | × | √ | 68.28 | 58.13 |
| Baseline | √ | √ | 71.39 | 66.59 |
Table 1 Validation experiment results for CFB and ASPB of SCB on EndoVis2018
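The tables report mDice and mIoU, the class-wise means of the standard Dice coefficient and intersection-over-union. As a reference for how these are conventionally computed from binary masks (the paper's own evaluation code is not shown here, so treat this as a standard-definition sketch):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|); eps guards the empty-mask case.
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-7):
    # IoU (Jaccard) = |A ∩ B| / |A ∪ B|.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

def mean_metric(metric, pred_labels, gt_labels, classes):
    # mDice / mIoU: average the per-class binary metric over classes.
    return float(np.mean([metric(pred_labels == c, gt_labels == c)
                          for c in classes]))
```

For example, with a prediction covering two pixels and a ground truth covering one shared pixel, Dice is 2/3 and IoU is 1/2; how empty classes are counted in the per-class average can vary between papers.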
| Model | mDice | mIoU |
|---|---|---|
| Baseline | 48.86 | 39.86 |
| Baseline+SE | 67.94 | 54.63 |
| Baseline+ASPP | 70.04 | 58.46 |
| Baseline+DAM | 70.21 | 62.88 |
| Baseline+SCB | 71.39 | 66.59 |
Table 2 Comparative experiment results for SCB on EndoVis2018/%
| Model | MHCA | MCP | mDice/% | mIoU/% |
|---|---|---|---|---|
| Baseline | × | × | 48.86 | 39.86 |
| Baseline | √ | × | 71.21 | 62.48 |
| Baseline | × | √ | 52.54 | 43.92 |
| Baseline | √ | √ | 73.31 | 65.15 |
Table 3 Validation experiment results for MHCA and MCP of APB on EndoVis2018
| Model | APB | SCB | mDice/% | mIoU/% |
|---|---|---|---|---|
| Baseline | × | × | 48.86 | 39.86 |
| Baseline | √ | × | 73.31 | 65.15 |
| Baseline | × | √ | 71.39 | 66.59 |
| Baseline | √ | √ | 90.64 | 86.40 |
Table 4 Ablation experiment results for SCB and APB on EndoVis2018
| Model | Metric | Average/% | Instruments/% | Organs/% | mIT/ms (GPU) | FLOPs/G | Params/M |
|---|---|---|---|---|---|---|---|
| UNet | mDice | 43.95 | 35.55 | 63.54 | 13.84 | 61.90 | 31.01 |
| | mIoU | 34.83 | 27.84 | 51.12 | | | |
| TernausNet | mDice | 48.86 | 38.34 | 73.42 | 26.58 | 24.76 | 32.15 |
| | mIoU | 39.86 | 29.47 | 64.11 | | | |
| RAUNet | mDice | 68.18 | 58.31 | 91.21 | 37.83 | 31.61 | 22.14 |
| | mIoU | 59.16 | 47.74 | 85.80 | | | |
| BARNet | mDice | 70.10 | 62.01 | 89.01 | 52.52 | - | - |
| | mIoU | 59.92 | 50.47 | 81.97 | | | |
| DeepLabv3+ | mDice | 70.69 | 62.39 | 90.05 | 39.47 | 35.59 | 21.95 |
| | mIoU | 60.94 | 51.38 | 83.26 | | | |
| MFC | mDice | 56.40 | 44.84 | 83.35 | 78.63 | 149.84 | 49.89 |
| | mIoU | 50.04 | 38.19 | 77.68 | | | |
| SRBNet | mDice | 71.90 | 64.20 | 89.86 | 38.40 | - | - |
| | mIoU | 62.19 | 53.19 | 83.19 | | | |
| ASC-Net | mDice | 90.64 | 89.87 | 92.42 | 16.73 | 17.35 | 32.94 |
| | mIoU | 86.40 | 85.70 | 88.05 | | | |
Table 5 Segmentation performance of each method on EndoVis2018
| Model | Metric | Average/% | Instruments/% | Organs/% | mIT/ms (GPU) | FLOPs/G | Params/M |
|---|---|---|---|---|---|---|---|
| UNet | Dice | 74.75 | 85.28 | 64.21 | 11.44 | 48.38 | 31.01 |
| | IoU | 69.37 | 83.30 | 55.43 | | | |
| TernausNet | Dice | 80.55 | 87.93 | 73.17 | 23.15 | 18.75 | 32.15 |
| | IoU | 76.40 | 85.08 | 67.72 | | | |
| RAUNet | Dice | 81.17 | 88.07 | 74.27 | 34.58 | 26.24 | 22.14 |
| | IoU | 77.35 | 83.03 | 71.67 | | | |
| DeepLabv3+ | Dice | 83.03 | 90.49 | 75.56 | 38.37 | 27.66 | 21.95 |
| | IoU | 79.97 | 88.01 | 71.93 | | | |
| MFC | Dice | 81.78 | 89.38 | 74.18 | 62.58 | 136.71 | 49.89 |
| | IoU | 76.55 | 86.67 | 66.43 | | | |
| ASC-Net | Dice | 93.72 | 96.68 | 90.76 | 16.41 | 14.02 | 32.94 |
| | IoU | 89.43 | 93.56 | 85.29 | | | |
Table 6 Segmentation performance of each method on AutoLaparo
| Model | Metric | Shaft | End effector | Wrist | Ultrasound probe | Clip | Suture needle | Suture thread | Kidney parenchyma | Kidney capsule | Intestine | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| UNet | Dice | 93.81 | 57.94 | 60.90 | 35.74 | 0.28 | 0.00 | 0.18 | 76.95 | 28.47 | 85.20 | 43.95 |
| | IoU | 88.35 | 40.79 | 43.78 | 21.76 | 0.14 | 0.00 | 0.09 | 62.54 | 16.60 | 74.22 | 34.83 |
| TernausNet | Dice | 94.53 | 62.38 | 61.64 | 37.51 | 12.07 | 0.00 | 0.24 | 81.14 | 51.83 | 87.29 | 48.86 |
| | IoU | 89.86 | 45.57 | 45.63 | 24.41 | 0.68 | 0.00 | 0.14 | 73.29 | 39.87 | 79.16 | 39.86 |
| RAUNet | Dice | 95.07 | 70.72 | 76.06 | 52.35 | 68.66 | 0.00 | 45.29 | 95.90 | 79.41 | 98.31 | 68.18 |
| | IoU | 93.09 | 57.68 | 63.97 | 35.85 | 53.77 | 0.00 | 29.82 | 92.42 | 68.65 | 96.33 | 59.16 |
| BARNet | Dice | 96.36 | 75.56 | 80.17 | 50.37 | 74.29 | 0.32 | 56.95 | 95.31 | 73.13 | 98.59 | 70.10 |
| | IoU | 92.97 | 60.72 | 66.90 | 33.66 | 59.10 | 0.16 | 39.81 | 91.05 | 57.65 | 97.21 | 59.92 |
| DeepLabv3+ | Dice | 96.67 | 73.58 | 81.43 | 49.79 | 82.54 | 0.00 | 52.73 | 95.07 | 76.44 | 98.64 | 70.69 |
| | IoU | 93.56 | 58.20 | 68.68 | 33.14 | 70.27 | 0.00 | 35.80 | 90.06 | 61.87 | 97.31 | 60.94 |
| MFC | Dice | 96.24 | 72.70 | 80.85 | 49.71 | 14.41 | 0.00 | 0.00 | 89.41 | 74.75 | 85.89 | 56.40 |
| | IoU | 93.37 | 58.94 | 68.59 | 34.78 | 11.69 | 0.00 | 0.00 | 88.98 | 60.53 | 83.54 | 50.04 |
| SRBNet | Dice | 96.86 | 76.64 | 82.37 | 49.62 | 78.83 | 0.00 | 65.09 | 95.22 | 75.15 | 99.25 | 71.90 |
| | IoU | 93.91 | 62.12 | 70.02 | 32.99 | 65.06 | 0.00 | 48.24 | 90.88 | 60.19 | 98.51 | 62.19 |
| ASC-Net | Dice | 91.58 | 85.32 | 79.45 | 94.45 | 94.04 | 96.61 | 87.66 | 92.33 | 92.53 | 92.40 | 90.64 |
| | IoU | 89.12 | 81.85 | 71.67 | 92.35 | 90.49 | 90.24 | 84.14 | 88.72 | 85.66 | 89.77 | 86.40 |
Table 7 Multi-category segmentation performance of each method on EndoVis2018 (Shaft through Suture thread: surgical instruments; Kidney parenchyma through Intestine: organ tissue)
Fig. 4 Segmentation results of each method on EndoVis2018 ((a) Small-scale targets; (b) Blood contamination; (c) Highlight reflection; (d) Specular reflection)
| [1] | LI C Y, BAI J, ZHENG L. A U-Net based contour enhanced attention for medical image segmentation[J]. Journal of Graphics, 2022, 43(2): 273-278 (in Chinese). |
| [2] | ZHANG L Y, ZHAO H R, HE W, et al. Knee cysts detection algorithm based on Mask R-CNN integrating global-local attention module[J]. Journal of Graphics, 2023, 44(6): 1183-1190 (in Chinese). |
| [3] | GARCÍA-PERAZA-HERRERA L C, LI W Q, FIDON L, et al. ToolNet: holistically-nested real-time segmentation of robotic surgical tools[C]// 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2017: 5717-5722. |
| [4] | KAMRUL HASAN S M, LINTE C A. U-NetPlus: a modified encoder-decoder U-net architecture for semantic and instance segmentation of surgical instruments from laparoscopic images[C]// 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society. New York: IEEE Press, 2019: 7205-7211. |
| [5] | QIN F B, LIN S, LI Y M, et al. Towards better surgical instrument segmentation in endoscopic vision: multi-angle feature aggregation and contour supervision[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 6639-6646. |
| [6] | YANG L, GU Y G, BIAN G B, et al. An attention-guided network for surgical instrument segmentation from endoscopic images[J]. Computers in Biology and Medicine, 2022, 151(Pt A): 106216. |
| [7] | JIN Y M, CHENG K Y, DOU Q, et al. Incorporating temporal prior from motion flow for instrument segmentation in minimally invasive surgery video[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2019: 440-448. |
| [8] | KURMANN T, MÁRQUEZ-NEILA P, ALLAN M, et al. Mask then classify: multi-instance segmentation for surgical instruments[J]. International Journal of Computer Assisted Radiology and Surgery, 2021, 16(7): 1227-1236. |
| [9] | NI Z L, ZHOU X H, WANG G N, et al. SurgiNet: pyramid attention aggregation and class-wise self-distillation for surgical instrument segmentation[J]. Medical Image Analysis, 2022, 76: 102310. |
| [10] | SHAN F M, WANG M W, LI M. Multi-scale convolutional neural network incorporating attention mechanism for intestinal polyp segmentation[J]. Journal of Graphics, 2023, 44(1): 50-58 (in Chinese). |
| [11] | LU Q, SHAO H Z, ZHANG Y L. Dynamic balanced multi-scale feature fusion for colorectal polyp segmentation[J]. Journal of Graphics, 2023, 44(2): 225-232 (in Chinese). |
| [12] | GIBSON E, ROBU M R, THOMPSON S, et al. Deep residual networks for automatic segmentation of laparoscopic videos of the liver[C]//Medical Imaging 2017:Image-Guided Procedures, Robotic Interventions, and Modeling. Bellingham:SPIE, 2017: 423-428. |
| [13] | NI Z L, BIAN G B, LI Z, et al. Space squeeze reasoning and low-rank bilinear feature fusion for surgical image segmentation[J]. IEEE Journal of Biomedical and Health Informatics, 2022, 26(7): 3209-3217. |
| [14] | ALLAN M, KONDO S, BODENSTEDT S, et al. 2018 robotic scene segmentation challenge[EB/OL]. [2024-01-18]. http://arxiv.org/abs/2001.11190. |
| [15] | HASAN S M K, SIMON R A, LINTE C A. Inpainting surgical occlusion from laparoscopic video sequences for robot-assisted interventions[J]. Journal of Medical Imaging, 2023, 10(4): 045002. |
| [16] | IGLOVIKOV V, SEFERBEKOV S, BUSLAEV A, et al. TernausNetV2: fully convolutional network for instance segmentation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. New York: IEEE Press, 2018: 228-2284. |
| [17] | HENDRYCKS D, GIMPEL K. Gaussian error linear units (GELUs)[EB/OL]. [2024-01-18]. http://arxiv.org/abs/1606.08415. |
| [18] | KUMAR R L, KAKARLA J, ISUNURI B V, et al. Multi-class brain tumor classification using residual network and global average pooling[J]. Multimedia Tools and Applications, 2021, 80(9): 13429-13438. |
| [19] | WANG Z Y, LU B, LONG Y H, et al. AutoLaparo: A new dataset of integrated multi-tasks for image-guided surgical automation in laparoscopic hysterectomy[C]// International Conference on Medical Image Computing and Computer- Assisted Intervention. Cham: Springer, 2022: 486-496. |
| [20] | LENG Z Q, TAN M X, LIU C X, et al. PolyLoss: a polynomial expansion perspective of classification loss functions[EB/OL]. [2024-01-18]. http://arxiv.org/abs/2204.12511. |
| [21] | ZHONG Z L, LIN Z Q, BIDART R, et al. Squeeze-and- attention networks for semantic segmentation[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 13062-13071. |
| [22] | CHEN L C, ZHU Y K, PAPANDREOU G, et al. Encoder- decoder with atrous separable convolution for semantic image segmentation[C]// Computer Vision - ECCV 2018: 15th European Conference. New York: ACM, 2018: 833-851. |
| [23] | FU J, LIU J, TIAN H J, et al. Dual attention network for scene segmentation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 3141-3149. |
| [24] | RONNEBERGER O, FISCHER P, BROX T. U-net: convolutional networks for biomedical image segmentation[M]// Lecture Notes in Computer Science. Cham: Springer International Publishing, 2015: 234-241. |
| [25] | NI Z L, BIAN G B, ZHOU X H, et al. RAUNet: residual attention U-net for semantic segmentation of cataract surgical instruments[C]// International Conference on Neural Information Processing. Cham: Springer, 2019: 139-149. |
| [26] | NI Z L, BIAN G B, WANG G N, et al. BARNet: bilinear attention network with adaptive receptive fields for surgical instrument segmentation[EB/OL]. [2024-01-18]. http://arxiv.org/abs/2001.07093. |
| [27] | ZHAO X K, HAYASHI Y, ODA M, et al. Masked frequency consistency for domain-adaptive semantic segmentation of laparoscopic images[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2023: 663-673. |