Journal of Graphics ›› 2025, Vol. 46 ›› Issue (4): 763-774. DOI: 10.11996/JG.j.2095-302X.2025040763
Semi-supervised pulmonary airway segmentation based on dynamically decoupling intra-class regions

WANG Ziyu1,2,3, CAO Weiwei1,2, CAO Yuzhu1,2, LIU Meng4, CHEN Jun5,6, LIU Zhaobang1,2, ZHENG Jian1,2,3

Received: 2024-12-31
Revised: 2025-02-26
Published: 2025-08-30
Online: 2025-08-11
Corresponding author: ZHENG Jian (1984-), male, researcher, Ph.D. His main research interest covers medical image analysis. E-mail: zhengj@sibet.ac.cn
First author: WANG Ziyu (2000-), male, master student. His main research interest covers medical image segmentation. E-mail: wangziyu_0731@mail.ustc.edu.cn
Abstract:
Accurate segmentation of the pulmonary airway from computed tomography (CT) images is an essential prerequisite for diagnosing and treating various lung diseases, but the complex tree-like structure of the airway makes it extremely difficult to obtain the pixel-level annotations needed to train deep neural networks. Semi-supervised learning offers a new route to airway segmentation under limited annotation. However, in the airway segmentation task, large airways (the trachea and its main branches) and small airways (peripheral bronchioles) differ markedly in voxel count, branch count, and morphology. This severe intra-class imbalance causes semi-supervised models to overfit the dominant sub-class, leaving the peripheral bronchioles under-represented and poorly segmented, which limits clinical application. To address this problem, a novel semi-supervised pulmonary airway segmentation framework based on a single-teacher, dual-student, three-branch network was proposed to segment the airway tree accurately with limited annotated data. A plug-and-play dynamic threshold module was designed to guide the network, during iterative training, to identify sub-regions of differing segmentation difficulty. In addition, a novel intra-class region decoupling strategy was proposed, applying different optimization constraints to learn representations for sub-regions of different difficulty. Experiments on two public datasets and one private dataset showed that the proposed method outperformed existing state-of-the-art methods in airway segmentation, achieving a Dice similarity coefficient (DSC) of 91.96%, with the tree length detected (TD) and branch detected (BD) metrics reaching 81.88% and 73.82%, respectively, enabling fast and accurate pulmonary airway segmentation in CT images.
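The three metrics reported throughout the abstract and tables (Dice, TD, BD) can be stated precisely. Below is a minimal pure-Python sketch, not the challenge's reference implementation: it works on flattened binary masks, and TD/BD are shown only in their shared "fraction of reference voxels detected" form (the real TD uses skeletonized centerline voxels, the real BD uses per-branch labels):

```python
def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks,
    given as equal-length flat 0/1 sequences."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    total = sum(pred) + sum(gt)
    return 2.0 * inter / total if total else 1.0

def detected_rate(reference_voxels, pred):
    """Shared form of TD and BD: the fraction of reference voxels
    (centerline voxels for TD, branch-representative voxels for BD)
    covered by the prediction. `reference_voxels` is a set of indices."""
    if not reference_voxels:
        return 1.0
    hit = sum(1 for i in reference_voxels if pred[i])
    return hit / len(reference_voxels)
```

For example, a prediction that covers two of three foreground voxels yields Dice = 2·2/(2+3) = 0.8 and a detected rate of 2/3 over those three reference voxels.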
WANG Ziyu, CAO Weiwei, CAO Yuzhu, LIU Meng, CHEN Jun, LIU Zhaobang, ZHENG Jian. Semi-supervised pulmonary airway segmentation based on dynamically decoupling intra-class regions[J]. Journal of Graphics, 2025, 46(4): 763-774.
Fig. 1 Illustrations of the severe intra-class imbalance problem in airway segmentation ((a) Airway structure; (b) Chest CT images in the transverse plane; (c) Local details; (d) Statistics of voxel counts and branch numbers of airway at different levels)
| Method | Labeled/% | Unlabeled/% | Dice | TD | BD |
|---|---|---|---|---|---|
| 3D U-Net | 10 | 0 | 85.76 | 67.64 | 56.18 |
| 3D U-Net | 20 | 0 | 90.65 | 75.67 | 65.60 |
| 3D U-Net | 100 | 0 | 93.48 | 84.38 | 76.48 |
| MT | 10 | 90 | 90.77 | 70.25 | 58.05 |
| UA-MT | 10 | 90 | 91.25 | 75.04 | 64.29 |
| CPS | 10 | 90 | 90.53 | 75.60 | 65.06 |
| ICT | 10 | 90 | 91.14 | 75.95 | 66.16 |
| URPC | 10 | 90 | 91.24 | 76.94 | 65.78 |
| BCP | 10 | 90 | 89.10 | 70.52 | 59.93 |
| Ours | 10 | 90 | 91.61 | 81.34 | 72.93 |
| MT | 20 | 80 | 91.46 | 77.47 | 67.28 |
| UA-MT | 20 | 80 | 91.49 | 77.10 | 67.56 |
| CPS | 20 | 80 | 91.41 | 78.87 | 69.57 |
| ICT | 20 | 80 | 91.12 | 79.89 | 70.90 |
| URPC | 20 | 80 | 90.74 | 78.93 | 69.01 |
| BCP | 20 | 80 | 90.55 | 71.84 | 61.10 |
| Ours | 20 | 80 | 91.96 | 81.88 | 73.82 |
Table 1 Comparison experiments on the ATM22 dataset/%
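Most of the semi-supervised baselines in Table 1 (MT, UA-MT, URPC) build on the mean-teacher scheme, in which the teacher's weights are an exponential moving average (EMA) of the student's. A minimal sketch of that update, with flat float lists standing in for parameter tensors (illustrative only, not this paper's implementation):

```python
def ema_update(teacher, student, alpha=0.99):
    """In-place EMA update of teacher weights from student weights:
    t_i <- alpha * t_i + (1 - alpha) * s_i.
    `alpha` close to 1 makes the teacher a slowly moving average."""
    for i, s in enumerate(student):
        teacher[i] = alpha * teacher[i] + (1.0 - alpha) * s
    return teacher
```

The slowly updated teacher then produces the (pseudo-)targets against which the student's predictions on unlabeled scans are made consistent.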
| Method | Labeled/% | Unlabeled/% | Dice | TD | BD |
|---|---|---|---|---|---|
| 3D U-Net | 10 | 0 | 86.77 | 59.87 | 48.32 |
| 3D U-Net | 20 | 0 | 90.97 | 66.25 | 54.40 |
| 3D U-Net | 100 | 0 | 92.30 | 79.89 | 71.43 |
| MT | 10 | 90 | 90.99 | 66.76 | 55.58 |
| UA-MT | 10 | 90 | 90.53 | 67.45 | 56.49 |
| CPS | 10 | 90 | 91.27 | 69.25 | 58.34 |
| ICT | 10 | 90 | 91.64 | 71.33 | 60.53 |
| URPC | 10 | 90 | 91.51 | 71.56 | 61.52 |
| BCP | 10 | 90 | 87.99 | 63.34 | 52.09 |
| Ours | 10 | 90 | 91.83 | 73.36 | 63.01 |
| MT | 20 | 80 | 91.66 | 70.17 | 59.83 |
| UA-MT | 20 | 80 | 91.60 | 71.46 | 61.01 |
| CPS | 20 | 80 | 92.24 | 73.17 | 62.41 |
| ICT | 20 | 80 | 92.27 | 74.56 | 64.80 |
| URPC | 20 | 80 | 91.63 | 73.59 | 64.46 |
| BCP | 20 | 80 | 91.05 | 70.34 | 59.54 |
| Ours | 20 | 80 | 92.34 | 75.80 | 66.02 |
Table 2 Comparison experiments on the BAS dataset/%
| Method | Labeled/% | Unlabeled/% | Dice | TD | BD |
|---|---|---|---|---|---|
| 3D U-Net | 10 | 0 | 80.61 | 49.22 | 40.37 |
| 3D U-Net | 20 | 0 | 81.42 | 52.77 | 42.85 |
| 3D U-Net | 100 | 0 | 83.78 | 60.56 | 48.99 |
| MT | 10 | 90 | 82.78 | 49.19 | 40.46 |
| UA-MT | 10 | 90 | 83.11 | 53.81 | 44.53 |
| CPS | 10 | 90 | 82.51 | 54.23 | 43.92 |
| ICT | 10 | 90 | 82.96 | 55.56 | 45.31 |
| URPC | 10 | 90 | 83.92 | 55.44 | 45.81 |
| BCP | 10 | 90 | 82.80 | 52.12 | 42.59 |
| Ours | 10 | 90 | 83.87 | 58.98 | 48.63 |
| MT | 20 | 80 | 83.64 | 56.95 | 46.56 |
| UA-MT | 20 | 80 | 83.40 | 57.22 | 46.73 |
| CPS | 20 | 80 | 82.78 | 55.81 | 45.61 |
| ICT | 20 | 80 | 83.63 | 58.80 | 47.92 |
| URPC | 20 | 80 | 83.57 | 58.43 | 47.91 |
| BCP | 20 | 80 | 82.84 | 52.53 | 43.32 |
| Ours | 20 | 80 | 83.88 | 59.40 | 48.54 |
Table 3 Comparison experiments on a private real-world dataset/%
Fig. 4 Visualization of the segmentation results produced by different methods on the ATM22 dataset ((a) Image; (b) Ground truth; (c) Ours; (d) Sup only; (e) MT; (f) UA-MT; (g) CPS; (h) ICT; (i) URPC; (j) BCP)
| Dataset | Labeled/% | Unlabeled/% | Baseline | Complicated regions | Simple regions | Dice/% | TD/% | BD/% |
|---|---|---|---|---|---|---|---|---|
| ATM22 | 10 | 0 | | | | 85.76 | 67.64 | 56.18 |
| ATM22 | 10 | 90 | √ | | | 89.84 | 73.98 | 63.35 |
| ATM22 | 10 | 90 | √ | √ | | 91.69 | 78.97 | 69.31 |
| ATM22 | 10 | 90 | √ | | √ | 91.86 | 80.56 | 71.84 |
| ATM22 | 10 | 90 | √ | √ | √ | 91.61 | 81.34 | 72.93 |
| BAS | 10 | 0 | | | | 86.77 | 59.87 | 48.32 |
| BAS | 10 | 90 | √ | | | 91.16 | 70.06 | 59.38 |
| BAS | 10 | 90 | √ | √ | | 91.89 | 72.60 | 62.05 |
| BAS | 10 | 90 | √ | | √ | 91.96 | 72.73 | 62.63 |
| BAS | 10 | 90 | √ | √ | √ | 91.83 | 73.36 | 63.01 |
Table 4 Results of the ablation study of each component on the ATM22 and BAS datasets
Fig. 5 Visualization of the complicated regions (the numbers give the voxel counts in the complicated regions) ((a) Ground truth; (b) 2 000 iterations; (c) 10 000 iterations; (d) 20 000 iterations)
Fig. 7 Visualization of the ablation experiment results ((a) Image; (b) Ground truth; (c) Baseline; (d) Baseline+Complicated; (e) Baseline+Simple; (f) Baseline+Complicated+Simple)
| Dataset | Labeled/% | Unlabeled/% | Method | Dice/% | TD/% | BD/% |
|---|---|---|---|---|---|---|
| ATM22 | 10 | 0 | Fixed Threshold | 89.87 | 71.84 | 61.15 |
| ATM22 | 10 | 90 | Entropy | 91.60 | 79.30 | 70.03 |
| ATM22 | 10 | 90 | Prototype | 91.97 | 78.52 | 69.08 |
| ATM22 | 10 | 90 | Ours | 91.61 | 81.34 | 72.93 |
| BAS | 10 | 0 | Fixed Threshold | 85.34 | 50.87 | 40.78 |
| BAS | 10 | 90 | Entropy | 91.26 | 70.87 | 60.47 |
| BAS | 10 | 90 | Prototype | 91.48 | 71.24 | 60.21 |
| BAS | 10 | 90 | Ours | 91.83 | 73.36 | 63.01 |
Table 5 Comparison results of different thresholding measures
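Table 5 contrasts a fixed confidence threshold against entropy-based, prototype-based, and the proposed dynamic thresholding for separating easy and hard sub-regions. As a rough illustration of the idea (an assumed linear schedule and illustrative function names, not the paper's actual module), a dynamic threshold can tighten the confidence cutoff as training proceeds, so fewer voxels are treated as "simple" once the model matures:

```python
def dynamic_threshold(iteration, max_iter, t_min=0.60, t_max=0.95):
    """One plausible schedule: ramp the confidence cutoff linearly
    from t_min to t_max over the course of training."""
    frac = min(max(iteration / max_iter, 0.0), 1.0)
    return t_min + (t_max - t_min) * frac

def split_regions(confidences, threshold):
    """Partition voxel indices into simple (confidence >= threshold)
    and complicated (confidence < threshold) sub-regions, which can
    then be optimized under different constraints."""
    simple = [i for i, c in enumerate(confidences) if c >= threshold]
    complicated = [i for i, c in enumerate(confidences) if c < threshold]
    return simple, complicated
```

Early in training most voxels fall in the "simple" set; as the cutoff rises, only high-confidence voxels remain there, while ambiguous peripheral-bronchiole voxels concentrate in the "complicated" set.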