Journal of Graphics ›› 2023, Vol. 44 ›› Issue (2): 201-215. DOI: 10.11996/JG.j.2095-302X.2023020201
Received: 2022-05-21
Accepted: 2022-09-26
Online: 2023-04-30
Published: 2023-05-01
Contact: WU Xiao-qun (1984-), associate professor, Ph.D. Her main research interests cover computer graphics, digital geometry processing, and image processing.
About the author: YANG Liu (1998-), master's student. Her main research interests cover computer graphics, digital geometry processing, and image processing. E-mail: yliu112825@163.com
YANG Liu1,2, WU Xiao-qun1,2
Abstract:
3D shape completion, which aims to infer a complete shape from partially missing shape data, is one of the fundamental tasks in computer graphics and computer vision and has a wide range of applications. This paper surveys existing deep-learning-based 3D shape completion algorithms and, according to the form of descriptor used, divides them into two categories: methods based on 2D shape descriptors and methods based on 3D shape descriptors. The former project the 3D model into a 2D space for feature extraction and then recover the complete model, and include image-based and depth-map-based completion methods. The latter complete the model directly on a 3D representation and can be further divided, by the representation employed, into voxel-based, point-cloud-based, and implicit-function-based methods. This paper also summarizes the datasets and evaluation metrics used by existing deep-learning-based completion algorithms, analyzes and discusses the open problems, and outlines directions for future research.
YANG Liu, WU Xiao-qun. 3D shape completion via deep learning: a method survey[J]. Journal of Graphics, 2023, 44(2): 201-215.
Name | Source | #Cameras | #Categories | #Models |
---|---|---|---|---|
PCN dataset | PCN | 8 | 8 | 30 974 |
Completion3D | TopNet | 1 | 8 | 29 774 |
PF-Net dataset | PF-Net | - | 13 | 14 473 |
ShapeNet-ViPC | ViPC | 24 | 13 | 38 328 |
MVP dataset | VRCNet | 26 | 16 | 140 000 |
Table 1 Incomplete shape datasets derived from ShapeNet
Method | Year | Key idea | Requires complete ground-truth models | Limitations | Datasets |
---|---|---|---|---|---|
MVCN | 2019 | Multi-view completion | Yes | Needs complete models for training | ShapeNet, KITTI |
MVCI | 2020 | Multi-view consistent inference | Yes | Limited completion accuracy | ShapeNet, KITTI |
Weakly-supervised | 2020 | Weakly supervised completion | No | Misses fine details | ShapeNet |
ViPC | 2021 | View-guided information | Yes | - | ShapeNet-ViPC |
Table 2 2D shape descriptor-based completion methods
Method | Year | Key idea | Strengths | Limitations | Datasets |
---|---|---|---|---|---|
3D-EPN | 2017 | Voxel completion | Captures shape information | Low-resolution output | ShapeNet |
3D-FCN | 2017 | Local geometry refinement | Refines local geometry | Exponential data growth | ShapeNet |
Mesh R-CNN | 2019 | Refinement via edge convolution on mesh vertices | Improved edge awareness | - | ShapeNet |
O-CNNs | 2020 | Voxels converted to octrees | More robust completion | - | ShapeNet |
GR-Net | 2020 | Weighted convolution on grid cells | Spatially aware features | Sensitive to sensor noise | PCN dataset, TopNet dataset, KITTI |
CarveNet | 2021 | Point-wise convolution | Fewer redundant points | High time and space complexity | ShapeNet, KITTI |
VE-PCN | 2021 | Voxels combined with edge information | High-resolution reconstruction | - | PCN dataset, TopNet dataset |
Table 3 Voxel-based completion methods
Method | Year | Key idea | Strengths | Limitations | Datasets |
---|---|---|---|---|---|
PCN | 2018 | MLP + folding decoder | Captures spatial geometric information | Low-resolution output | ShapeNet, PCN dataset |
TopNet | 2019 | Tree-structured decoder | Addresses topological/structural consistency of point clouds | Exponential data growth | ShapeNet, TopNet dataset |
PMP-Net | 2021 | Establishes point-to-point correspondences | - | - | - |
SA-Net | 2020 | Skip-attention decoder | - | High time and space complexity | PCN dataset, KITTI |
CR-Net | 2020 | Cascaded refinement | - | Fixed number of output points | PCN dataset, TopNet dataset |
LSP | 2020 | Shape priors | Preserves local geometric details | Needs complete point clouds for supervised training | PCN dataset, TopNet dataset |
SoftPoolNet | 2020 | Soft pooling | - | - | ShapeNet, KITTI |
ASHF-Net | 2021 | Adaptive sampling and hierarchical folding | - | - | ShapeNet, KITTI |
VRCNet | 2021 | Probabilistic modeling | - | - | MVP dataset |
AtlasNet | 2018 | Parametric surface mapping | Avoids memory issues when generating shapes at arbitrary resolutions | - | ShapeNet |
MSN | 2020 | Estimates parametric surfaces | Generates uniformly distributed point clouds while preserving local details | - | ShapeNet |
SAUM | 2020 | Two-branch symmetry awareness | Higher resolution | - | PCN dataset, TopNet dataset, KITTI |
Table 4 PointNet-based methods for point cloud completion
Method | Year | Key idea | Strengths | Datasets |
---|---|---|---|---|
PF-Net | 2020 | Fractal geometry | Preserves local features | ShapeNet, PF-Net dataset |
NSFA | 2020 | Separated feature aggregation | Latent shape prediction | ShapeNet, KITTI |
SK-PCN | 2020 | Skeletal point displacement | Preserves shape topology and local details | ShapeNet |
Table 5 PointNet++-based methods for point cloud completion
Fig. 5 Edge convolution network[38] ((a) Computing an edge feature eij from a point pair xi and xj; hΘ() is instantiated as a fully connected layer whose learnable parameters are its associated weights; (b) The EdgeConv operation: the output is obtained by computing the edge features of all edges emanating from each connected vertex and aggregating them into the new feature x')
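The EdgeConv operation in Fig. 5 can be sketched in a few lines of numpy. This is a minimal illustration, not the DGCNN implementation: the shared MLP hΘ is replaced by a single hypothetical weight matrix `theta` with a ReLU, and neighbors are found by brute-force k-NN.

```python
import numpy as np

def knn(points, k):
    """Indices of the k nearest neighbors of each point (excluding itself)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]  # (N, k)

def edge_conv(points, theta, k=4):
    """Minimal EdgeConv: edge feature ReLU(theta([x_i, x_j - x_i])),
    max-aggregated over the k edges emanating from each point.

    points: (N, 3) array; theta: (6, C) weight matrix standing in for hΘ.
    Returns the new per-point features x' of shape (N, C).
    """
    idx = knn(points, k)                              # (N, k) neighbor indices
    xi = np.repeat(points[:, None, :], k, axis=1)     # (N, k, 3) center copies
    xj = points[idx]                                  # (N, k, 3) neighbors
    edge_in = np.concatenate([xi, xj - xi], axis=-1)  # (N, k, 6) edge inputs
    edge_feat = np.maximum(edge_in @ theta, 0.0)      # ReLU in place of hΘ
    return edge_feat.max(axis=1)                      # aggregate over edges
```

Because the edge input concatenates the center coordinate xi with the relative offset xj - xi, the layer sees both global position and local neighborhood shape, which is what gives EdgeConv its edge awareness.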
Method | Year | Key idea | Strengths | Limitations | Datasets |
---|---|---|---|---|---|
ECG | 2020 | Point-edge awareness | Edge recovery | Misjudges smooth edges | ShapeNetPart, TopNet dataset |
DCG | 2019 | Edge-aware point features | Detail recovery | Ignores discontinuous edges | ShapeNet Core |
GGD | 2021 | Edge convolution after coarse completion | Smooth results | Redundant features | PCN dataset, KITTI, Pandar40 |
PRSCN | 2021 | Hierarchically combined features | Less redundant information | - | ShapeNet |
DeCo | 2021 | Encodes combined local and global information | Unites local and global cues | - | ShapeNet |
Table 6 Graph convolutional network-based methods for point cloud completion
Method | Year | Key idea | Strengths | Limitations | Datasets |
---|---|---|---|---|---|
RL-GAN-Net | 2019 | Reinforcement learning | Real-time completion | Limited accuracy | ShapeNet |
SpareNet | 2021 | Differentiable point rendering | Preserves local geometric details | - | ShapeNet, KITTI |
UPCC-AD | 2019 | Unsupervised learning | Unpaired point cloud completion | Loses details | ScanNet, Matterport3D, KITTI |
ShapeInversion | 2021 | GAN inversion | Preserves consistency | Time-consuming | ShapeNet, KITTI, ScanNet, Matterport3D |
Cycle4Completion | 2021 | Cycle transformation | - | Time-consuming | ShapeNet, KITTI |
SLS | 2022 | Unified latent space | Enhanced structure | - | ShapeNet, KITTI |
Table 7 Generative adversarial network-based methods for point cloud completion
Method | Year | Key idea | Strengths | Limitations | Datasets |
---|---|---|---|---|---|
Cycle4Completion | 2021 | Cycle transformation | Preserves consistency | Time-consuming | ShapeNet, KITTI |
SLS | 2022 | Unified latent space | Enhanced structure | - | ShapeNet, KITTI |
PoinTr | 2021 | Local geometry awareness | Strengthens local geometric relations | - | ShapeNet-55, ShapeNet-34 |
SnowflakeNet | 2021 | Point splitting generation | Strengthens contextual structural relations | Memory redundancy | PCN dataset |
Table 8 Transformer-based methods for point cloud completion
Method | Avg | Airplane | Cabinet | Car | Chair | Lamp | Sofa | Table | Boat |
---|---|---|---|---|---|---|---|---|---|
3D-EPN | 20.14 | 13.16 | 21.80 | 20.30 | 18.81 | 25.76 | 21.89 | 21.76 | 17.64 |
PCN | 18.22 | 9.79 | 22.70 | 12.43 | 25.14 | 22.72 | 20.26 | 20.27 | 11.73 |
AtlasNet | 17.77 | 10.36 | 23.40 | 13.40 | 24.16 | 20.24 | 20.82 | 17.52 | 11.62 |
TopNet | 14.25 | 7.32 | 18.77 | 12.88 | 19.82 | 14.60 | 16.29 | 14.89 | 8.82 |
SoftPoolNet | 11.90 | 4.89 | 18.86 | 10.17 | 15.22 | 12.34 | 14.87 | 11.84 | 6.48 |
SA-Net | 11.22 | 5.27 | 14.45 | 7.78 | 13.67 | 13.53 | 14.22 | 11.75 | 8.84 |
GR-Net | 10.64 | 6.13 | 16.90 | 8.27 | 12.23 | 10.22 | 14.93 | 10.08 | 5.86 |
MSN | 9.60 | 5.62 | 11.93 | 8.63 | 11.64 | 10.30 | 13.18 | 9.65 | 5.88 |
PMP-Net | 9.23 | 3.99 | 14.70 | 8.55 | 10.21 | 9.27 | 12.43 | 8.51 | 5.77 |
CR-Net | 9.21 | 3.38 | 13.17 | 8.31 | 10.62 | 10.00 | 12.86 | 9.16 | 5.80 |
VE-PCN | 8.10 | 3.83 | 12.74 | 7.86 | 8.66 | 7.24 | 11.47 | 7.88 | 4.75 |
VRCNet | 8.23 | 3.59 | 12.06 | 7.68 | 9.37 | 8.76 | 11.22 | 7.65 | 5.48 |
SnowflakeNet | 7.60 | 3.48 | 11.09 | 6.90 | 8.75 | 8.42 | 10.15 | 6.46 | 5.32 |
SpareNet | 7.59 | 3.97 | 11.69 | 6.60 | 8.90 | 7.36 | 11.07 | 6.06 | 5.13 |
Table 9 Quantitative comparison for point cloud completion on eight object categories of the PCN dataset benchmark
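The per-category numbers above are completion errors (lower is better); benchmarks of this kind typically report the Chamfer Distance (CD) between the predicted and ground-truth point clouds, scaled by a constant factor. Exact conventions (L1 vs. L2, squared vs. unsquared) vary between papers; a minimal numpy sketch of one common symmetric, squared-L2, mean-reduced variant:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    For each point, take the squared distance to its nearest neighbor in
    the other set; average both directions and sum them.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1) ** 2  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Identical point sets give a CD of zero; the metric needs no point-to-point correspondence, which is why it is the standard loss and evaluation measure for unordered point clouds.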
Method | Avg | Airplane | Cabinet | Car | Chair | Lamp | Sofa | Table | Boat |
---|---|---|---|---|---|---|---|---|---|
3D-EPN | 14.21 | 7.38 | 20.95 | 17.64 | 12.48 | 13.90 | 12.63 | 13.52 | 15.21 |
PCN | 9.64 | 5.50 | 22.70 | 10.63 | 8.70 | 11.00 | 11.34 | 11.68 | 8.59 |
AtlasNet | 10.85 | 6.37 | 11.94 | 10.10 | 12.06 | 12.37 | 12.99 | 10.33 | 10.61 |
TopNet | 12.15 | 7.61 | 13.31 | 10.90 | 13.82 | 14.44 | 14.78 | 11.12 | 11.12 |
SoftPoolNet | 8.31 | 4.76 | 10.29 | 7.63 | 11.23 | 8.97 | 10.08 | 7.13 | 6.38 |
SA-Net | 8.24 | 6.18 | 9.11 | 5.56 | 8.94 | 7.83 | 9.98 | 9.94 | 7.23 |
GR-Net | 8.83 | 6.45 | 10.37 | 9.45 | 9.41 | 7.96 | 10.51 | 8.44 | 8.04 |
MSN | 8.76 | 5.87 | 10.85 | 9.50 | 9.67 | 7.39 | 10.68 | 8.61 | 7.51 |
PMP-Net | 8.73 | 5.56 | 11.24 | 9.64 | 9.51 | 6.95 | 10.83 | 8.72 | 7.25 |
CR-Net | 8.51 | 4.79 | 9.97 | 8.31 | 9.49 | 8.94 | 10.69 | 7.81 | 5.77 |
VE-PCN | 8.10 | 3.83 | 12.74 | 7.86 | 8.66 | 7.24 | 11.47 | 7.88 | 4.75 |
VRCNet | 7.90 | 4.08 | 10.92 | 8.25 | 8.37 | 7.63 | 10.76 | 7.25 | 5.95 |
SnowflakeNet | 7.21 | 4.29 | 9.16 | 8.08 | 7.89 | 6.07 | 9.23 | 6.55 | 6.40 |
SpareNet | 7.23 | 4.16 | 9.18 | 7.63 | 7.53 | 7.03 | 9.53 | 6.35 | 6.48 |
Table 10 Quantitative comparison for point cloud completion on eight object categories of the Completion3D benchmark
Metric | AtlasNet | PCN | TopNet | MSN | GR-Net | NSFA | CR-Net | VE-PCN | SpareNet |
---|---|---|---|---|---|---|---|---|---|
Consistency | 0.700 | 1.557 | 0.568 | 1.951 | 0.313 | 0.391 | 0.582 | 12.630 | 0.249 |
Fidelity | 1.759 | 2.235 | 5.354 | 0.434 | 0.816 | 0.347 | 0.337 | 0.258 | 1.461 |
MMD | 2.108 | 1.336 | 0.636 | 2.259 | 0.568 | 0.426 | 0.394 | 0.372 | 0.368 |
Table 11 Quantitative comparison on the KITTI dataset in terms of consistency, fidelity and minimum matching distance (MMD)
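KITTI has no ground-truth complete shapes, which is why Table 11 uses indirect metrics: fidelity is commonly defined as the average distance from each point of the partial input to its nearest neighbor in the completed output (how well the observed scan is preserved), and MMD as the Chamfer Distance to the closest complete model from a reference set. A minimal sketch of the fidelity side, under that assumed definition:

```python
import numpy as np

def fidelity(partial, completed):
    """Mean distance from each input point to its nearest output point.

    partial: (N, 3) observed scan; completed: (M, 3) predicted shape.
    Low values mean the completion preserves the observed partial input.
    """
    d = np.linalg.norm(partial[:, None, :] - completed[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```

Note the measure is one-directional on purpose: it does not penalize the newly hallucinated points, only a failure to cover the observed ones.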
Category | Method | Year | Key idea | Strengths | Limitations |
---|---|---|---|---|---|
PointNet++-based | PU-Net | 2018 | Data-driven upsampling | Higher resolution | Cannot estimate the point distribution |
PointNet++-based | EC-Net | 2018 | Edge-aware consolidation | Handles edge regions | - |
PointNet++-based | PUGeo-Net | 2020 | Learns a normal-based local parameterization | Good results from low-quality input | Not applicable to incomplete datasets |
Graph-convolution-based | 3PU | 2019 | Progressive upsampling | Better detail preservation | - |
Graph-convolution-based | PU-EVA | 2021 | Edge-vector-based approximation | Arbitrary upsampling rates | Not applicable to unpaired datasets |
Graph-convolution-based | PU-GCN | 2021 | Stacked upsampling modules | Few parameters required | - |
GAN-based | PU-GAN | 2019 | Adversarial training | More uniform point distribution | Fixed number of output points |
Table 12 Point cloud upsampling methods
Method | CD (10-2) | Time (ms) |
---|---|---|
PU-Net | 5.56 | 10.04 |
3PU | 2.98 | 10.86 |
PU-GAN | 2.80 | 14.28 |
PU-GCN | 2.58 | 8.83 |
Table 13 Comparison of point cloud upsampling methods on the Visionair dataset in terms of CD and runtime
Fig. 7 Implicit function representation[59] ((a) Model query points; (b) 3D model; (c) Complete model)
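Implicit methods such as DeepSDF represent a shape as a function f(x) returning the signed distance of a query point x to the surface: negative inside, positive outside, with the complete surface recovered as the zero level set. A minimal sketch of this query-point view, with an analytic sphere standing in for the learned network (the network itself is omitted):

```python
import numpy as np

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the surface,
    positive outside. A learned network would replace this analytic form."""
    return np.linalg.norm(points - np.asarray(center), axis=-1) - radius

def classify(points, sdf):
    """Label query points by the sign of the implicit function."""
    values = sdf(points)
    return np.where(values < 0, "inside",
                    np.where(values > 0, "outside", "surface"))
```

In practice the query points are sampled densely around the partial input, and the completed mesh is extracted from the zero level set (e.g., with marching cubes), which is why these methods handle arbitrary topology.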
Method | Year | Key idea | Strengths | Limitations | Datasets |
---|---|---|---|---|---|
DeepSDF | 2019 | Shape-conditioned classifier | Handles complex topology | Requires substantial time and training data | ShapeNet |
IF-Net | 2020 | Classifies features | Preserves details | Only applied to texture completion | 3DBodyTex.v2 |
SA-IFN | 2021 | Combined with self-attention | Handles complex models | Only applicable to tooth models | Tooth model data |
Vaccine-style-net | 2020 | Continuous boundary function | Complete surface models | Indirect completion | ShapeNet |
ShapeFormer | 2022 | Transformer-based | Handles complex models | Time-consuming | ShapeNet |
PatchNets | 2020 | Patch-based training | Needs little training data | Lacks local details | ShapeNet |
DeepLS | 2020 | Locally learned SDFs | Lower memory footprint | Cannot reconstruct fine details | SketchUp |
Table 14 Implicit function-based completion methods
[1] | ENGEL J, SCHÖPS T, CREMERS D. LSD-SLAM: large-scale direct monocular SLAM[M]//Computer Vision - ECCV 2014. Cham: Springer International Publishing, 2014: 834-849. |
[2] | HOU J, DAI A, NIEßNER M. 3D-SIS: 3D semantic instance segmentation of RGB-D scans[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 4416-4425. |
[3] | BOUD A C, HANIFF D J, BABER C, et al. Virtual reality and augmented reality as a training tool for assembly tasks[C]// 1999 IEEE International Conference on Information Visualization. New York: IEEE Press, 1999: 32-36. |
[4] | CHANG A X, FUNKHOUSER T, GUIBAS L, et al. ShapeNet: an information-rich 3D model repository[EB/OL]. [2021-12-09]. https://arxiv.org/abs/1512.03012. |
[5] | YU X M, RAO Y M, WANG Z Y, et al. PoinTr: diverse point cloud completion with geometry-aware transformers[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 12478-12487. |
[6] | YUAN W T, KHOT T, HELD D, et al. PCN: point completion network[C]// 2018 International Conference on 3D Vision. New York: IEEE Press, 2018: 728-737. |
[7] | TCHAPMI L P, KOSARAJU V, REZATOFIGHI H, et al. TopNet: structural point cloud decoder[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 383-392. |
[8] | HUANG Z T, YU Y K, XU J W, et al. PF-net: point fractal network for 3D point cloud completion[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 7659-7667. |
[9] | ZHANG X C, FENG Y T, LI S Q, et al. View-guided point cloud completion[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 15885-15894. |
[10] | PAN L, CHEN X Y, CAI Z A, et al. Variational relational point completion network[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 8520-8529. |
[11] | WU Z R, SONG S R, KHOSLA A, et al. 3D ShapeNets: a deep representation for volumetric shapes[C]// 2015 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2015: 1912-1920. |
[12] |
GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237.
DOI URL |
[13] | GU J Y, MA W C, MANIVASAGAM S, et al. Weakly- supervised 3D shape completion in the wild[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 283-299. |
[14] | DAI A, CHANG A X, SAVVA M, et al. ScanNet: richly- annotated 3D reconstructions of indoor scenes[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 2432-2443. |
[15] |
ZHANG S L, LI S, HAO A M, et al. Point clou semantic scene completion from RGB-D images[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 3385-3393.
DOI URL |
[16] | HU T, HAN Z Z, SHRIVASTAVA A, et al. Render4Completion: synthesizing multi-view depth maps for 3D shape completion[C]// 2019 IEEE/CVF International Conference on Computer Vision Workshop. New York: IEEE Press, 2019: 4114-4122. |
[17] |
HU T, HAN Z Z, ZWICKER M. 3D shape completion with multi-view consistent inference[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 10997-11004.
DOI URL |
[18] | CHARLES R Q, HAO S, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 77-85. |
[19] | QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 5105-5114. |
[20] | DAI A, QI C R, NIEßNER M. Shape completion using 3D-encoder-predictor CNNs and shape synthesis[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 6545-6554. |
[21] | HAN X G, LI Z, HUANG H B, et al. High-resolution shape completion using deep neural networks for global structure and local geometry inference[C]// 2017 IEEE International Conference on Computer Vision. New York: IEEE Press, 2017: 85-93. |
[22] | GKIOXARI G, JOHNSON J, MALIK J. Mesh R-CNN[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2019: 9784-9794. |
[23] | WANG P S, LIU Y, TONG X. Deep octree-based CNNs with output-guided skip connections for 3D shape and scene completion[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. New York: IEEE Press, 2020: 1074-1081. |
[24] | XIE H Z, YAO H X, ZHOU S C, et al. GRNet: gridding residual network for dense point cloud completion[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 365-381. |
[25] | GUO Q, WANG Z J, JUEFEI-XU F, et al. CarveNet: carving point-block for complex 3D shape completion[EB/OL]. [2022-04-21]. https://arxiv.org/abs/2107.13452. |
[26] | WANG X G, ANG M H, LEE G H. Voxel-based network for shape completion by leveraging edge generation[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 13169-13178. |
[27] | WEN X, XIANG P, HAN Z Z, et al. PMP-net: point cloud completion by learning multi-step point moving paths[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 7439-7448. |
[28] | WEN X, LI T Y, HAN Z Z, et al. Point cloud completion by skip-attention network with hierarchical folding[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 1936-1945. |
[29] | WANG X G, ANG M H, LEE G H. Cascaded refinement network for point cloud completion[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 787-796. |
[30] | WANG X G, ANG M H, LEE G H. Point cloud completion by learning shape priors[C]// 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2020: 10719-10726. |
[31] | WANG Y D, TAN D J, NAVAB N, et al. SoftPoolNet: shape descriptor for point cloud completion and classification[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 70-85. |
[32] |
ZONG D M, SUN S L, ZHAO J. ASHF-net: adaptive sampling and hierarchical folding network for robust point cloud completion[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 3625-3632.
DOI URL |
[33] | GROUEIX T, FISHER M, KIM V G, et al. A papier-mâché approach to learning 3D surface generation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 216-224. |
[34] |
LIU M H, SHENG L, YANG S, et al. Morphing and sampling network for dense point cloud completion[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 11596-11603.
DOI URL |
[35] | SON H, KIM Y M. SAUM: symmetry-aware upsampling module for consistent point cloud completion[C]// Computer Vision - ACCV 2020: 15th Asian Conference on Computer Vision, Revised Selected Papers, Part I. New York: ACM, 2020: 158-174. |
[36] | ZHANG W X, YAN Q G, XIAO C X. Detail preserved point cloud completion via separated feature aggregation[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 512-528. |
[37] | NIE Y Y, LIN Y Q, HAN X G, et al. Skeleton-bridged point completion: from global inference to local adjustment[EB/OL]. [2022-04-21]. https://arxiv.org/abs/2010.07428. |
[38] | WANG Y, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 146. |
[39] |
PAN L. ECG: edge-aware point cloud completion with graph convolution[J]. IEEE Robotics and Automation Letters, 2020, 5(3): 4392-4398.
DOI URL |
[40] | WANG K Q, CHEN K, JIA K. Deep cascade generation on point sets[C]// The 28th International Joint Conference on Artificial Intelligence. New York: ACM, 2019: 3726-3732. |
[41] |
SHI J Q, XU L Y, HENG L, et al. Graph-guided deformation for point cloud completion[J]. IEEE Robotics and Automation Letters, 2021, 6(4): 7081-7088.
DOI URL |
[42] | ZHU L P, WANG B Y, TIAN G Y, et al. Towards point cloud completion: point rank sampling and cross-cascade graph CNN[J]. Neurocomputing, 2021, 461: 1-16. |
[43] | ALLIEGRO A, VALSESIA D, FRACASTORO G, et al. Denoise and contrast for category agnostic shape completion[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 4627-4636. |
[44] |
GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144.
DOI URL |
[45] | SARMAD M, LEE H J, KIM Y M. RL-GAN-net: a reinforcement learning agent controlled GAN network for real-time point cloud shape completion[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 5891-5900. |
[46] | XIE C L, WANG C X, ZHANG B, et al. Style-based point generator with adversarial rendering for point cloud completion[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 4617-4626. |
[47] | CHEN X L, CHEN B Q, MITRA N J. Unpaired point cloud completion on real scans using adversarial training[EB/OL]. [2022-04-21]. https://arxiv.org/abs/1904.00069. |
[48] | ZHANG J Z, CHEN X Y, CAI Z A, et al. Unsupervised 3D shape completion through GAN inversion[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 1768-1777. |
[49] | WEN X, HAN Z Z, CAO Y P, et al. Cycle4Completion: unpaired point cloud completion using cycle transformation with missing region coding[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 13075-13084. |
[50] | CAI Y J, LIN K Y, ZHANG C, et al. Learning a structured latent space for unsupervised point cloud completion[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5533-5543. |
[51] | XIANG P, WEN X, LIU Y S, et al. SnowflakeNet: point cloud completion by snowflake point deconvolution with skip-transformer[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5479-5489. |
[52] | YU L Q, LI X Z, FU C W, et al. PU-net: point cloud upsampling network[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 2790-2799. |
[53] | YU L Q, LI X Z, FU C W, et al. EC-net: an edge-aware point set consolidation network[M]//Computer Vision - ECCV 2018. Cham: Springer International Publishing, 2018: 398-414. |
[54] | QIAN Y, HOU J H, KWONG S, et al. PUGeo-net: a geometry-centric network for 3D point cloud upsampling[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 752-769. |
[55] | WANG Y F, WU S H, HUANG H, et al. Patch-based progressive 3D point set upsampling[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 5951-5960. |
[56] | QIAN G C, ABUALSHOUR A, LI G H, et al. PU-GCN: point cloud upsampling using graph convolutional networks[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 11678-11687. |
[57] | LUO L Q, TANG L L, ZHOU W Y, et al. PU-EVA: an edge-vector based approximation solution for flexible-scale point cloud upsampling[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 16188-16197. |
[58] | LI R H, LI X Z, FU C W, et al. PU-GAN: a point cloud upsampling adversarial network[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2019: 7202-7211. |
[59] | PARK J J, FLORENCE P, STRAUB J, et al. DeepSDF: learning continuous signed distance functions for shape representation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 165-174. |
[60] | CHIBANE J, ALLDIECK T, PONS-MOLL G. Implicit functions in feature space for 3D shape reconstruction and completion[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 6968-6979. |
[61] | PING Y H, WEI G D, YANG L, et al. Self-attention implicit function networks for 3D dental data completion[J]. Computer Aided Geometric Design, 2021, 90: 1-12. |
[62] | YAN W, ZHANG R N, WANG J, et al. Vaccine-style-net: point cloud completion in implicit continuous function space[C]// The 28th ACM International Conference on Multimedia. New York: ACM, 2020: 2067-2075. |
[63] | YAN X G, LIN L Q, MITRA N J, et al. ShapeFormer: transformer-based shape completion via sparse representation[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 6229-6239. |
[64] | TRETSCHK E, TEWARI A, GOLYANIK V, et al. PatchNets: patch-based generalizable deep implicit 3D shape representations[C]// Computer Vision - ECCV 2020: 16th European Conference. New York: ACM, 2020: 293-309. |
[65] | CHABRA R, LENSSEN J E, ILG E, et al. Deep local shapes: learning local SDF priors for detailed 3D reconstruction[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 608-625. |