Journal of Graphics ›› 2023, Vol. 44 ›› Issue (2): 201-215.DOI: 10.11996/JG.j.2095-302X.2023020201
• Review •
YANG Liu1,2, WU Xiao-qun1,2
Received: 2022-05-21
Accepted: 2022-09-26
Online: 2023-04-30
Published: 2023-05-01
Contact: WU Xiao-qun (1984-), associate professor, Ph.D. Her main research interests cover computer graphics, digital geometry processing, and image processing.
About author: YANG Liu (1998-), master student. Her main research interests cover computer graphics, digital geometry processing, and image processing. E-mail: yliu112825@163.com
YANG Liu, WU Xiao-qun. 3D shape completion via deep learning: a method survey[J]. Journal of Graphics, 2023, 44(2): 201-215.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2023020201
Name | Source | # Cameras | # Categories | # Models |
---|---|---|---|---|
PCN dataset | PCN | 8 | 8 | 30 974 |
Completion3D | TopNet | 1 | 8 | 29 774 |
PF-Net dataset | PF-Net | - | 13 | 14 473 |
ShapeNet-ViPC | ViPC | 24 | 13 | 38 328 |
MVP dataset | VRCNet | 26 | 16 | 140 000 |
Table 1 Incomplete-shape datasets built on ShapeNet
Method | Year | Idea | Requires complete ground-truth models | Limitations | Datasets |
---|---|---|---|---|---|
MVCN | 2019 | Multi-view completion | Yes | Needs complete models for training | ShapeNet, KITTI |
MVCI | 2020 | Multi-view consistent inference | Yes | Limited completion accuracy | ShapeNet, KITTI |
Weakly-supervised | 2020 | Weakly supervised completion | No | Misses details | ShapeNet |
ViPC | 2021 | View-guided information | Yes | - | ShapeNet-ViPC |
Table 2 2D shape descriptor-based completion methods
Method | Year | Idea | Advantages | Limitations | Datasets |
---|---|---|---|---|---|
3D-EPN | 2017 | Voxel completion | Captures shape information | Low-resolution output | ShapeNet |
3D-FCN | 2017 | Local geometry refinement | Refines local geometry | Exponential data growth | ShapeNet |
Mesh R-CNN | 2019 | Refinement via vertex/edge convolution on meshes | Better edge awareness | - | ShapeNet |
O-CNNs | 2020 | Converts voxels to an octree | More robust completion | - | ShapeNet |
GRNet | 2020 | Gridding with weighted convolution | Spatially aware features | Sensitive to sensor noise | PCN dataset, TopNet dataset, KITTI |
CarveNet | 2021 | Point-wise convolution | Fewer redundant points | High time and space complexity | ShapeNet, KITTI |
VE-PCN | 2021 | Voxels combined with edge information | High-resolution reconstruction | - | PCN dataset, TopNet dataset |
Table 3 Voxel-based completion methods
Method | Year | Idea | Advantages | Limitations | Datasets |
---|---|---|---|---|---|
PCN | 2018 | MLP + folding-based decoding | Captures spatial geometric information | Low-resolution output | ShapeNet, PCN dataset |
TopNet | 2019 | Tree-structured decoding | Addresses topological/structural consistency of point clouds | Exponential data growth | ShapeNet, TopNet dataset |
PMP-Net | 2021 | Learns point-to-point moving paths | - | - | - |
SA-Net | 2020 | Skip-attention decoding | - | High time and space complexity | PCN dataset, KITTI |
CR-Net | 2020 | Cascaded refinement | - | Fixed number of output points | PCN dataset, TopNet dataset |
LSP | 2020 | Shape priors | Preserves local geometric details | Requires complete point clouds for supervised training | PCN dataset, TopNet dataset |
SoftPoolNet | 2020 | Soft pooling | - | - | ShapeNet, KITTI |
ASHF-Net | 2021 | Adaptive sampling and hierarchical folding | - | - | ShapeNet, KITTI |
VRCNet | 2021 | Probabilistic modeling | - | - | MVP dataset |
AtlasNet | 2018 | Parametric surface mapping | Arbitrary-resolution shapes without excessive memory use | - | ShapeNet |
MSN | 2020 | Estimates parametric surfaces | Uniformly distributed points while keeping local details | - | ShapeNet |
SAUM | 2020 | Two-branch symmetry-aware design | Higher resolution | - | PCN dataset, TopNet dataset, KITTI |
Table 4 PointNet-based methods for point cloud completion
Method | Year | Idea | Advantages | Datasets |
---|---|---|---|---|
PF-Net | 2020 | Fractal geometry | Preserves local features | ShapeNet, PF-Net dataset |
NSFA | 2020 | Separated feature aggregation | Latent shape prediction | ShapeNet, KITTI |
SK-PCN | 2020 | Skeleton point displacement | Preserves shape topology and local details | ShapeNet |
Table 5 PointNet++-based methods for point cloud completion
Fig. 5 Edge convolution network[38] ((a) Computing an edge feature eij from a point pair xi and xj; hΘ(·) is instantiated as a fully connected layer whose weights are the learnable parameters. (b) The EdgeConv operation: the output for each vertex is obtained by aggregating the edge features associated with all edges emanating from that vertex)
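The edge-feature computation described in Fig. 5 can be sketched in a few lines of NumPy. This is a minimal illustration only, with assumptions not taken from the paper: brute-force k-NN, a single random linear layer standing in for hΘ, and ReLU before the max aggregation; it is not the DGCNN implementation of reference [38].

```python
import numpy as np

def edge_conv(points, k=4, out_dim=8, seed=0):
    """Minimal EdgeConv sketch: for each point x_i, gather its k nearest
    neighbours x_j, form edge features from [x_i, x_j - x_i] through a
    random linear layer (stand-in for h_theta), then max-aggregate."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    W = rng.standard_normal((2 * d, out_dim))      # stand-in for h_theta
    # pairwise squared distances -> k nearest neighbours (excluding self)
    diff = points[:, None, :] - points[None, :, :]
    dist = (diff ** 2).sum(-1)
    np.fill_diagonal(dist, np.inf)
    nn = np.argsort(dist, axis=1)[:, :k]           # (n, k) neighbour indices
    out = np.empty((n, out_dim))
    for i in range(n):
        xj = points[nn[i]]                                       # (k, d)
        edges = np.concatenate(
            [np.tile(points[i], (k, 1)), xj - points[i]], axis=1)  # (k, 2d)
        out[i] = np.maximum(0.0, edges @ W).max(axis=0)  # ReLU, max over edges
    return out

feats = edge_conv(np.random.default_rng(1).standard_normal((32, 3)))
print(feats.shape)  # (32, 8)
```

Stacking several such layers, with learned weights in place of the random matrix, yields per-point features that are aware of local edge structure, which is what the graph-based completion methods below exploit.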
Method | Year | Idea | Advantages | Limitations | Datasets |
---|---|---|---|---|---|
ECG | 2020 | Point edge awareness | Edge recovery | Misjudges smooth edges | ShapeNetPart, TopNet dataset |
DCG | 2019 | Edge-aware point features | Detail recovery | Ignores discontinuous edges | ShapeNet Core |
GGD | 2021 | Edge convolution after coarse completion | Smooth results | Redundant features | PCN dataset, KITTI, Pandar40 |
PRSCN | 2021 | Hierarchical feature combination | Less redundant information | - | ShapeNet |
DeCo | 2021 | Joint local-global input encoding | Combines local and global information | - | ShapeNet |
Table 6 Graph convolutional network-based methods for point cloud completion
Method | Year | Idea | Advantages | Limitations | Datasets |
---|---|---|---|---|---|
RL-GAN-Net | 2019 | Reinforcement learning | Real-time completion | Limited accuracy | ShapeNet |
SpareNet | 2021 | Differentiable point rendering | Preserves local geometric details | - | ShapeNet, KITTI |
UPCC-AD | 2019 | Unsupervised learning | Unpaired point completion | Loses details | ScanNet, Matterport3D, KITTI |
ShapeInversion | 2021 | GAN inversion | Preserves consistency | Time-consuming | ShapeNet, KITTI, ScanNet, Matterport3D |
Cycle4Completion | 2021 | Cycle transformation | - | Time-consuming | ShapeNet, KITTI |
SLS | 2022 | Unified latent space | Enhanced structure | - | ShapeNet, KITTI |
Table 7 Generative adversarial network-based methods for point cloud completion
Method | Year | Idea | Advantages | Limitations | Datasets |
---|---|---|---|---|---|
Cycle4Completion | 2021 | Cycle transformation | Preserves consistency | Time-consuming | ShapeNet, KITTI |
SLS | 2022 | Unified latent space | Enhanced structure | - | ShapeNet, KITTI |
PoinTr | 2021 | Local geometry awareness | Strengthens local geometric relations | - | ShapeNet-55, ShapeNet-34 |
SnowflakeNet | 2021 | Snowflake point splitting | Strengthens contextual structural relations | Memory redundancy | PCN dataset |
Table 8 Transformer-based methods for point cloud completion
Method | Avg | Airplane | Cabinet | Car | Chair | Lamp | Sofa | Table | Boat |
---|---|---|---|---|---|---|---|---|---|
3D-EPN | 20.14 | 13.16 | 21.80 | 20.30 | 18.81 | 25.76 | 21.89 | 21.76 | 17.64 |
PCN | 18.22 | 9.79 | 22.70 | 12.43 | 25.14 | 22.72 | 20.26 | 20.27 | 11.73 |
AtlasNet | 17.77 | 10.36 | 23.40 | 13.40 | 24.16 | 20.24 | 20.82 | 17.52 | 11.62 |
TopNet | 14.25 | 7.32 | 18.77 | 12.88 | 19.82 | 14.60 | 16.29 | 14.89 | 8.82 |
SoftPoolNet | 11.90 | 4.89 | 18.86 | 10.17 | 15.22 | 12.34 | 14.87 | 11.84 | 6.48 |
SA-Net | 11.22 | 5.27 | 14.45 | 7.78 | 13.67 | 13.53 | 14.22 | 11.75 | 8.84 |
GRNet | 10.64 | 6.13 | 16.90 | 8.27 | 12.23 | 10.22 | 14.93 | 10.08 | 5.86 |
MSN | 9.60 | 5.62 | 11.93 | 8.63 | 11.64 | 10.30 | 13.18 | 9.65 | 5.88 |
PMP-Net | 9.23 | 3.99 | 14.70 | 8.55 | 10.21 | 9.27 | 12.43 | 8.51 | 5.77 |
CR-Net | 9.21 | 3.38 | 13.17 | 8.31 | 10.62 | 10.00 | 12.86 | 9.16 | 5.80 |
VE-PCN | 8.10 | 3.83 | 12.74 | 7.86 | 8.66 | 7.24 | 11.47 | 7.88 | 4.75 |
VRCNet | 8.23 | 3.59 | 12.06 | 7.68 | 9.37 | 8.76 | 11.22 | 7.65 | 5.48 |
SnowflakeNet | 7.60 | 3.48 | 11.09 | 6.90 | 8.75 | 8.42 | 10.15 | 6.46 | 5.32 |
SpareNet | 7.59 | 3.97 | 11.69 | 6.60 | 8.90 | 7.36 | 11.07 | 6.06 | 5.13 |
Table 9 Quantitative comparison for point cloud completion on eight object categories of the PCN dataset benchmark
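The scores in Tables 9 and 10 are Chamfer-distance values, where lower is better. As a reference point, a symmetric Chamfer distance between two point sets can be sketched as follows; papers differ in whether they use squared distances and how they scale the result, so this is one common convention rather than the exact benchmark formula:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (n,3) and q (m,3):
    mean squared nearest-neighbour distance from p to q, plus q to p."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (n, m) squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0, 0], [1, 0, 0]])
b = np.array([[0.0, 0, 0], [1, 0, 0]])
print(chamfer_distance(a, b))  # 0.0 for identical sets
```

Because the metric averages over nearest neighbours in both directions, it penalizes both missing geometry (input points with no nearby prediction) and spurious geometry (predicted points far from the target).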
Method | Avg | Airplane | Cabinet | Car | Chair | Lamp | Sofa | Table | Boat |
---|---|---|---|---|---|---|---|---|---|
3D-EPN | 14.21 | 7.38 | 20.95 | 17.64 | 12.48 | 13.90 | 12.63 | 13.52 | 15.21 |
PCN | 9.64 | 5.50 | 22.70 | 10.63 | 8.70 | 11.00 | 11.34 | 11.68 | 8.59 |
AtlasNet | 10.85 | 6.37 | 11.94 | 10.10 | 12.06 | 12.37 | 12.99 | 10.33 | 10.61 |
TopNet | 12.15 | 7.61 | 13.31 | 10.90 | 13.82 | 14.44 | 14.78 | 11.12 | 11.12 |
SoftPoolNet | 8.31 | 4.76 | 10.29 | 7.63 | 11.23 | 8.97 | 10.08 | 7.13 | 6.38 |
SA-Net | 8.24 | 6.18 | 9.11 | 5.56 | 8.94 | 7.83 | 9.98 | 9.94 | 7.23 |
GRNet | 8.83 | 6.45 | 10.37 | 9.45 | 9.41 | 7.96 | 10.51 | 8.44 | 8.04 |
MSN | 8.76 | 5.87 | 10.85 | 9.50 | 9.67 | 7.39 | 10.68 | 8.61 | 7.51 |
PMP-Net | 8.73 | 5.56 | 11.24 | 9.64 | 9.51 | 6.95 | 10.83 | 8.72 | 7.25 |
CR-Net | 8.51 | 4.79 | 9.97 | 8.31 | 9.49 | 8.94 | 10.69 | 7.81 | 5.77 |
VE-PCN | 8.10 | 3.83 | 12.74 | 7.86 | 8.66 | 7.24 | 11.47 | 7.88 | 4.75 |
VRCNet | 7.90 | 4.08 | 10.92 | 8.25 | 8.37 | 7.63 | 10.76 | 7.25 | 5.95 |
SnowflakeNet | 7.21 | 4.29 | 9.16 | 8.08 | 7.89 | 6.07 | 9.23 | 6.55 | 6.40 |
SpareNet | 7.23 | 4.16 | 9.18 | 7.63 | 7.53 | 7.03 | 9.53 | 6.35 | 6.48 |
Table 10 Quantitative comparison for point cloud completion on eight object categories of the Completion3D benchmark
Metric | AtlasNet | PCN | TopNet | MSN | GRNet | NSFA | CR-Net | VE-PCN | SpareNet |
---|---|---|---|---|---|---|---|---|---|
Consistency | 0.700 | 1.557 | 0.568 | 1.951 | 0.313 | 0.391 | 0.582 | 12.630 | 0.249 |
Fidelity | 1.759 | 2.235 | 5.354 | 0.434 | 0.816 | 0.347 | 0.337 | 0.258 | 1.461 |
MMD | 2.108 | 1.336 | 0.636 | 2.259 | 0.568 | 0.426 | 0.394 | 0.372 | 0.368 |
Table 11 Quantitative comparison on the KITTI dataset in terms of consistency, fidelity and minimum matching distance (MMD)
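KITTI scans have no complete ground-truth models, which is why Table 11 uses one-sided metrics such as fidelity: the average distance from each input point to its nearest neighbour in the completed output. A minimal sketch follows; the use of plain (unsquared) Euclidean distance is an assumption here, as conventions vary between papers:

```python
import numpy as np

def fidelity(partial, completed):
    """One-sided metric for settings without ground truth (e.g. KITTI):
    mean Euclidean distance from each input point to its nearest
    neighbour in the completed point cloud."""
    d2 = ((partial[:, None, :] - completed[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).mean()

partial = np.array([[0.0, 0, 0], [2, 0, 0]])
completed = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
print(fidelity(partial, completed))  # 0.0: every input point is preserved
```

A low fidelity score only certifies that the observed points survive in the output; it says nothing about the plausibility of the hallucinated regions, which is what consistency and MMD are meant to capture.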
Category | Method | Year | Idea | Advantages | Limitations |
---|---|---|---|---|---|
PointNet++-based | PU-Net | 2018 | Data-driven | Higher resolution | Cannot estimate the point distribution |
| EC-Net | 2018 | Added edge awareness | Handles edge regions | - |
| PUGeo-Net | 2020 | Learns a normal-based local parameterization | Good results from low-quality input | Not applicable to incomplete datasets |
Graph convolution-based | 3PU | 2019 | Progressive upsampling | Better detail preservation | - |
| PU-EVA | 2021 | Edge-vector-based | Arbitrary upsampling rates | Not applicable to unpaired datasets |
| PU-GCN | 2021 | Multiple upsampling modules | Few parameters needed | - |
GAN-based | PU-GAN | 2019 | Adversarial network | More uniform point distribution | Fixed number of output points |
Table 12 Point cloud upsampling methods
Method | CD (×10-2) | Time (ms) |
---|---|---|
PU-Net | 5.56 | 10.04 |
3PU | 2.98 | 10.86 |
PU-GAN | 2.80 | 14.28 |
PU-GCN | 2.58 | 8.83 |
Table 13 Comparison of point cloud upsampling methods by CD and runtime on the Visionair dataset
Method | Year | Idea | Advantages | Limitations | Datasets |
---|---|---|---|---|---|
DeepSDF | 2019 | Shape-conditioned classifier | Handles complex topology | Requires much training time and data | ShapeNet |
IF-Net | 2020 | Feature classification | Preserves details | Limited to texture completion | 3DBodyTex.v2 |
SA-IFN | 2021 | Combined with self-attention | Handles complex models | Limited to dental models | Dental model data |
Vaccine-style-net | 2020 | Continuous boundary function | Complete surface models | Indirect completion | ShapeNet |
ShapeFormer | 2022 | Transformer-based | Handles complex models | Time-consuming | ShapeNet |
PatchNets | 2020 | Patch-based training | No need for large datasets | Lacks local details | ShapeNet |
DeepLS | 2020 | Locally learned SDFs | Lower memory footprint | Cannot reconstruct fine details | SketchUp |
Table 14 Implicit function-based completion methods
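The implicit-function methods above represent a shape as a continuous field f(x) whose zero level set is the surface; DeepSDF, for instance, trains an MLP to approximate a signed distance function. A toy analytic SDF illustrates the query interface such networks learn; the sphere here is purely illustrative, not part of any of the surveyed methods:

```python
import numpy as np

def sphere_sdf(x, center=np.zeros(3), radius=1.0):
    """Analytic signed distance to a sphere: negative inside, zero on
    the surface, positive outside. Implicit-function networks such as
    DeepSDF learn an MLP that approximates a field like this from data."""
    return np.linalg.norm(x - center, axis=-1) - radius

pts = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
print(sphere_sdf(pts))  # [-1.  0.  1.]
```

Completion then amounts to fitting such a field to the partial observation and extracting the zero level set (e.g. with marching cubes), which is why these methods can produce watertight surfaces at arbitrary resolution but pay for it in query time.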
[1] | ENGEL J, SCHÖPS T, CREMERS D. LSD-SLAM: large-scale direct monocular SLAM[M]//Computer Vision - ECCV 2014. Cham: Springer International Publishing, 2014: 834-849. |
[2] | HOU J, DAI A, NIEßNER M. 3D-SIS: 3D semantic instance segmentation of RGB-D scans[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 4416-4425. |
[3] | BOUD A C, HANIFF D J, BABER C, et al. Virtual reality and augmented reality as a training tool for assembly tasks[C]// 1999 IEEE International Conference on Information Visualization. New York: IEEE Press, 1999: 32-36. |
[4] | CHANG A X, FUNKHOUSER T, GUIBAS L, et al. ShapeNet: an information-rich 3D model repository[EB/OL]. [2021-12-09]. https://arxiv.org/abs/1512.03012. |
[5] | YU X M, RAO Y M, WANG Z Y, et al. PoinTr: diverse point cloud completion with geometry-aware transformers[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 12478-12487. |
[6] | YUAN W T, KHOT T, HELD D, et al. PCN: point completion network[C]// 2018 International Conference on 3D Vision. New York: IEEE Press, 2018: 728-737. |
[7] | TCHAPMI L P, KOSARAJU V, REZATOFIGHI H, et al. TopNet: structural point cloud decoder[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 383-392. |
[8] | HUANG Z T, YU Y K, XU J W, et al. PF-net: point fractal network for 3D point cloud completion[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 7659-7667. |
[9] | ZHANG X C, FENG Y T, LI S Q, et al. View-guided point cloud completion[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 15885-15894. |
[10] | PAN L, CHEN X Y, CAI Z A, et al. Variational relational point completion network[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 8520-8529. |
[11] | WU Z R, SONG S R, KHOSLA A, et al. 3D ShapeNets: a deep representation for volumetric shapes[C]// 2015 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2015: 1912-1920. |
[12] | GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237. |
[13] | GU J Y, MA W C, MANIVASAGAM S, et al. Weakly-supervised 3D shape completion in the wild[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 283-299. |
[14] | DAI A, CHANG A X, SAVVA M, et al. ScanNet: richly-annotated 3D reconstructions of indoor scenes[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 2432-2443. |
[15] | ZHANG S L, LI S, HAO A M, et al. Point cloud semantic scene completion from RGB-D images[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 3385-3393. |
[16] | HU T, HAN Z Z, SHRIVASTAVA A, et al. Render4Completion: synthesizing multi-view depth maps for 3D shape completion[C]// 2019 IEEE/CVF International Conference on Computer Vision Workshop. New York: IEEE Press, 2019: 4114-4122. |
[17] | HU T, HAN Z Z, ZWICKER M. 3D shape completion with multi-view consistent inference[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 10997-11004. |
[18] | CHARLES R Q, HAO S, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 77-85. |
[19] | QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 5105-5114. |
[20] | DAI A, QI C R, NIEßNER M. Shape completion using 3D-encoder-predictor CNNs and shape synthesis[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 6545-6554. |
[21] | HAN X G, LI Z, HUANG H B, et al. High-resolution shape completion using deep neural networks for global structure and local geometry inference[C]// 2017 IEEE International Conference on Computer Vision. New York: IEEE Press, 2017: 85-93. |
[22] | GKIOXARI G, JOHNSON J, MALIK J. Mesh R-CNN[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2019: 9784-9794. |
[23] | WANG P S, LIU Y, TONG X. Deep octree-based CNNs with output-guided skip connections for 3D shape and scene completion[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. New York: IEEE Press, 2020: 1074-1081. |
[24] | XIE H Z, YAO H X, ZHOU S C, et al. GRNet: gridding residual network for dense point cloud completion[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 365-381. |
[25] | GUO Q, WANG Z J, JUEFEI-XU F, et al. CarveNet: carving point-block for complex 3D shape completion[EB/OL]. [2022-04-21]. https://arxiv.org/abs/2107.13452. |
[26] | WANG X G, ANG M H, LEE G H. Voxel-based network for shape completion by leveraging edge generation[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 13169-13178. |
[27] | WEN X, XIANG P, HAN Z Z, et al. PMP-net: point cloud completion by learning multi-step point moving paths[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 7439-7448. |
[28] | WEN X, LI T Y, HAN Z Z, et al. Point cloud completion by skip-attention network with hierarchical folding[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 1936-1945. |
[29] | WANG X G, ANG M H, LEE G H. Cascaded refinement network for point cloud completion[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 787-796. |
[30] | WANG X G, ANG M H, LEE G H. Point cloud completion by learning shape priors[C]// 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE Press, 2020: 10719-10726. |
[31] | WANG Y D, TAN D J, NAVAB N, et al. SoftPoolNet: shape descriptor for point cloud completion and classification[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 70-85. |
[32] | ZONG D M, SUN S L, ZHAO J. ASHF-net: adaptive sampling and hierarchical folding network for robust point cloud completion[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 3625-3632. |
[33] | GROUEIX T, FISHER M, KIM V G, et al. A papier-mâché approach to learning 3D surface generation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 216-224. |
[34] | LIU M H, SHENG L, YANG S, et al. Morphing and sampling network for dense point cloud completion[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 11596-11603. |
[35] | SON H, KIM Y M. SAUM: symmetry-aware upsampling module for consistent point cloud completion[C]// Computer Vision - ACCV 2020: 15th Asian Conference on Computer Vision, Revised Selected Papers, Part I. New York: ACM, 2020: 158-174. |
[36] | ZHANG W X, YAN Q G, XIAO C X. Detail preserved point cloud completion via separated feature aggregation[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 512-528. |
[37] | NIE Y Y, LIN Y Q, HAN X G, et al. Skeleton-bridged point completion: from global inference to local adjustment[EB/OL]. [2022-04-21]. https://arxiv.org/abs/2010.07428. |
[38] | WANG Y, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 146. |
[39] | PAN L. ECG: edge-aware point cloud completion with graph convolution[J]. IEEE Robotics and Automation Letters, 2020, 5(3): 4392-4398. |
[40] | WANG K Q, CHEN K, JIA K. Deep cascade generation on point sets[C]// The 28th International Joint Conference on Artificial Intelligence. New York: ACM, 2019: 3726-3732. |
[41] | SHI J Q, XU L Y, HENG L, et al. Graph-guided deformation for point cloud completion[J]. IEEE Robotics and Automation Letters, 2021, 6(4): 7081-7088. |
[42] | ZHU L P, WANG B Y, TIAN G Y, et al. Towards point cloud completion: point rank sampling and cross-cascade graph CNN[J]. Neurocomputing, 2021, 461: 1-16. |
[43] | ALLIEGRO A, VALSESIA D, FRACASTORO G, et al. Denoise and contrast for category agnostic shape completion[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 4627-4636. |
[44] | GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144. |
[45] | SARMAD M, LEE H J, KIM Y M. RL-GAN-net: a reinforcement learning agent controlled GAN network for real-time point cloud shape completion[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 5891-5900. |
[46] | XIE C L, WANG C X, ZHANG B, et al. Style-based point generator with adversarial rendering for point cloud completion[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 4617-4626. |
[47] | CHEN X L, CHEN B Q, MITRA N J. Unpaired point cloud completion on real scans using adversarial training[EB/OL]. [2022-04-21]. https://arxiv.org/abs/1904.00069. |
[48] | ZHANG J Z, CHEN X Y, CAI Z A, et al. Unsupervised 3D shape completion through GAN inversion[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 1768-1777. |
[49] | WEN X, HAN Z Z, CAO Y P, et al. Cycle4Completion: unpaired point cloud completion using cycle transformation with missing region coding[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 13075-13084. |
[50] | CAI Y J, LIN K Y, ZHANG C, et al. Learning a structured latent space for unsupervised point cloud completion[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 5533-5543. |
[51] | XIANG P, WEN X, LIU Y S, et al. SnowflakeNet: point cloud completion by snowflake point deconvolution with skip-transformer[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 5479-5489. |
[52] | YU L Q, LI X Z, FU C W, et al. PU-net: point cloud upsampling network[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 2790-2799. |
[53] | YU L Q, LI X Z, FU C W, et al. EC-net: an edge-aware point set consolidation network[M]//Computer Vision - ECCV 2018. Cham: Springer International Publishing, 2018: 398-414. |
[54] | QIAN Y, HOU J H, KWONG S, et al. PUGeo-net: a geometry-centric network for 3D point cloud upsampling[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 752-769. |
[55] | WANG Y F, WU S H, HUANG H, et al. Patch-based progressive 3D point set upsampling[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 5951-5960. |
[56] | QIAN G C, ABUALSHOUR A, LI G H, et al. PU-GCN: point cloud upsampling using graph convolutional networks[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 11678-11687. |
[57] | LUO L Q, TANG L L, ZHOU W Y, et al. PU-EVA: an edge-vector based approximation solution for flexible-scale point cloud upsampling[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 16188-16197. |
[58] | LI R H, LI X Z, FU C W, et al. PU-GAN: a point cloud upsampling adversarial network[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2019: 7202-7211. |
[59] | PARK J J, FLORENCE P, STRAUB J, et al. DeepSDF: learning continuous signed distance functions for shape representation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2019: 165-174. |
[60] | CHIBANE J, ALLDIECK T, PONS-MOLL G. Implicit functions in feature space for 3D shape reconstruction and completion[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 6968-6979. |
[61] | PING Y H, WEI G D, YANG L, et al. Self-attention implicit function networks for 3D dental data completion[J]. Computer Aided Geometric Design, 2021, 90: 1-12. |
[62] | YAN W, ZHANG R N, WANG J, et al. Vaccine-style-net: point cloud completion in implicit continuous function space[C]// The 28th ACM International Conference on Multimedia. New York: ACM, 2020: 2067-2075. |
[63] | YAN X G, LIN L Q, MITRA N J, et al. ShapeFormer: transformer-based shape completion via sparse representation[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 6229-6239. |
[64] | TRETSCHK E, TEWARI A, GOLYANIK V, et al. PatchNets: patch-based generalizable deep implicit 3D shape representations[C]// Computer Vision - ECCV 2020: 16th European Conference. New York: ACM, 2020: 293-309. |
[65] | CHABRA R, LENSSEN J E, ILG E, et al. Deep local shapes: learning local SDF priors for detailed 3D reconstruction[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 608-625. |