Journal of Graphics ›› 2025, Vol. 46 ›› Issue (3): 602-613. DOI: 10.11996/JG.j.2095-302X.2025030602
• Computer Graphics and Virtual Reality •
LIU Hongshuo1, BAI Jing1,2,3, YAN Hao1, LIN Gan1
Received: 2024-08-22
Accepted: 2025-01-10
Online: 2025-06-30
Published: 2025-06-13
Contact: BAI Jing
About author: LIU Hongshuo (2001-), master student. His main research interest covers fine-grained classification of 3D point clouds. E-mail: 1709671541@qq.com
LIU Hongshuo, BAI Jing, YAN Hao, LIN Gan. BGS-Net: fine-grained classification networks with balanced generalization and specialization for 3D point clouds[J]. Journal of Graphics, 2025, 46(3): 602-613.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2025030602
Fig. 1 BGS-Net Network Framework ((a) Upstream self-supervised network; (b) Downstream classification network; (c) Input preprocessing and initial feature extraction; (d) Mini-PointNet)
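Fig. 1(c)-(d) describe the input preprocessing and a Mini-PointNet module that embeds each local patch of points into a token before the transformer encoders. The snippet below is a minimal sketch of such a Mini-PointNet-style patch encoder in PyTorch, assuming the usual design (a shared point-wise MLP followed by max pooling over each patch); the module name, channel sizes, and two-stage pooling are illustrative assumptions, not the exact BGS-Net implementation.

```python
import torch
import torch.nn as nn

class MiniPointNet(nn.Module):
    """Embed each local patch of points into a single feature vector."""
    def __init__(self, out_dim: int = 384):
        super().__init__()
        # Shared MLP applied to every point in every patch (input: xyz coordinates).
        self.first = nn.Sequential(
            nn.Conv1d(3, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 1),
        )
        # Second stage consumes per-point features concatenated with the patch-wise max.
        self.second = nn.Sequential(
            nn.Conv1d(512, 512, 1), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Conv1d(512, out_dim, 1),
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (B, G, K, 3) -- B clouds, G patches, K points per patch.
        B, G, K, _ = patches.shape
        x = patches.reshape(B * G, K, 3).transpose(1, 2)            # (B*G, 3, K)
        feat = self.first(x)                                         # (B*G, 256, K)
        glob = feat.max(dim=2, keepdim=True).values.expand(-1, -1, K)
        feat = self.second(torch.cat([glob, feat], dim=1))           # (B*G, out_dim, K)
        return feat.max(dim=2).values.reshape(B, G, -1)              # (B, G, out_dim)
```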
| Method category | Algorithm | Year | ModelNet40/% | FG3D Airplane/% | FG3D Car/% | FG3D Chair/% |
|---|---|---|---|---|---|---|
| Self-supervised | Point-BERT | 2020 | 92.70 | - | - | - |
| | PointGLR | 2021 | 93.00 | - | - | - |
| | OcCo | 2020 | 93.00+voting | - | - | - |
| | MaskPoint | 2020 | 93.80+voting | - | - | - |
| | Point-MAE | 2021 | 93.20 | - | - | - |
| | Point2Vec | 2023 | 94.00 | - | - | - |
| Meta-category | SO-Net | 2018 | 90.90 | 82.92 | 59.32 | 70.05 |
| | Point2Sequence | 2018 | 92.60 | 92.76 | 73.54 | 79.12 |
| | PointCNN | 2018 | 91.70 | 90.30 | 68.37 | 74.87 |
| | PointNet | 2017 | 89.20 | 89.34 | 73.00 | 75.44 |
| | DGCNN | 2019 | 92.20 | 93.60 | 72.10 | 79.53 |
| | MSP-Net | 2019 | 91.73 | 93.03 | 74.25 | 68.69 |
| | PointAtrousGraph | 2020 | 93.10 | 95.22 | 74.77 | 79.20 |
| | Point2SpatialCapsule | 2020 | 93.40 | 95.19 | 75.92 | 79.53 |
| | PointNet++(MSG) | 2018 | 90.70 | 95.96 | 77.87 | 81.23 |
| | PointTransformer | 2020 | 93.70 | 91.53 | 67.88 | 71.73 |
| | PCT | 2020 | 93.20 | 95.16 | 78.89 | 81.37 |
| | PointMLP | 2022 | 94.50 | 95.76 | 76.35 | 81.81 |
| Fine-grained | FGP-Net | 2023 | - | 95.77 | 77.94 | 80.88 |
| | FGPNet | 2023 | 91.18 | 96.07 | 79.46 | 82.49 |
| | DC-Net | 2023 | 92.41 | 97.31 | 79.15 | 83.67 |
| | Ours | 2024 | 94.03 | 97.68 | 79.62 | 84.04 |
Table 1 Comparison of classification accuracy on the ModelNet40 and FG3D datasets
| Algorithm | 5-way 10-shot | 5-way 20-shot | 10-way 10-shot | 10-way 20-shot |
|---|---|---|---|---|
| OcCo | 91.9±3.6 | 93.9±3.1 | 86.4±5.4 | 91.3±4.6 |
| Transf.-OcCo | 94.0±3.6 | 95.9±2.3 | 89.4±5.1 | 92.4±4.6 |
| Point-BERT | 94.6±3.1 | 96.3±2.7 | 91.0±5.4 | 92.7±5.1 |
| MaskPoint | 95.0±3.7 | 97.2±1.7 | 91.4±4.0 | 93.4±3.5 |
| Point-MAE | 96.3±2.5 | 97.8±1.8 | 92.6±4.1 | 95.0±3.0 |
| Point-M2AE | 96.8±1.8 | 98.3±1.4 | 92.3±4.5 | 95.0±3.0 |
| Point2Vec | 97.0±2.8 | 98.7±1.2 | 93.9±4.1 | 95.8±3.1 |
| Ours | 97.2±2.8 | 98.9±1.1 | 95.05±4.95 | 96.7±2.3 |
Table 2 Comparison of classification accuracy on the ModelNet40 few-shot classification dataset/%
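Table 2 follows the standard N-way K-shot protocol (e.g. 5-way 10-shot with a fixed number of query shapes per class). The sketch below illustrates how one such episode is typically sampled; `shapes_by_class` is a hypothetical mapping from class name to a list of point clouds, and the exact split files used in the paper are not reproduced here.

```python
import random

def sample_episode(shapes_by_class, n_way=5, k_shot=10, n_query=20, seed=None):
    """Sample one N-way K-shot episode: a labelled support set and a query set."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(shapes_by_class), n_way)
    support, query = [], []
    for label, name in enumerate(classes):
        picked = rng.sample(shapes_by_class[name], k_shot + n_query)
        support += [(pc, label) for pc in picked[:k_shot]]   # K shots per class for adaptation
        query += [(pc, label) for pc in picked[k_shot:]]     # held-out shapes for evaluation
    return support, query
```

Accuracy such as the values in Table 2 is then the mean and standard deviation over many independently sampled episodes.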
| Algorithm | OBJ-BG | OBJ-ONLY | PB-T50-RS |
|---|---|---|---|
| Transf.-OcCo | 84.90 | 85.50 | 78.80 |
| Point-BERT | 87.40 | 88.10 | 83.10 |
| MaskPoint | 89.30 | 89.70 | 84.60 |
| Point-MAE | 90.00 | 88.30 | 85.20 |
| Point2Vec | 91.20 | 90.40 | 87.50 |
| Ours | 91.02 | 90.33 | 85.98 |
Table 3 Comparison of classification accuracy in real-scene scenarios on ScanObjectNN/%
Fig. 3 Comparison of logit values of different methods on the FG3D sub-datasets (blue class names indicate the correct category for each model, red indicates an incorrect category; visualizations of some 3D point cloud models and class names from the Airplane, Car, and Chair datasets are shown at the bottom and on the right, respectively)
| Experiment | Upstream student 1 | Upstream student 2 | Downstream student 1 | Downstream student 2 | FG3D Airplane/% | FG3D Car/% | FG3D Chair/% | ModelNet40/% |
|---|---|---|---|---|---|---|---|---|
| ① | Random masking | - | Unfrozen | - | 97.27 | 78.48 | 82.90 | 94.00 |
| ② | Random masking | Random masking | Unfrozen | - | 97.32 | 79.01 | 83.09 | 93.66 |
| ③ | Random masking | Random masking | Frozen | Unfrozen | 97.40 | 79.40 | 83.73 | 93.52 |
| ④ | Random masking | Center transformation | Frozen | Unfrozen | 97.27 | 79.43 | 83.78 | 93.23 |
| ⑤ | Random masking | Masking of points around centers | Frozen | Unfrozen | 96.99 | 79.01 | 82.75 | 93.92 |
| ⑥ | Coupled masking | Coupled masking | Frozen | Unfrozen | 97.68 | 79.62 | 84.04 | 94.03 |
Table 4 Ablation of the overall network structure and upstream masking methods
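Rows ③-⑥ of Table 4 transfer both pretrained encoders downstream, keeping the first (downstream student 1) frozen while fine-tuning the second together with the classification head. The sketch below illustrates this frozen/unfrozen split in PyTorch; module names and head dimensions are placeholders, not the exact BGS-Net classes.

```python
import torch.nn as nn

def build_downstream(encoder1: nn.Module, encoder2: nn.Module,
                     feat_dim: int, num_classes: int):
    """Freeze encoder1, keep encoder2 and a new classification head trainable."""
    # Frozen branch ("downstream student 1", Table 4 rows 3-6): no gradient updates.
    for p in encoder1.parameters():
        p.requires_grad = False
    encoder1.eval()

    head = nn.Sequential(
        nn.Linear(feat_dim, 256), nn.BatchNorm1d(256), nn.ReLU(),
        nn.Dropout(0.5), nn.Linear(256, num_classes),
    )
    # Only encoder2 and the head are passed to the optimizer.
    trainable = list(encoder2.parameters()) + list(head.parameters())
    return head, trainable
```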
| Experiment | Mask ratio/% | Mask overlap/% | Shared weights | FG3D Airplane/% | FG3D Car/% | FG3D Chair/% | ModelNet40/% |
|---|---|---|---|---|---|---|---|
| ① | 65 | 15 | × | 96.72 | 79.40 | 83.32 | 93.52 |
| ② | 50 | 0 | × | 95.65 | 78.71 | 79.59 | 91.82 |
| ③ | 50 | 0 | √ | 97.68 | 79.62 | 84.04 | 94.03 |
Table 5 Ablation study of weight sharing in the upstream task
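The best setting in Table 5 (row ③) masks 50% of the patches in each branch with 0% overlap, i.e. the two masks are disjoint complements of each other. A minimal sketch of generating such a coupled mask pair over patch indices is shown below; the function name and boolean-mask convention (True = masked) are illustrative assumptions.

```python
import torch

def coupled_masks(num_patches: int, generator=None):
    """Split one random permutation of patch indices into two disjoint masks."""
    perm = torch.randperm(num_patches, generator=generator)
    half = num_patches // 2
    mask_a = torch.zeros(num_patches, dtype=torch.bool)
    mask_a[perm[:half]] = True   # patches hidden from branch A (50% ratio)
    mask_b = ~mask_a             # branch B hides exactly the remaining patches (0% overlap)
    return mask_a, mask_b
```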
| Experiment | Task-agnostic encoder mask ratio | Task-specialized encoder mask ratio | FG3D Airplane | FG3D Car | FG3D Chair | ModelNet40 |
|---|---|---|---|---|---|---|
| ① | - | - | 97.33 | 78.94 | 83.31 | 93.65 |
| ② | 65 | 65 | 97.35 | 78.99 | 83.68 | 93.72 |
| ④ | Random masking | - | 97.45 | 79.22 | 83.45 | 93.63 |
| ⑤ | - | Random masking | 97.68 | 79.62 | 84.04 | 94.03 |
Table 6 Ablation of downstream masking methods/%
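Table 6 varies which downstream encoder receives random masking of its input patch tokens. The sketch below shows one plausible way to drop a random fraction of the tokens before the task-specialized encoder while the task-agnostic branch sees the full token set; the ratio, shapes, and function name are illustrative assumptions.

```python
import torch

def random_token_mask(tokens: torch.Tensor, ratio: float = 0.5) -> torch.Tensor:
    """Keep a random (1 - ratio) subset of patch tokens per point cloud."""
    # tokens: (B, G, C) patch embeddings.
    B, G, _ = tokens.shape
    keep = max(1, int(G * (1.0 - ratio)))
    # Random permutation of patch positions per sample, keep the first `keep`.
    idx = torch.rand(B, G, device=tokens.device).argsort(dim=1)[:, :keep]   # (B, keep)
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
```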
| Experiment | Loss function | FG3D Airplane | FG3D Car | FG3D Chair | ModelNet40 |
|---|---|---|---|---|---|
| ① | Null | 97.31 | 78.87 | 83.47 | 92.98 |
| ② | L1 | 97.27 | 79.42 | 83.77 | 93.75 |
| ③ | L2 | 97.13 | 78.56 | 83.16 | 93.84 |
| ④ | SL1 | 97.68 | 79.62 | 84.04 | 94.03 |
Table 7 Task-specialized encoder random masking (RM) loss ablation study/%
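Table 7 compares no auxiliary loss (Null), L1, L2, and Smooth L1 (SL1) as the random-masking loss, with SL1 performing best. For reference, the standard element-wise Smooth L1 definition assumed here (with threshold $\beta$, commonly $\beta = 1$ unless the paper states otherwise) is:

```latex
\mathrm{SL1}(x, y) =
\begin{cases}
\dfrac{(x - y)^{2}}{2\beta}, & |x - y| < \beta \\[4pt]
|x - y| - \dfrac{\beta}{2}, & \text{otherwise}
\end{cases}
```

Compared with L2 it is less sensitive to outlier residuals, and compared with L1 it remains smooth near zero, which is consistent with its stronger results in rows ②-④.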
[1] BAI J, SHAO H H, JI H, et al. An end-to-end fine-grained classification network for 3D point clouds[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(1): 128-134 (in Chinese).
[2] WU R S, BAI J, LI W J, et al. DCNet: exploring fine-grained vision classification for 3D point clouds[J]. The Visual Computer, 2024, 40(2): 781-797.
[3] SHAO H H, BAI J, WU R S, et al. FGPNet: a weakly supervised fine-grained 3D point clouds classification network[J]. Pattern Recognition, 2023, 139: 109509.
[4] LI Y, LIU Z, CHANG X J, et al. Diversity-boosted generalization-specialization balancing for zero-shot learning[J]. IEEE Transactions on Multimedia, 2023, 25: 8372-8382.
[5] JIANG S W, XU T F, GUO J, et al. Tree-CNN: from generalization to specialization[J]. EURASIP Journal on Wireless Communications and Networking, 2018, 2018: 216.
[6] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[EB/OL]. [2024-06-22]. https://dblp.uni-trier.de/db/conf/icml/icml2017.html#FinnAL17.
[7] WANG H C, LIU Q, YUE X Y, et al. Unsupervised point cloud pre-training via occlusion completion[C]// The IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 9762-9772.
[8] YU X M, TANG L L, RAO Y M, et al. Point-BERT: pre-training 3D point cloud transformers with masked point modeling[C]// The IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 19291-19300.
[9] PANG Y T, WANG W X, TAY F E H, et al. Masked autoencoders for point cloud self-supervised learning[C]// The 17th European Conference on Computer Vision. Cham: Springer, 2022: 604-621.
[10] ZHANG Y B, LIN J H, HE C H, et al. Masked surfel prediction for self-supervised point cloud learning[EB/OL]. [2024-06-22]. https://arxiv.org/abs/2207.03111.
[11] ZEID K A, SCHULT J, HERMANS A, et al. Point2Vec for self-supervised representation learning on point clouds[C]// The 45th DAGM German Conference on Pattern Recognition. Cham: Springer, 2024: 131-146.
[12] XIE Q X, LUONG M T, HOVY E, et al. Self-training with noisy student improves ImageNet classification[C]// The IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 10684-10695.
[13] QI C R, SU H, MO K, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// The IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 652-660.
[14] QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 5105-5114.
[15] ZHAO H S, JIANG L, JIA J Y, et al. Point transformer[C]// The IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 16239-16248.
[16] GUO M H, CAI J X, LIU Z N, et al. PCT: point cloud transformer[J]. Computational Visual Media, 2021, 7(2): 187-199.
[17] MA X, QIN C, YOU H X, et al. Rethinking network design and local geometry in point cloud: a simple residual MLP framework[EB/OL]. [2024-06-22]. https://dblp.uni-trier.de/db/conf/iclr/iclr2022.html#MaQYR022.
[18] FU J L, ZHENG H L, MEI T. Look closer to see better: recurrent attention convolutional neural network for fine-grained image recognition[C]// The IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 4476-4484.
[19] ZHENG H L, FU J L, MEI T, et al. Learning multi-attention convolutional neural network for fine-grained image recognition[C]// The IEEE International Conference on Computer Vision. New York: IEEE Press, 2017: 5219-5227.
[20] LI M, LEI L, SUN H, et al. Fine-grained visual classification via multilayer bilinear pooling with object localization[J]. The Visual Computer, 2022, 38(3): 811-820.
[21] ZHUANG P Q, WANG Y L, QIAO Y. Learning attentive pairwise interaction for fine-grained classification[C]// The AAAI Conference on Artificial Intelligence. Palo Alto: AAAI, 2020: 13130-13137.
[22] CHANG D L, DING Y F, XIE J Y, et al. The devil is in the channels: mutual-channel loss for fine-grained image classification[J]. IEEE Transactions on Image Processing, 2020, 29: 4683-4695.
[23] XIE S N, GU J T, GUO D M, et al. PointContrast: unsupervised pre-training for 3D point cloud understanding[C]// The 16th European Conference on Computer Vision. Cham: Springer, 2020: 574-591.
[24] ZHANG Z W, GIRDHAR R, JOULIN A, et al. Self-supervised pretraining of 3D features on any point-cloud[C]// The IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 10232-10243.
[25] HUANG S Y, XIE Y C, ZHU S C, et al. Spatio-temporal self-supervised representation learning for 3D point clouds[C]// The IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 6515-6525.
[26] MERSCH B, CHEN X, BEHLEY J, et al. Self-supervised point cloud prediction using 3D spatio-temporal convolutional networks[EB/OL]. [2024-06-22]. https://dblp.uni-trier.de/db/conf/corl/corl2021.html#MerschCBS21.
[27] ZHANG R R, GUO Z Y, ZHANG W, et al. PointCLIP: point cloud understanding by CLIP[C]// The IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 8542-8552.
[28] MITTAL H, OKORN B, HELD D. Just go with the flow: self-supervised scene flow estimation[C]// The IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 11174-11182.
[29] LI R B, ZHANG C, LIN G S, et al. RigidFlow: self-supervised scene flow learning on point clouds by local rigidity prior[C]// The IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 16938-16947.
[30] VASWANI A. Attention is all you need[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.
[31] HE K M, FAN H Q, WU Y X, et al. Momentum contrast for unsupervised visual representation learning[C]// The IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 9726-9735.
[32] CHANG A X, FUNKHOUSER T, GUIBAS L, et al. ShapeNet: an information-rich 3D model repository[EB/OL]. [2024-06-22]. https://arxiv.org/abs/1512.03012.
[33] LIU X H, HAN Z Z, LIU Y S, et al. Fine-grained 3D shape classification with hierarchical part-view attention[J]. IEEE Transactions on Image Processing, 2021, 30: 1744-1758.
[34] SUN J C, ZHANG Q Z, KAILKHURA B, et al. ModelNet40-C: a robustness benchmark for 3D point cloud recognition under corruption[EB/OL]. [2024-06-22]. https://iclr.cc/virtual/2022/8408.
[35] UY M A, PHAM Q H, HUA B S, et al. Revisiting point cloud classification: a new benchmark dataset and classification model on real-world data[C]// The IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2019: 1588-1597.
[36] SHARMA C, KAUL M. Self-supervised few-shot learning on point clouds[C]// The 34th International Conference on Neural Information Processing Systems. New York: ACM, 2020: 605.
[37] LOSHCHILOV I, HUTTER F. Decoupled weight decay regularization[EB/OL]. [2024-06-22]. https://dblp.uni-trier.de/db/conf/iclr/iclr2019.html#LoshchilovH19.
[38] LOSHCHILOV I, HUTTER F. SGDR: stochastic gradient descent with warm restarts[EB/OL]. [2024-06-22]. https://dblp.uni-trier.de/db/conf/iclr/iclr2017.html#LoshchilovH17.
[39] JIANG J, XIONG C Z. Data augmentation with multi-model ensemble for fine-grained category classification[J]. Journal of Graphics, 2018, 39(2): 244-250 (in Chinese).
[40] CHANG D L, YIN J H, XIE J Y, et al. Attention-guided Dropout for image classification[J]. Journal of Graphics, 2021, 42(1): 32-36 (in Chinese).
[41] HAN Y Z, YIN M X, MA W Z, et al. DGOA: point cloud upsampling based on dynamic graph and offset attention[J]. Journal of Graphics, 2024, 45(1): 219-229 (in Chinese).
[42] LI H L, LIN B, ZHANG C, et al. Mask-point: automatic 3D surface defects detection network for fiber-reinforced resin matrix composites[J]. Polymers, 2022, 14(16): 3390.
[43] LI J X, CHEN B M, LEE G H. SO-Net: self-organizing network for point cloud analysis[C]// The IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 9397-9406.
[44] LIU X H, HAN Z Z, LIU Y S, et al. Point2Sequence: learning the shape representation of 3D point clouds with an attention-based sequence to sequence network[C]// The 33rd AAAI Conference on Artificial Intelligence. Palo Alto: AAAI, 2019: 8778-8785.
[45] LI Y Y, BU R, SUN M C, et al. PointCNN: convolution on X-transformed points[C]// The 32nd International Conference on Neural Information Processing Systems. New York: ACM, 2018: 828-838.
[46] WANG Y, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 146.
[47] BAI J, XU H J. MSP-Net: multi-scale point cloud classification network[J]. Journal of Computer-Aided Design & Computer Graphics, 2019, 31(11): 1917-1924 (in Chinese).
[48] PAN L, CHEW C M, LEE G H. PointAtrousGraph: deep hierarchical encoder-decoder with point atrous convolution for unorganized 3D points[C]// 2020 IEEE International Conference on Robotics and Automation. New York: IEEE Press, 2020: 1113-1120.
[49] WEN X, HAN Z Z, LIU X H, et al. Point2SpatialCapsule: aggregating features and spatial relationships of local regions on point clouds using spatial-aware capsules[J]. IEEE Transactions on Image Processing, 2020, 29: 8855-8869.
[50] ZHANG R R, GUO Z Y, GAO P Y, et al. Point-M2AE: multi-scale masked autoencoders for hierarchical point cloud pre-training[C]// The 36th International Conference on Neural Information Processing Systems. New York: ACM, 2022: 1962.