Journal of Graphics ›› 2023, Vol. 44 ›› Issue (3): 560-569.DOI: 10.11996/JG.j.2095-302X.2023030560
• Computer Graphics and Virtual Reality •
ZHAO Yu-kun, REN Shuang, ZHANG Xin-yun
Received: 2022-10-27
Accepted: 2022-12-11
Online: 2023-06-30
Published: 2023-06-30
Contact: REN Shuang (1981-), associate professor, Ph.D. His main research interests cover machine learning, computer vision, virtual reality technology, etc. E-mail: sren@bjtu.edu.cn
About author: ZHAO Yu-kun (1999-), master's student. Her main research interests cover 3D point cloud adversarial attack and defense. E-mail: yukun0125@bjtu.edu.cn
ZHAO Yu-kun, REN Shuang, ZHANG Xin-yun. A 3D point cloud defense framework combined with adversarial examples detection and reconstruction[J]. Journal of Graphics, 2023, 44(3): 560-569.
URL: http://www.txxb.com.cn/EN/10.11996/JG.j.2095-302X.2023030560
Fig. 6 Comparison of normal and adversarial examples in the detector before and after reconstruction ((a) Normal samples; (b) Add-CD; (c) Drop-200; (d) LG-GAN)
| Attack method | PointNet↓ | PointNet++↓ | DGCNN↓ | PointConv↓ |
|---|---|---|---|---|
| No attack | 2.73 | 2.73 | 2.73 | 2.73 |
| Add-CD | 3.36 | 3.62 | 3.91 | 3.42 |
| Add-HD | 3.45 | 3.78 | 4.81 | 3.64 |
| Drop-100 | 3.28 | 3.25 | 3.26 | 3.23 |
| Drop-200 | 3.40 | 3.32 | 3.34 | 3.42 |
| LG-GAN | 6.24 | 6.22 | 7.23 | 7.40 |
Table 1 The average EMD distance of the point clouds before and after reconstruction under various attacks (×10⁻³)
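Table 1 reports reconstruction quality as the average earth mover's distance (EMD) [34] between point clouds. As an illustrative sketch only (not the paper's implementation), the exact EMD between two equal-size point sets reduces to an optimal one-to-one assignment over the pairwise Euclidean cost matrix; the function name `emd_distance` and the SciPy-based exact solver below are assumptions for illustration — real pipelines typically use a faster approximate solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def emd_distance(p, q):
    """Exact EMD between two equal-size point sets (N, 3).

    Solves the optimal bijection on the pairwise Euclidean cost matrix;
    cubic complexity, so only suitable for small clouds.
    """
    # Pairwise distances via broadcasting: shape (N, N)
    cost = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # minimum-cost matching
    return cost[rows, cols].mean()


rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))
noisy = cloud + rng.normal(scale=0.01, size=cloud.shape)

print(emd_distance(cloud, cloud))  # 0.0 for identical clouds
print(emd_distance(cloud, noisy))  # small positive value
```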
| Classification model | α=1 | α=0.1 | α=0.05 | α=0.01 |
|---|---|---|---|---|
| PointNet | 75.34 | 83.62 | 86.42 | 84.13 |
| PointNet++ | 63.12 | 67.22 | 70.24 | 65.79 |
| DGCNN | 72.95 | 78.87 | 83.44 | 81.19 |
| PointConv | 73.17 | 75.60 | 81.18 | 76.55 |
Table 2 Classification accuracy of different models under different weight coefficients α (%)
| Defense method | Clean samples↑ | Add-CD↑ | Add-HD↑ | Add Cluster↑ | Add Object↑ | Drop-100↑ | Drop-200↑ | LG-GAN↑ |
|---|---|---|---|---|---|---|---|---|
| No defense | 88.82 | 0.00 | 0.00 | 0.49 | 0.76 | 65.80 | 47.65 | 5.02 |
| DUP-Net | 85.57 | 65.18 | 63.40 | 78.13 | 72.60 | 67.26 | 50.56 | 26.04 |
| IF-Defense | 84.78 | 81.25 | 72.29 | 76.00 | 63.92 | 82.25 | 73.29 | 45.56 |
| FoldingNet | 82.80 | 75.29 | 74.53 | 66.89 | 57.07 | 74.68 | 65.52 | 39.60 |
| Reformer (Ours) | 86.42 | 81.60 | 79.37 | 77.78 | 74.04 | 82.45 | 74.30 | 50.67 |
| Detector-Reformer (Ours) | 88.78 | 81.60 | 79.37 | 77.78 | 74.56 | 83.12 | 75.02 | 50.67 |
Table 3 Classification accuracy of different defense methods on PointNet[14] (%)
| Defense method | Clean samples↑ | Add-CD↑ | Add-HD↑ | Add Cluster↑ | Add Object↑ | Drop-100↑ | Drop-200↑ |
|---|---|---|---|---|---|---|---|
| No defense | 90.19 | 0.58 | 0.53 | 2.53 | 71.68 | 65.73 | 23.62 |
| DUP-Net | 65.57 | 53.91 | 43.36 | 64.71 | 74.55 | 68.23 | 25.56 |
| IF-Defense | 86.25 | 78.26 | 73.07 | 72.29 | 73.42 | 71.25 | 42.09 |
| FoldingNet | 70.19 | 74.13 | 66.22 | 63.73 | 69.33 | 66.94 | 28.76 |
| Reformer (Ours) | 83.44 | 77.56 | 79.33 | 64.58 | 75.43 | 73.44 | 39.72 |
| Detector-Reformer (Ours) | 87.94 | 76.00 | 79.64 | 64.76 | 75.43 | 73.44 | 39.72 |
Table 4 Classification accuracy of different defense methods on DGCNN[16] (%)
| Defense method | Clean samples↑ | Add-CD↑ | Add-HD↑ | Drop-100↑ | Drop-200↑ | LG-GAN↑ |
|---|---|---|---|---|---|---|
| No defense | 89.25 | 9.26 | 6.83 | 75.62 | 68.32 | 18.73 |
| DUP-Net | 83.26 | 80.30 | 75.22 | 76.73 | 72.36 | 23.76 |
| IF-Defense | 88.37 | 79.14 | 73.69 | 78.38 | 77.39 | 25.68 |
| FoldingNet | 59.55 | 62.75 | 59.23 | 59.30 | 57.36 | 21.25 |
| Reformer (Ours) | 70.24 | 77.86 | 74.33 | 78.94 | 69.93 | 26.73 |
| Detector-Reformer (Ours) | 72.26 | 78.34 | 75.62 | 79.22 | 72.00 | 26.78 |
Table 5 Classification accuracy of different defense methods on PointNet++[15] (%)
| Defense method | Clean samples↑ | Add-CD↑ | Add-HD↑ | Drop-100↑ | Drop-200↑ | LG-GAN↑ |
|---|---|---|---|---|---|---|
| No defense | 89.02 | 0.74 | 0.82 | 76.02 | 71.73 | 10.48 |
| DUP-Net | 87.80 | 72.03 | 68.74 | 73.18 | 63.43 | 11.89 |
| IF-Defense | 85.20 | 78.45 | 76.30 | 78.11 | 73.27 | 13.44 |
| FoldingNet | 78.81 | 72.25 | 73.78 | 75.04 | 73.54 | 19.32 |
| Reformer (Ours) | 81.18 | 74.51 | 75.67 | 78.76 | 75.06 | 27.60 |
| Detector-Reformer (Ours) | 83.81 | 75.12 | 77.44 | 80.00 | 76.10 | 29.56 |
Table 6 Classification accuracy of different defense methods on PointConv[17] (%)
Fig. 8 Comparison of normal and adversarial examples in the reformer before and after reconstruction ((a) Normal samples; (b) Add-CD; (c) Add-CD (FoldingNet); (d) Add-CD (Ours); (e) Drop-200; (f) Drop-200 (FoldingNet); (g) Drop-200 (Ours); (h) LG-GAN; (i) LG-GAN (FoldingNet); (j) LG-GAN (Ours))
| Reconstruction method | Clean samples↓ | Add-CD↓ | Add-HD↓ | Add Cluster↓ | Add Object↓ | Drop-100↓ | Drop-200↓ | LG-GAN↓ |
|---|---|---|---|---|---|---|---|---|
| FoldingNet | 3.44 | 3.49 | 3.59 | 4.70 | 5.07 | 3.52 | 3.65 | 8.82 |
| Ours | 2.83 | 3.17 | 3.41 | 3.25 | 3.73 | 2.57 | 3.26 | 7.28 |
Table 7 Comparison of the error between the normal examples and the examples reconstructed by our reformer and by FoldingNet (×10⁻³)
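The Add-CD and Add-HD attacks referenced throughout constrain their perturbations by the Chamfer and Hausdorff distances, respectively. Below is a minimal sketch of one common formulation of both metrics; variants using squared distances or a different symmetrization exist, and the function names here are illustrative, not taken from the paper:

```python
import numpy as np


def chamfer_distance(p, q):
    """Symmetric Chamfer distance: mean nearest-neighbor distance, both directions."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # pairwise (|P|, |Q|)
    return d.min(axis=1).mean() + d.min(axis=0).mean()


def hausdorff_distance(p, q):
    """Symmetric Hausdorff distance: worst-case nearest-neighbor distance."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())


p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(chamfer_distance(p, q))    # 1.0
print(hausdorff_distance(p, q))  # 1.0
```

Chamfer averages the matching error and so tolerates a few outlier points, while Hausdorff penalizes the single worst point — which is why an attack budgeted under one metric can still be visible under the other.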
[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[2] CHO K, VAN MERRIENBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]// 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2014: 1724-1734.
[3] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484-489.
[4] REN K, ZHENG T H, ZHAN Q, et al. Adversarial attacks and defenses in deep learning[J]. Engineering, 2020, 6(3): 346-360.
[5] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. (2014-12-20) [2022-10-20]. https://arxiv.org/abs/1412.6572.
[6] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// 2017 IEEE Symposium on Security and Privacy. New York: IEEE Press, 2017: 39-57.
[7] XIAO C W, LI B, ZHU J Y, et al. Generating adversarial examples with adversarial networks[C]// The 27th International Joint Conference on Artificial Intelligence. New York: ACM, 2018: 3905-3911.
[8] XIE C H, WU Y X, MAATEN L V D, et al. Feature denoising for improving adversarial robustness[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 501-509.
[9] XU W L, EVANS D, QI Y J. Feature squeezing: detecting adversarial examples in deep neural networks[EB/OL]. (2017-12-05) [2022-10-20]. https://arxiv.org/abs/1704.01155.
[10] XIE C H, WANG J Y, ZHANG Z S, et al. Mitigating adversarial effects through randomization[EB/OL]. (2018-02-28) [2022-10-20]. https://arxiv.org/abs/1711.01991.
[11] ZHOU Y, TUZEL O. VoxelNet: end-to-end learning for point cloud based 3D object detection[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 4490-4499.
[12] LIU X Y, JONSCHKOWSKI R, ANGELOVA A, et al. KeyPose: multi-view 3D labeling and keypoint estimation for transparent objects[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 11599-11607.
[13] SOCCINI A M. Gaze estimation based on head movements in virtual reality applications using deep learning[C]// 2017 IEEE Virtual Reality. New York: IEEE Press, 2017: 413-414.
[14] CHARLES R Q, HAO S, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 77-85.
[15] QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]// The 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 5105-5114.
[16] WANG Y, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 146.
[17] WU W X, QI Z A, LI F X. PointConv: deep convolutional networks on 3D point clouds[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 9613-9622.
[18] XIANG C, QI C R, LI B. Generating 3D adversarial point clouds[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 9128-9136.
[19] LIU D, YU R, SU H. Extending adversarial attacks and defenses to deep 3D point cloud classifiers[C]// 2019 IEEE International Conference on Image Processing. New York: IEEE Press, 2019: 2279-2283.
[20] ZHOU H, CHEN K J, ZHANG W M, et al. DUP-net: denoiser and upsampler network for 3D adversarial point clouds defense[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2020: 1961-1970.
[21] ZHENG T H, CHEN C Y, YUAN J S, et al. PointCloud saliency maps[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2020: 1598-1606.
[22] HAMDI A, ROJAS S, THABET A, et al. AdvPC: transferable adversarial perturbations on 3D point clouds[M]// Computer Vision - ECCV 2020. Cham: Springer International Publishing, 2020: 241-257.
[23] ZHOU H, CHEN D D, LIAO J, et al. LG-GAN: label guided adversarial network for flexible targeted attack of point cloud based deep networks[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 10353-10362.
[24] HUANG Q D, DONG X Y, CHEN D D, et al. Shape-invariant 3D adversarial point clouds[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 15314-15323.
[25] TANG K K, SHI Y W, WU J P, et al. NormalAttack: curvature-aware shape deformation along normals for imperceptible point cloud attack[J]. Security and Communication Networks, 2022, 2022: 1-11.
[26] TANG J, PENG W L, TANG K K, et al. MvUPA: universal perturbation attack against 3D shape retrieval based on multi-view networks[J]. Journal of Graphics, 2022, 43(1): 93-100. (in Chinese)
[27] ZHANG Y, LIANG G B, SALEM T, et al. Defense-PointNet: protecting PointNet against adversarial attacks[C]// 2019 IEEE International Conference on Big Data. New York: IEEE Press, 2020: 5654-5660.
[28] YU L Q, LI X Z, FU C W, et al. PU-net: point cloud upsampling network[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 2790-2799.
[29] WU Z Y, DUAN Y Q, WANG H, et al. IF-defense: 3D adversarial point cloud defense via implicit function based restoration[EB/OL]. (2021-03-18) [2022-10-20]. https://arxiv.org/abs/2010.05272.
[30] LIU H B, JIA J Y, GONG N Z. PointGuard: provably robust 3D point cloud classification[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 6182-6191.
[31] YANG Y Q, FENG C, SHEN Y R, et al. FoldingNet: point cloud auto-encoder via deep grid deformation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 206-215.
[32] GROUEIX T, FISHER M, KIM V G, et al. A papier-mâché approach to learning 3D surface generation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2018: 216-224.
[33] PANG J H, LI D S, TIAN D. TearingNet: point cloud autoencoder to learn topology-friendly representations[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 7449-7458.
[34] RUBNER Y, TOMASI C, GUIBAS L J. The earth mover's distance as a metric for image retrieval[J]. International Journal of Computer Vision, 2000, 40(2): 99-121.
[35] KINGMA D P, WELLING M. Auto-encoding variational Bayes[EB/OL]. (2014-05-01) [2022-10-20]. https://arxiv.org/abs/1312.6114.
[36] YANG S S. Research on conditional generative adversarial networks model based on VAE[D]. Changchun: Jilin University, 2018. (in Chinese)
[37] ZHAI Z L, LIANG Z M, ZHOU W, et al. Research overview of variational auto-encoders models[J]. Computer Engineering and Applications, 2019, 55(3): 1-9. (in Chinese)
[38] WU Z R, SONG S R, KHOSLA A, et al. 3D ShapeNets: a deep representation for volumetric shapes[C]// 2015 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2015: 1912-1920.