Journal of Graphics ›› 2025, Vol. 46 ›› Issue (1): 47-58. DOI: 10.11996/JG.j.2095-302X.2025010047
ZHANG Wenxiang, WANG Xiali, WANG Xinyi, YANG Zongbao
Received:
2024-07-10
Accepted:
2024-10-11
Published:
2025-02-28
Online:
2025-02-14
Contact:
WANG Xiali (1965-), associate professor, Ph.D. His main research interests cover graphic image processing and computer vision. E-mail: xlwang@chd.edu.cn
First author:
ZHANG Wenxiang (2001-), master student. His main research interests cover graphic image processing and computer vision. E-mail: 2495898570@qq.com
Supported by:
Abstract:
Deepfake face technology has advanced rapidly and has been widely exploited for malicious purposes, making the detection of manipulated facial images and videos an important research topic. Existing convolutional neural networks suffer from overfitting and poor generalization, and perform badly on unseen synthetic face data. To address this deficiency, a deepfake face detection method that enhances focus on forgery regions was proposed. First, an attention mechanism was introduced to process the feature maps used for classification; the learned attention maps highlighted the manipulated facial regions and improved the generalization ability of the model. Second, a forgery region detection module was connected after the backbone network; by detecting whether forgery traces existed in multi-scale anchor boxes, it reduced interference from global face information and further strengthened the model's focus on local forgery regions. Finally, a consistent representation learning framework was introduced; by explicitly constraining the consistency between different representations of the same input, it made the model focus more on intrinsic forgery evidence and avoid overfitting. Experiments were conducted on three datasets (FaceForensics++, Celeb-DF-v2, and DFDC), with EfficientNet-b4 and Xception as backbone networks. The results showed that the method achieved strong performance in intra-dataset evaluation and outperformed the original networks and other state-of-the-art methods in cross-dataset evaluation.
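The consistency constraint described above follows the CORE idea (NI et al., CVPR 2022): two augmented views of the same input are encoded, and a consistency term ties their representations together alongside the usual classification loss. The following is a minimal sketch, not the paper's exact formulation; the cosine-similarity form, the two-view setup, and the loss weight `lam` are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def consistency_loss(feat_a, feat_b, logits_a, logits_b, labels, lam=1.0):
    """Cross-entropy on both augmented views plus a consistency term.

    feat_a/feat_b: representations of two views of the same inputs.
    logits_a/logits_b: classifier outputs for the two views.
    """
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    # Penalize disagreement between the two representations of one input
    cons = (1.0 - F.cosine_similarity(feat_a, feat_b, dim=1)).mean()
    return ce + lam * cons
```

When the two views collapse to identical features the consistency term vanishes and only the classification loss remains, so `lam` trades off attention to intrinsic forgery evidence against fitting the training forgeries.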
CLC number:
ZHANG Wenxiang, WANG Xiali, WANG Xinyi, YANG Zongbao. A deepfake face detection method that enhances focus on forgery regions[J]. Journal of Graphics, 2025, 46(1): 47-58.
| Stage | Operation | Input resolution | Output channels | Layers |
|---|---|---|---|---|
| 1 | Conv 3×3 | 224×224 | 48 | 1 |
| 2 | MBConv1, k3×3 | 112×112 | 24 | 1 |
| 3 | MBConv1, k3×3 | 112×112 | 24 | 1 |
| 4 | MBConv6, k3×3 | 112×112 | 32 | 1 |
| 5 | MBConv6, k3×3 | 56×56 | 32 | 3 |
| 6 | MBConv6, k5×5 | 56×56 | 56 | 1 |
| 7 | MBConv6, k5×5 | 28×28 | 56 | 3 |
| 8 | MBConv6, k3×3 | 28×28 | 112 | 1 |
| 9 | MBConv6, k3×3 | 14×14 | 112 | 5 |
| 10 | MBConv6, k5×5 | 14×14 | 160 | 1 |
| 11 | MBConv6, k5×5 | 14×14 | 160 | 5 |
| 12 | MBConv6, k5×5 | 14×14 | 272 | 1 |
| 13 | MBConv6, k5×5 | 7×7 | 272 | 7 |
| 14 | MBConv6, k3×3 | 7×7 | 448 | 1 |
| 15 | MBConv6, k3×3 | 7×7 | 448 | 1 |
| 16 | Conv 1×1 & Pooling & FC | 7×7 | 1792 | 1 |

Table 1 EfficientNet-b4 network architecture
| Item | Specification |
|---|---|
| CPU | 16-core AMD EPYC 9654 |
| GPU | NVIDIA GeForce RTX 4090 |
| Memory | 60 GB |
| Operating system | Ubuntu 22.04.4 LTS |
| GPU acceleration libraries | CUDA 11.8.0, cuDNN 8.9.4 |
| Environment | Python 3.10.13, PyTorch 2.1.0 |

Table 2 Experimental environment
| Network | Data augmentation | FF++ AUC/% | CDF AUC/% | DFDC AUC/% | AVG |
|---|---|---|---|---|---|
| EfficientNet-b4 | | 98.10 | 71.77 | 53.32 | 74.40 |
| | √ | 99.73 | 84.80 | 70.32 | 84.95 |
| Xception | | 99.58 | 50.73 | 45.05 | 65.12 |
| | √ | 99.72 | 54.48 | 45.29 | 66.50 |

Table 3 The influence of data augmentation on model performance
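The AUC and EER numbers reported in the tables can be computed from binary labels and per-image forgery scores roughly as follows. This is a sketch using scikit-learn; the `labels`/`scores` arrays are hypothetical stand-ins for model outputs:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def auc_eer(labels, scores):
    """AUC and equal error rate for binary labels and fake-ness scores."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    # EER: operating point where false-positive and false-negative rates meet
    idx = np.nanargmin(np.abs(fpr - fnr))
    eer = (fpr[idx] + fnr[idx]) / 2
    return auc(fpr, tpr), eer

labels = np.array([0, 0, 0, 1, 1, 1])          # 0 = real, 1 = fake
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9])
roc_auc, eer = auc_eer(labels, scores)
```

Higher AUC and lower EER are better, which is why the tables mark the EER columns with a down arrow.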
| Backbone | AbL | FRD | CORE | FF++ AUC/% | CDF AUC/% | DFDC AUC/% | AUC AVG | FF++ EER↓/% | CDF EER↓/% | DFDC EER↓/% | EER AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| EfficientNet-b4 | | | | 99.73 | 84.80 | 70.32 | 84.95 | 0.55 | 28.08 | 40.24 | 22.97 |
| | √ | | | 99.62 | 90.34 | 73.91 | 87.96 | 1.02 | 24.98 | 34.42 | 20.14 |
| | | √ | | 99.63 | 90.95 | 70.79 | 87.12 | 0.75 | 25.91 | 37.68 | 21.45 |
| | | | √ | 99.48 | 91.16 | 73.84 | 88.16 | 1.12 | 23.00 | 35.50 | 19.87 |
| | √ | √ | | 99.62 | 91.06 | 72.88 | 87.85 | 0.97 | 23.74 | 36.03 | 20.25 |
| | √ | | √ | 99.52 | 92.10 | 75.83 | 89.15 | 1.19 | 23.34 | 34.96 | 19.83 |
| | | √ | √ | 99.58 | 92.80 | 73.73 | 88.70 | 1.27 | 23.63 | 34.85 | 19.92 |
| | √ | √ | √ | 99.59 | 93.43 | 75.74 | 89.58 | 1.00 | 21.86 | 34.95 | 19.27 |
| Xception | | | | 99.72 | 54.48 | 45.29 | 66.50 | 0.75 | 46.76 | 53.87 | 33.79 |
| | √ | | | 99.47 | 88.45 | 72.79 | 86.90 | 2.62 | 26.13 | 36.87 | 21.87 |
| | | √ | | 99.64 | 85.40 | 57.16 | 80.73 | 1.83 | 29.25 | 42.30 | 24.46 |
| | | | √ | 99.50 | 88.83 | 72.31 | 86.88 | 2.50 | 26.03 | 36.25 | 21.59 |
| | √ | √ | | 99.44 | 88.82 | 72.50 | 86.92 | 2.43 | 26.41 | 36.25 | 21.70 |
| | √ | | √ | 99.46 | 90.02 | 75.34 | 88.27 | 1.92 | 25.79 | 35.84 | 21.18 |
| | | √ | √ | 99.60 | 89.51 | 73.23 | 87.45 | 2.01 | 26.22 | 36.77 | 21.67 |
| | √ | √ | √ | 99.37 | 90.34 | 75.58 | 88.43 | 1.71 | 26.43 | 35.06 | 21.07 |

Table 4 The influence of varied module compositions on model performance
| Method | Backbone | FF++ AUC/% | CDF AUC/% | DFDC AUC/% | AUC AVG | FF++ EER↓/% | CDF EER↓/% | DFDC EER↓/% | EER AVG |
|---|---|---|---|---|---|---|---|---|---|
| Xception*[ | Xception | 99.72 | 54.48 | 45.29 | 66.50 | 0.75 | 46.76 | 53.87 | 50.32 |
| EfficientNet*[ | EfficientNet-b4 | 99.73 | 84.80 | 70.32 | 84.95 | 0.55 | 28.08 | 40.24 | 34.16 |
| ID-unware[ | EfficientNet-b4 | 99.79 | 93.88 | 73.85 | 89.17 | - | - | - | - |
| GFF[ | Xception | 98.36 | 75.31 | 71.58 | 81.75 | 3.85 | 32.48 | 34.77 | 33.63 |
| ART[ | Xception | 99.89 | 92.77 | 73.82 | 88.83 | - | - | - | - |
| SBI[ | EfficientNet-b4 | 99.64 | 93.18 | 72.42 | 88.41 | - | - | - | - |
| MAT[ | EfficientNet-b4 | 99.27 | 76.65 | 67.34 | 81.08 | 3.35 | 32.83 | 38.31 | 35.57 |
| LTW[ | ResNet-50 | 99.17 | 77.14 | 74.58 | 83.63 | 3.32 | 29.34 | 33.81 | 31.58 |
| LipForensics[ | ResNet-18 | 97.10 | 82.40 | 73.50 | 84.33 | - | - | - | - |
| FTCN[ | ResNet-50 | 99.70 | 86.90 | 74.00 | 86.87 | - | - | - | - |
| SFDG[ | EfficientNet-b4 | 99.53 | 75.83 | 73.64 | 83.00 | - | 30.30 | 33.70 | 32.00 |
| PEL[ | EfficientNet-b4 | 99.32 | 75.86 | 63.31 | 79.50 | - | 35.70 | 40.40 | 38.05 |
| Ours | Xception | 99.37 | 90.34 | 75.58 | 88.43 | 1.71 | 26.43 | 35.06 | 30.75 |
| Ours | EfficientNet-b4 | 99.59 | 93.43 | 75.74 | 89.58 | 1.00 | 21.86 | 34.95 | 28.41 |

Table 5 Comparison with state-of-the-art methods
Fig. 9 The comparison of the Grad-CAM heatmaps between the proposed method and Xception ((a) Real image; (b) Forged image; (c) Face-swapped image; (d) Forged region; (e) Xception heatmap; (f) Proposed method heatmap)
[1] | THIES J, ZOLLHÖFER M, STAMMINGER M, et al. Face2Face: real-time face capture and reenactment of RGB videos[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2016: 2387-2395. |
[2] | THIES J, ZOLLHÖFER M, NIEßNER M. Deferred neural rendering: Image synthesis using neural textures[J]. ACM Transactions on Graphics (TOG), 2019, 38(4): 66. |
[3] | YIN J, GAN C, ZHAO K, et al. A novel model for imbalanced data classification[C]// The 34th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2020: 6680-6687. |
[4] | NGUYEN H H, YAMAGISHI J, ECHIZEN I. Use of a capsule network to detect fake images and videos[EB/OL]. (2019-10-29)[2024-01-06]. https://arxiv.org/abs/1910.12467. |
[5] | MU D Q, LI T. Face anti-spoofing technology based on multi-modal fusion[J]. Journal of Graphics, 2020, 41(5): 750-756 (in Chinese). |
[6] | DAS S, SEFERBEKOV S, DATTA A, et al. Towards solving the DeepFake problem: an analysis on improving DeepFake detection using dynamic face augmentation[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 3769-3778. |
[7] | WANG C R, DENG W H. Representative forgery mining for fake face detection[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 14918-14927. |
[8] | WANG X Y, LIU H, ZHU J C, et al. Deep multimodal medical image fusion network based on high-low frequency feature decomposition[J]. Journal of Graphics, 2024, 45(1): 65-77 (in Chinese). |
[9] | LIN X, WANG Z J, MA L Z, et al. Salient object detection based on multiscale segmentation and fuzzy broad learning[J]. The Computer Journal, 2022, 65(4): 1006-1019. |
[10] | ZHAO T C, XU X, XU M Z, et al. Learning self-consistency for deepfake detection[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 15003-15013. |
[11] | WANG Z D, BAO J M, ZHOU W G, et al. AltFreezing for more general video face forgery detection[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 4129-4138. |
[12] | YAN Z Y, LUO Y H, LYU S W, et al. Transcending forgery specificity with latent space augmentation for generalizable DeepFake detection[C]// 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2024: 8984-8994. |
[13] | LE B M, WOO S S. Quality-agnostic deepfake detection with intra-model collaborative learning[C]// 2023 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2023: 22321-22332. |
[14] | CHOLLET F. Xception: deep learning with depthwise separable convolutions[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2017: 1800-1807. |
[15] | TAN M X, LE Q V. EfficientNet: Rethinking model scaling for convolutional neural networks[EB/OL]. [2024-05-09]. https://dblp.uni-trier.de/db/conf/icml/icml2019.html#TanL19. |
[16] | LI Y Z, CHANG M C, LYU S W. In Ictu oculi: Exposing AI created fake videos by detecting eye blinking[C]// 2018 IEEE International Workshop on Information Forensics and Security. New York: IEEE Press, 2018: 1-7. |
[17] | PU J M, MANGAOKAR N, WANG B L, et al. Noisescope: Detecting deepfake images in a blind setting[C]// The 36th Annual Computer Security Applications Conference. New York: ACM, 2020: 913-927. |
[18] | SHAHZAD S A, HASHMI A, KHAN S, et al. Lip sync matters: a novel multimodal forgery detector[C]// 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference. New York: IEEE Press, 2022: 1885-1892. |
[19] | VERMA V, LAMB A, BECKHAM C, et al. Manifold Mixup: better representations by interpolating hidden states[EB/OL]. [2024-05-09]. https://dblp.uni-trier.de/db/conf/icml/icml2019.html#VermaLBNMLB19. |
[20] | ZHANG Y C, JIAO R S, LIAO Q C, et al. Uncertainty-guided mutual consistency learning for semi-supervised medical image segmentation[J]. Artificial Intelligence in Medicine, 2023, 138: 102476. |
[21] | NI Y S, MENG D P, YU C Q, et al. CORE: consistent representation learning for face forgery detection[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 12-21. |
[22] | DANG H, LIU F, STEHOUWER J, et al. On the detection of digital face manipulation[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 5780-5789. |
[23] | RÖSSLER A, COZZOLINO D, VERDOLIVA L, et al. FaceForensics++: learning to detect manipulated facial images[C]// 2019 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2019: 1-11. |
[24] | LI Y Z, YANG X, SUN P, et al. Celeb-DF: a large-scale challenging dataset for DeepFake forensics[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2020: 3204-3213. |
[25] | DOLHANSKY B, BITTON J, PFLAUM B, et al. The DeepFake detection challenge (DFDC) dataset[EB/OL]. (2020-10-28) [2024-03-11]. https://arxiv.org/abs/2006.07397. |
[26] | DONG S C, WANG J, JI R H, et al. Implicit identity leakage: The stumbling block to improving deepfake detection generalization[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 3994-4004. |
[27] | LUO Y C, ZHANG Y, YAN J C, et al. Generalizing face forgery detection with high-frequency features[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 16312-16321. |
[28] | BAI W M, LIU Y F, ZHANG Z P, et al. AUNet: learning relations between action units for face forgery detection[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 24709-24719. |
[29] | SHIOHARA K, YAMASAKI T. Detecting deepfakes with self-blended images[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2022: 18699-18708. |
[30] | ZHAO H Q, WEI T Y, ZHOU W B, et al. Multi-attentional deepfake detection[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 2185-2194. |
[31] | SUN K, LIU H, YE Q X, et al. Domain general face forgery detection by learning to weight[C]// The 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 2638-2646. |
[32] | HALIASSOS A, VOUGIOUKAS K, PETRIDIS S, et al. Lips don't lie: a generalisable and robust approach to face forgery detection[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2021: 5037-5047. |
[33] | ZHENG Y L, BAO J M, CHEN D, et al. Exploring temporal coherence for more general video face forgery detection[C]// 2021 IEEE/CVF International Conference on Computer Vision. New York: IEEE Press, 2021: 15024-15034. |
[34] | WANG Y, YU K, CHEN C, et al. Dynamic graph learning with content-guided spatial-frequency relation reasoning for deepfake detection[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE Press, 2023: 7278-7287. |
[35] | GU Q Q, CHEN S, YAO T P, et al. Exploiting fine-grained face forgery clues via progressive enhancement learning[C]// The 36th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022: 735-743. |
[36] | SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization[C]// 2017 IEEE International Conference on Computer Vision. New York: IEEE Press, 2017: 618-626. |