Journal of Graphics ›› 2025, Vol. 46 ›› Issue (4): 826-836. DOI: 10.11996/JG.j.2095-302X.2025040826

• Computer Graphics and Virtual Reality •

Multi-dimensional implicit neural compression of time-varying volume data based on damping activation functions

GUO Rong, JIAO Chenyue, GAO Xin, DENG Jiakang, TIAN Xiaoxian, BI Chongke

  1. College of Intelligence and Computing, Tianjin University, Tianjin 300354, China
  • Received: 2024-10-22; Revised: 2024-12-27; Published: 2025-08-30; Online: 2025-08-11
  • Corresponding author: BI Chongke (1982-), male, professor, Ph.D. His main research interests cover high-performance computing and scientific visualization. E-mail: bichongke@tju.edu.cn
  • First author: GUO Rong (2001-), female, master's student. Her main research interest is scientific visualization. E-mail: guorong_sx_sd@hotmail.com
  • Supported by:
    National Natural Science Foundation of China (62172294)

Abstract:

Time-varying volume data from large-scale numerical simulations holds significant value in scientific research, but its vast size poses severe challenges to I/O bandwidth and disk storage, hindering the efficiency of data visualization and analysis. Traditional lossy compression techniques tend to lose important features at high compression ratios. While deep learning models have shown good performance in compressing volumetric data, they typically require access to the entire dataset before compression, which presents certain limitations. Implicit neural representations (INRs) have emerged as a powerful tool for compressing large-scale volumetric data due to their versatility. However, existing methods primarily rely on a single large multilayer perceptron (MLP) to encode the global data, resulting in slow training and inference and difficulty in accurately fitting high-frequency information. To address these issues, a uniform-partitioning implicit neural representation method based on a damping function was proposed for the efficient compression of large-scale time-varying volume data. First, a uniform partitioning strategy was applied, in which multiple MLPs fit local data separately. By equalizing partition sizes, balanced parallelization among the MLPs was achieved, significantly improving the efficiency of both training and inference. Second, a damping function was introduced as the activation function to overcome spectral bias, enabling the capture of high-frequency information without extra positional encoding. Finally, boundary artifacts introduced by partitioning were eliminated by replicating information from adjacent cells. Experimental results showed that this method achieved higher compression efficiency and finer detail preservation across several time-varying volume datasets, demonstrating superior compression performance.
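To make the approach concrete, the sketch below shows a coordinate MLP of the kind an INR compressor would fit to volume data: it maps normalized (x, y, z, t) coordinates to scalar field values through layers with a damped-sinusoid activation. The activation form sin(omega*x)*exp(-alpha*|x|), the names DampedSine and CoordinateMLP, and all hyperparameters are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical sketch: an implicit neural representation with an assumed
# damped-sine activation. The paper's exact damping function may differ.
import torch
import torch.nn as nn

class DampedSine(nn.Module):
    """Assumed damping activation: sin(omega*x) * exp(-alpha*|x|).
    The oscillation lets the network fit high frequencies (as a sine
    activation does); the exponential envelope damps large pre-activations."""
    def __init__(self, omega: float = 30.0, alpha: float = 0.1):
        super().__init__()
        self.omega, self.alpha = omega, alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.omega * x) * torch.exp(-self.alpha * x.abs())

class CoordinateMLP(nn.Module):
    """Maps normalized (x, y, z, t) coordinates to one scalar field value,
    with no positional encoding in front of the network."""
    def __init__(self, in_dim: int = 4, hidden: int = 64, depth: int = 4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), DampedSine()]
            d = hidden
        layers.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

# Fit the network to (coordinate, value) samples from one partition;
# the trained weights then serve as the compressed representation.
model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(4096, 4) * 2 - 1   # dummy coordinates in [-1, 1]
values = torch.rand(4096, 1)           # dummy voxel values
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((model(coords) - values) ** 2)
    loss.backward()
    opt.step()
```

Storing only the weights of such a small network in place of the raw voxels is what yields the compression; decompression is a forward pass over the full coordinate grid.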

Key words: implicit neural representation, volumetric data compression, damping function, scientific visualization, spectral bias
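As a rough illustration of the uniform partitioning and boundary handling described in the abstract, the hypothetical helper below splits a volume into equal-size blocks and pads each block with a copy of its neighbors' cells (edge-replicated at the outer border), so the per-block networks see consistent values across partition boundaries. The name partition_volume and the block=32 / ghost=1 settings are assumptions for illustration.

```python
# Hypothetical sketch: uniform partitioning with one layer of replicated
# neighbor cells (ghost cells) around each block.
import numpy as np

def partition_volume(vol: np.ndarray, block: int = 32, ghost: int = 1) -> dict:
    """Split a 3D volume into equal-size blocks; each block carries `ghost`
    extra voxels copied from adjacent cells (replicated at the volume edge)."""
    padded = np.pad(vol, ghost, mode="edge")  # replicate values at the border
    blocks = {}
    for ix in range(0, vol.shape[0], block):
        for iy in range(0, vol.shape[1], block):
            for iz in range(0, vol.shape[2], block):
                blocks[(ix, iy, iz)] = padded[
                    ix : ix + block + 2 * ghost,
                    iy : iy + block + 2 * ghost,
                    iz : iz + block + 2 * ghost,
                ]
    return blocks

vol = np.random.rand(64, 64, 64).astype(np.float32)
blocks = partition_volume(vol)
print(len(blocks), blocks[(0, 0, 0)].shape)   # 8 blocks of shape (34, 34, 34)
```

Because every block has the same shape, each per-block MLP processes the same number of samples per step, which is what lets the training and inference workloads parallelize evenly.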

CLC Number: