Adaptive joint compression method for deep neural networks

CLC number: TP183, TH89

Abstract:

Existing deep neural network compression methods that rely on a single, fixed compression pattern are constrained by accuracy loss and therefore struggle to compress models sufficiently. As a result, the compressed models still consume a large amount of costly and limited storage when deployed, which poses a serious challenge to their practical use on edge devices. To address this problem, this article proposes an adaptive joint compression method that simultaneously optimizes the model's connection structure and weight bit-width. Unlike existing combined compression approaches, the proposed method fully fuses sparsification and quantization in joint compression training to reduce model size comprehensively, and it adopts layer-wise adaptive sparsity ratios and weight bit-widths to alleviate the sub-optimal accuracy caused by fixed compression ratios. Experiments with VGG, ResNet, and MobileNet on the CIFAR-10 dataset show that the proposed method achieves parameter compression ratios of 143.0×, 151.6×, and 19.7× with accuracy losses of 1.3%, 2.4%, and 0.9%, respectively. Compared with 12 typical compression methods, it reduces model storage consumption by 15.3× to 148.5×. Moreover, on a self-built remote sensing image dataset, the method still achieves a maximum compression ratio of 284.2× while keeping the accuracy loss within 1.2%.
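To make the kind of pipeline described above concrete, the following is a minimal, illustrative sketch (written with PyTorch, which the paper does not necessarily use) of layer-wise magnitude pruning followed by symmetric uniform weight quantization. The function prune_and_quantize, the per-layer sparsity/bit-width dictionaries, and the fixed ratios in the toy usage are assumptions introduced here for illustration only; they are not the adaptive joint optimization procedure proposed by the authors.

    # Illustrative sketch only: per-layer magnitude pruning + uniform weight
    # quantization. The per-layer ratios are hand-set placeholders, NOT the
    # adaptive selection rule used in the paper.
    import torch
    import torch.nn as nn

    def prune_and_quantize(model, layer_sparsity, layer_bits):
        """Apply per-layer magnitude pruning and symmetric uniform quantization."""
        for name, module in model.named_modules():
            if not isinstance(module, (nn.Conv2d, nn.Linear)):
                continue
            w = module.weight.data
            sparsity = layer_sparsity.get(name, 0.5)   # fraction of weights zeroed
            bits = layer_bits.get(name, 8)             # bit-width for survivors

            # 1) Magnitude pruning: zero the smallest-|w| fraction of this layer.
            threshold = torch.quantile(w.abs().flatten(), sparsity)
            mask = (w.abs() > threshold).float()
            w = w * mask

            # 2) Symmetric uniform quantization of the surviving weights.
            qmax = 2 ** (bits - 1) - 1
            scale = w.abs().max() / qmax
            if scale > 0:
                w = torch.round(w / scale).clamp(-qmax, qmax) * scale

            module.weight.data = w * mask  # keep pruned positions exactly zero
        return model

    # Toy usage: the per-layer sparsities and bit-widths are arbitrary examples.
    net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                        nn.Linear(16 * 30 * 30, 10))
    net = prune_and_quantize(net,
                             layer_sparsity={"0": 0.7, "3": 0.9},
                             layer_bits={"0": 6, "3": 4})

In the proposed method, by contrast, the per-layer sparsity ratios and bit-widths are determined adaptively during joint compression training rather than fixed by hand as in this toy example.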

Cite this article:

姚博文, 彭喜元, 于希明, 刘连胜, 彭宇. Adaptive joint compression method for deep neural networks [J]. Chinese Journal of Scientific Instrument, 2023, 44(5): 21-32.

Online publication date: 2023-08-17