Journal of Tsinghua University(Science and Technology)    2016, Vol. 56 Issue (7) : 772-776     DOI: 10.16511/j.cnki.qhdxxb.2016.21.043
COMPUTER SCIENCE AND TECHNOLOGY
SVD-based DNN pruning and retraining
XING Anhao, ZHANG Pengyuan, PAN Jielin, YAN Yonghong
Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
Abstract  Deep neural networks (DNNs) contain a large number of parameters, which limits their use in scenarios with constrained computing resources or strict speed requirements. Prior work has pruned DNNs using singular value decomposition (SVD), but that approach lacks adaptivity because it discards the same number of singular values in every hidden layer. This paper presents a DNN pruning method based on a singular rate pruning factor (SRPF): the SRPF of each hidden layer is first computed separately from the data, and each layer is then pruned with its own factor. The method exploits the distribution of singular values within each hidden layer and is therefore more adaptive than pruning a fixed portion of singular values; experiments show that a DNN pruned this way performs better. A retraining method adapted to the pruned DNN is also given.
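The layer-adaptive SVD pruning idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: a per-layer singular-value energy threshold stands in for the singular rate pruning factor, and the function and variable names (`svd_prune`, `energy`) are hypothetical.

```python
import numpy as np

def svd_prune(W, energy=0.9):
    """Approximate a weight matrix W with a low-rank factorization.

    The rank k is chosen per matrix so that the kept singular values
    retain at least `energy` of the total singular-value mass, so each
    layer keeps a different number of singular values depending on how
    its singular values are distributed.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s) / np.sum(s)
    k = int(np.searchsorted(cum, energy) + 1)
    # Replace one m x n layer with two thinner layers: m x k and k x n.
    A = U[:, :k] * s[:k]   # m x k, singular values folded into the left factor
    B = Vt[:k, :]          # k x n
    return A, B, k

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B, k = svd_prune(W, energy=0.5)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

In a DNN this factorization replaces the original affine layer with two smaller layers (a k-dimensional linear bottleneck), after which the network is retrained to recover accuracy.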
Keywords: speech recognition; deep neural network (DNN); singular value decomposition (SVD)
CLC number: TN912.34
Issue Date: 22 July 2016
Cite this article:   
XING Anhao,ZHANG Pengyuan,PAN Jielin, et al. SVD-based DNN pruning and retraining[J]. Journal of Tsinghua University(Science and Technology), 2016, 56(7): 772-776.
URL:  
http://jst.tsinghuajournals.com/EN/10.16511/j.cnki.qhdxxb.2016.21.043     OR     http://jst.tsinghuajournals.com/EN/Y2016/V56/I7/772
[1] Hinton G, Deng L, Yu D, et al. Deep neural networks for acoustic modeling in speech recognition[J]. IEEE Signal Processing Magazine, 2012, 29(6):82-97.
[2] Mohamed A, Dahl G, Hinton G. Acoustic modeling using deep belief networks[J]. IEEE Transactions on Audio, Speech, and Language Processing, 2012, 20(1):14-22.
[3] Deng L, Yu D, Platt J. Scalable stacking and learning for building deep architectures[C]//ICASSP. Kyoto, Japan:IEEE Press, 2012:2133-2136.
[4] ZHANG Yu, JI Zhe, WAN Xin, et al. Adaptation of deep neural network for large vocabulary continuous speech recognition[J]. Journal of Tianjin University (Science and Technology), 2015, 48(9):765-769. (in Chinese)
[5] Liu C, Zhang Z, Wang D. Pruning deep neural networks by optimal brain damage[C]//Proc Interspeech. Singapore, 2014.
[6] LeCun Y, Denker J, Solla S, et al. Optimal brain damage[J]. Advances in Neural Information Processing Systems (NIPS), 1989, 2:598-605.
[7] Li J, Zhao R, Huang J, et al. Learning small-size DNN with output-distribution-based criteria[C]//Proc Interspeech. Singapore, 2014.
[8] Xue J, Li J, Gong Y. Restructuring of deep neural network acoustic models with singular value decomposition[C]//Proc Interspeech. Lyon, France, 2013.
[9] Shlens J. A Tutorial on Principal Component Analysis[J]. Eprint Arxiv, 2014, 58(3):219-226.
[10] Hecht-Nielsen R. Theory of the backpropagation neural network[J]. Neural Networks, 1988, 1(1):65-93.