COMPUTER SCIENCE AND TECHNOLOGY

SVD-based DNN pruning and retraining |
XING Anhao, ZHANG Pengyuan, PAN Jielin, YAN Yonghong |
Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China |
Abstract Deep neural networks (DNN) have many parameters, which restricts their use in scenarios with limited computing resources or where speed is a priority. Researchers have proposed pruning DNNs with singular value decomposition (SVD), but this approach lacks adaptivity because it prunes the same number of singular values in every hidden layer. This paper presents a DNN pruning method based on a singular rate pruning factor (SRPF). The method first computes an SRPF for each hidden layer from the data and then prunes each layer with its own factor, making full use of the distribution of the singular values in each hidden layer. The method is more adaptive than pruning a fixed portion of singular values, and experiments show that a DNN pruned in this way performs better. A retraining method adapted to the pruned DNN is also given.
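
The abstract does not spell out how an SRPF is computed, so the sketch below is only an illustration of per-layer SVD pruning under an assumed criterion: each layer keeps the smallest rank whose singular values cover a chosen fraction of the squared singular-value energy. The names svd_prune_layer and energy_keep are hypothetical, not from the paper.

import numpy as np

def svd_prune_layer(W, energy_keep=0.9):
    """Factor W (m x n) into A (m x k) and B (k x n), choosing k per layer.

    energy_keep stands in for a per-layer pruning factor: k is the smallest
    rank whose singular values cover that fraction of the total squared
    singular-value energy (an assumed criterion, for illustration only).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, energy_keep)) + 1
    A = U[:, :k] * s[:k]   # m x k, singular values absorbed into the left factor
    B = Vt[:k, :]          # k x n
    return A, B, k

# Usage on a hypothetical 2048 x 2048 hidden-layer weight matrix.
W = np.random.randn(2048, 2048).astype(np.float32)
A, B, k = svd_prune_layer(W, energy_keep=0.5)
print(f"kept rank {k}: parameters {W.size} -> {A.size + B.size}")

The pruned layer then computes x @ A @ B in place of x @ W, and retraining (back-propagating through the smaller factors A and B) can recover any lost accuracy.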
|
Keywords
speech recognition
deep neural network (DNN)
singular value decomposition (SVD)
|
|
Issue Date: 15 July 2016
|
|
|