Keyphrase extraction for legal questions based on a sequence to sequence model
ZENG Daojian 1,2, TONG Guowei 1,2, DAI Yuan 1,2, LI Feng 1,2, HAN Bing 3, XIE Songxian 3
1. School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China; 2. Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, Changsha University of Science and Technology, Changsha 410114, China; 3. Hunan Date-driven AI Technology Co. Ltd., Changsha 410113, China
Abstract: Traditional keyphrase extraction algorithms cannot extract keyphrases that do not appear in the text, so they perform poorly on short legal texts. This paper presents a sequence-to-sequence (seq2seq) model based on reinforcement learning to extract keyphrases from legal questions. First, the encoder compresses the semantic information of a given legal question into a dense vector; then, the decoder automatically generates the keyphrases. Since the order of the generated keyphrases does not matter in the keyphrase extraction task, reinforcement learning is used to train the model. This approach combines the decision-making strengths of reinforcement learning with the long-term memory capability of the sequence-to-sequence model. Tests on real-world datasets show that the model extracts keyphrases accurately.
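The sketch below illustrates the pipeline described in the abstract under stated assumptions; it is not the authors' released code. It assumes PyTorch, a single-layer GRU encoder that compresses the question into a dense vector, a GRU decoder that samples keyphrase tokens, and a REINFORCE-style policy-gradient update whose reward is the order-insensitive F1 between predicted and reference keyphrase sets. The vocabulary size, hidden dimensions, special tokens, and the ids_to_phrases helper are hypothetical placeholders; the paper's exact architecture, reward, and decoding details may differ.

# Minimal sketch (assumptions: PyTorch, single-layer GRUs, set-level F1 reward);
# illustrative only, not the authors' implementation.
import torch
import torch.nn as nn
from torch.distributions import Categorical

VOCAB_SIZE, EMB_DIM, HID_DIM = 5000, 128, 256  # hypothetical sizes
BOS = 1  # hypothetical start token; a full model would also stop at an EOS token

class Seq2SeqKeyphrase(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.encoder = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.decoder = nn.GRUCell(EMB_DIM, HID_DIM)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

    def forward(self, question_ids, max_len=20):
        # Encode the legal question into a dense vector (the final hidden state).
        _, h = self.encoder(self.emb(question_ids))
        h = h.squeeze(0)                                   # (batch, HID_DIM)
        token = torch.full((question_ids.size(0),), BOS, dtype=torch.long)
        tokens, log_probs = [], []
        for _ in range(max_len):
            # Decode one token at a time, sampling so that REINFORCE can be applied.
            h = self.decoder(self.emb(token), h)
            dist = Categorical(logits=self.out(h))
            token = dist.sample()
            tokens.append(token)
            log_probs.append(dist.log_prob(token))
        return torch.stack(tokens, dim=1), torch.stack(log_probs, dim=1)

def f1_reward(pred_phrases, gold_phrases):
    # Order-insensitive reward: F1 between predicted and reference keyphrase sets.
    pred, gold = set(pred_phrases), set(gold_phrases)
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r)

def reinforce_step(model, optimizer, question_ids, gold_phrases, ids_to_phrases):
    # One policy-gradient update: weight the sequence log-likelihood by its reward.
    # ids_to_phrases is a hypothetical helper that splits a sampled token sequence
    # into keyphrase strings (e.g. on a separator token).
    tokens, log_probs = model(question_ids)
    rewards = torch.tensor([f1_reward(ids_to_phrases(seq), gold)
                            for seq, gold in zip(tokens, gold_phrases)])
    loss = -(log_probs.sum(dim=1) * rewards).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the reward compares unordered sets of keyphrases, this kind of update does not penalize the decoder for emitting correct keyphrases in a different order than the reference, which is the motivation the abstract gives for training with reinforcement learning rather than plain maximum likelihood.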
ZENG Daojian, TONG Guowei, DAI Yuan, LI Feng, HAN Bing, XIE Songxian. Keyphrase extraction for legal questions based on a sequence to sequence model. Journal of Tsinghua University (Science and Technology), 2019, 59(4): 256-261.