Journal of Tsinghua University (Science and Technology), 2023, Vol. 63, Issue 9: 1309-1316    DOI: 10.16511/j.cnki.qhdxxb.2023.21.010
Two-stage open information extraction method for the defence technology field
HU Minghao, WANG Fang, XU Xiantao, LUO Wei, LIU Xiaopeng, LUO Zhunchen, TAN Yushan
Information Research Center of Military Science, PLA Academy of Military Science, Beijing 100142, China
Abstract (Chinese): Open-source internet channels contain vast defense science and technology information resources and are an important data source for obtaining high-value military intelligence. Open information extraction (OpenIE) in the defense technology field aims to extract subject-predicate-object-complement (SAO-C) tuples from these massive information resources, which is of great significance for ontology induction and knowledge graph construction in this field. However, compared with information extraction in other domains, OpenIE in the defense technology field faces problems such as overlapping and nested tuples, long entity spans that are difficult to recognize, and a lack of domain-annotated data. This paper proposes a two-stage open information extraction method for the defense technology field: it first extracts predicates with a sequence labeling algorithm based on a pretrained language model, and then introduces a multihead attention mechanism to learn to predict argument boundaries. Combining domain expert knowledge, an annotated dataset for the defense technology field was constructed using an entity-boundary-based labeling strategy. Experiments on this dataset show that the F1 score of the proposed method exceeds that of the long short-term memory with conditional random field (LSTM+CRF) baseline by 3.92% and 16.67 percentage points in the two stages, respectively.
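The SAO-C tuples described above can be represented as character spans over the source sentence. Below is a minimal sketch of such a tuple type; the field names and the example sentence are illustrative assumptions, not the paper's actual data schema:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SAOCTuple:
    """A subject-predicate-object(-complement) tuple with character spans."""
    sentence: str
    subject: Tuple[int, int]      # [start, end) character offsets
    predicate: Tuple[int, int]
    obj: Tuple[int, int]
    complement: Optional[Tuple[int, int]] = None  # SAO-C complement is optional

    def texts(self):
        """Recover the surface text of each argument from its span."""
        spans = [self.subject, self.predicate, self.obj]
        if self.complement is not None:
            spans.append(self.complement)
        return [self.sentence[s:e] for s, e in spans]

# Hypothetical example sentence, not from the paper's dataset.
t = SAOCTuple("DARPA funds hypersonic research in 2023",
              subject=(0, 5), predicate=(6, 11), obj=(12, 31),
              complement=(32, 39))
print(t.texts())  # → ['DARPA', 'funds', 'hypersonic research', 'in 2023']
```

Span-based storage like this is what makes the paper's entity-boundary annotation strategy and boundary-prediction training signal straightforward to derive.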
Keywords (Chinese): defense technology; open information extraction; subject-predicate-object-complement (SAO-C) structure; knowledge graph; pretrained language model
Abstract: [Objective] The abundant open-source information resources available on the internet about defense technology are an important data source for obtaining high-value military intelligence. Open information extraction (OpenIE) in the defense technology field aims to extract structured tuples containing a subject, predicate, object, and other arguments from this massive volume of information. The technology has important implications for ontology induction and knowledge graph construction in the defense technology domain. However, while information extraction in the general domain yields good results, OpenIE in the defense technology domain faces several challenges, such as a lack of domain-annotated data, overlapping and nested arguments, and long entity spans that are difficult to recognize. [Methods] This paper proposes an annotation strategy based on entity boundaries and, drawing on the experience of domain experts, constructs an annotated dataset for the defense technology field. Furthermore, a two-stage open information extraction method is proposed that uses a sequence labeling algorithm based on a pretrained language model to extract predicates and a multihead attention mechanism to learn argument-boundary prediction. In the first stage, the input sentence is converted into the sequence <[CLS], input sentence, [SEP]> and encoded with a pretrained language model to obtain hidden-state representations. On top of these representations, a conditional random field (CRF) layer predicts the predicate positions, i.e., the BIO label of each token.
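The first stage described above is standard BIO sequence labeling with a CRF decoder. The sketch below shows only the CRF's Viterbi decoding step over hand-made emission scores; in the paper those scores come from the pretrained language model encoder, and the transition scores are learned rather than fixed as here:

```python
import numpy as np

TAGS = ["O", "B-PRED", "I-PRED"]

def viterbi(emissions: np.ndarray, transitions: np.ndarray) -> list:
    """emissions: (T, K) token-tag scores; transitions: (K, K) tag-tag scores."""
    T, K = emissions.shape
    dp = np.full((T, K), -np.inf)        # best path score ending in each tag
    back = np.zeros((T, K), dtype=int)   # backpointers for path recovery
    dp[0] = emissions[0]
    for t in range(1, T):
        # scores[i, j] = best path ending in tag i at t-1, moving to tag j
        scores = dp[t - 1][:, None] + transitions + emissions[t][None, :]
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0)
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [TAGS[k] for k in reversed(path)]

# Transition scores forbid the illegal O -> I-PRED jump.
trans = np.zeros((3, 3)); trans[0, 2] = -1e4
# Hand-made scores for the tokens ["the", "missile", "intercepted", "the", "target"]
emis = np.array([[2., 0., 0.],
                 [2., 0., 0.],
                 [0., 3., 0.],   # "intercepted" scores highest as B-PRED
                 [2., 0., 0.],
                 [2., 0., 0.]])
print(viterbi(emis, trans))  # → ['O', 'O', 'B-PRED', 'O', 'O']
```

The CRF's value over per-token argmax is exactly the transition matrix: it lets the decoder rule out invalid label sequences such as an I-PRED with no preceding B-PRED.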
In the second stage, each predicate predicted in the first stage is concatenated with the original sentence to form the sequence <[CLS], predicate, [SEP], input sentence, [SEP]>, which is again encoded with a pretrained language model. The resulting representation is fed to a multihead pointer network that predicts the argument positions; the predicted positions are compared against the gold positions to compute the cross-entropy loss. Finally, the outputs of the predicate and argument extraction models are combined to obtain complete tuples. [Results] Extensive experiments on a self-built annotated dataset in the defense technology field show the following. (1) In predicate extraction, the proposed method improves the F1 score by 3.92% over LSTM-based methods and by more than 10% over syntactic-analysis methods. (2) In argument extraction, it improves the F1 score by more than 16% over LSTM-based methods and by about 11% over the BERT+CRF method. [Conclusions] The proposed two-stage method handles overlapping arguments and the extraction of long entity spans, addressing shortcomings of existing open information extraction methods. The experimental analysis on the self-built annotated dataset confirms the method's effectiveness.
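The second stage's multihead pointer mechanism can be sketched as one start scorer and one end scorer per argument role, each a softmax over positions in the encoder's hidden states. The weights below are random stand-ins for trained parameters, so the extracted spans only illustrate the mechanism, not the paper's results:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, H = 8, 16                            # sequence length, hidden size
hidden = rng.standard_normal((T, H))    # stand-in for PLM encoder outputs

ROLES = ["subject", "object", "complement"]
spans = {}
for role in ROLES:
    w_start = rng.standard_normal(H)    # per-role start-position scorer
    w_end = rng.standard_normal(H)      # per-role end-position scorer
    p_start = softmax(hidden @ w_start)
    p_end = softmax(hidden @ w_end)
    s = int(p_start.argmax())
    e = int(p_end[s:].argmax()) + s     # constrain end >= start
    spans[role] = (s, e)

for role, (s, e) in spans.items():
    assert 0 <= s <= e < T              # every span lies inside the sequence
print(spans)
```

Because each role has its own pointer head, the same token can fall inside several predicted spans, which is how a pointer formulation copes with the overlapping-argument problem the abstract describes.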
Key words: defense technology; open information extraction; subject-verb-object complement; knowledge graph; pretrained language model
Received: 2023-03-20; Published online: 2023-08-19
Supported by: the Young Scientists Fund of the National Natural Science Foundation of China (No. 62006243)
About the author: HU Minghao (1990-), male, senior engineer. E-mail: huminghao16@gmail.com
Cite this article:
HU Minghao, WANG Fang, XU Xiantao, LUO Wei, LIU Xiaopeng, LUO Zhunchen, TAN Yushan. Two-stage open information extraction method for the defence technology field[J]. Journal of Tsinghua University (Science and Technology), 2023, 63(9): 1309-1316.
Link to this article:
http://jst.tsinghuajournals.com/CN/10.16511/j.cnki.qhdxxb.2023.21.010  or  http://jst.tsinghuajournals.com/CN/Y2023/V63/I9/1309