专题:大数据

结合提示学习和Qwen大语言模型的裁判文书摘要方法

  • 李佳沂,
  • 黄瑞章,
  • 陈艳平,
  • 林川,
  • 秦永彬
  • 1. 贵州大学 计算机科学与技术学院, 贵阳 550025;
    2. 贵州大学 公共大数据国家重点实验室, 贵阳 550025

收稿日期: 2024-07-01

  网络出版日期: 2024-11-22

基金资助

国家自然科学基金资助项目(62066008);贵州省科学技术基金重点资助项目(黔科合基础〔2020〕1Z055);贵州省科学技术基金重点资助项目(黔科合重大专项字[2024]003)

Method for judicial document summarization by combining prompt learning and Qwen large language models

  • LI Jiayi,
  • HUANG Ruizhang,
  • CHEN Yanping,
  • LIN Chuan,
  • QIN Yongbin
  • 1. Text Computing & Cognitive Intelligence Engineering Research Center of National Education Ministry, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China;
    2. State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China

Received date: 2024-07-01

  Online published: 2024-11-22

摘要

尽管大语言模型在新闻、艺术等领域的文本摘要任务上取得了良好的效果,但由于大语言模型缺乏对司法领域知识的学习,同时难以理解裁判文书的结构特征和逻辑关系,导致生成的裁判文书摘要质量不佳。该文提出结合提示学习和Qwen大语言模型的裁判文书摘要方法,将裁判文书数据作为SFT (supervised fine-tuning)技术对大语言模型微调的输入,增强其法律领域适用性;同时设计融入结构信息与角色指令的提示模板,以优化摘要生成,使其更精准地反映文书结构特征与逻辑关系。实验结果表明,该方法在ROUGE-1、ROUGE-2和ROUGE-L的F1值上比基线模型分别提升了21.44%、28.50%和28.97%,说明大语言模型经裁判文书数据微调并引入结构信息后,在裁判文书摘要任务中展现了卓越的性能与巨大的应用潜力。
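The fine-tuning setup described above (supervised fine-tuning on judgment documents, with prompt templates that carry structural information and role instructions, formatted as question-answer pairs) can be sketched as follows. The field names, section labels, and template wording are illustrative assumptions, not the authors' exact template:

```python
import json

# Hypothetical sketch of one SFT training record. The paper fine-tunes Qwen
# on judgment documents formatted as question-answer pairs; the prompt
# injects a role instruction and the document's structural sections. The
# section names and instruction text below are assumptions for illustration.

ROLE_INSTRUCTION = (
    "You are a legal assistant. Summarize the judgment document below, "
    "preserving its structure: case background, court findings, and ruling."
)

def build_sft_record(doc_sections: dict, reference_summary: str) -> dict:
    """Assemble one question-answer fine-tuning record."""
    # Mark each structural section explicitly so the model can attend to it.
    body = "\n".join(f"[{name}] {text}" for name, text in doc_sections.items())
    prompt = f"{ROLE_INSTRUCTION}\n\n{body}\n\nSummary:"
    return {"question": prompt, "answer": reference_summary}

record = build_sft_record(
    {"Background": "The plaintiff sued for unpaid wages ...",
     "Findings": "The court found the labor contract valid ...",
     "Ruling": "The defendant shall pay the outstanding wages ..."},
    "The court ordered the defendant to pay the unpaid wages ...",
)
print(json.dumps(record, ensure_ascii=False))
```

A corpus of such records, one per judgment document, would then be fed to a standard SFT pipeline.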

本文引用格式

李佳沂, 黄瑞章, 陈艳平, 林川, 秦永彬. 结合提示学习和Qwen大语言模型的裁判文书摘要方法[J]. 清华大学学报(自然科学版), 2024, 64(12): 2007-2018. DOI: 10.16511/j.cnki.qhdxxb.2024.21.028

Abstract

[Objective] The increasing maturity of large language model technology has facilitated its widespread application in downstream tasks across various vertical fields. Large language models perform well on text summarization tasks in general domains, such as news and art. However, the highly specialized language style of the judicial field, together with the structural and logical complexity unique to judicial documents, makes it difficult for large language models to generate high-quality judicial document summaries. This study combines prompt learning with large language models to explore their performance on judicial document summarization. Prompt templates containing structural information, together with the judicial documents themselves, serve as inputs for fine-tuning, so that the fine-tuned model can generate summaries that conform to judicial language style and reflect the structural and logical complexity of judicial documents. [Methods] This study proposes a judicial document summarization method that combines prompt learning with the Qwen large language model. Judicial document data are used to fine-tune the model via supervised fine-tuning (SFT), enhancing its applicability in the judicial field. Simultaneously, prompt templates incorporating structural information and role instructions are designed to optimize summary generation so that it more accurately reflects the structural characteristics and logical relationships of the documents. In accordance with the format of the model's pretraining data, the fine-tuning data are constructed as question-answer pairs. [Results] Experimental results show that the proposed method improves the F1 scores of the baseline model by 21.44%, 28.50%, and 28.97% on ROUGE-1, ROUGE-2, and ROUGE-L, respectively, and outperforms all comparison models.
The ablation experiment shows that summary generation with prompt learning outperforms the variant without prompt learning on all metrics, confirming that prompt learning significantly enhances the quality of the summaries generated by the large language model. The case study further shows that once prompt learning strengthens the model's perception of a judicial document's structural information, the generated summary better captures and retains the document's key information, and its language style is closer to that of a real judicial document summary, further demonstrating the effectiveness of the proposed method. [Conclusions] This study integrates the structural information of judicial documents into summary generation by a large language model in the form of prompt templates. Prompt templates containing structural information help the model focus on the key information in a judicial document and capture deeper semantic and logical relationships. The results demonstrate that after the large language model is fine-tuned on judicial document data and structural information is introduced, it exhibits excellent performance and great application potential on the judicial document summarization task. The proposed method effectively enhances the capability of large language models for judicial document summarization.
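The reported ROUGE-1/2/L F1 gains can be made concrete with a minimal sketch of ROUGE-N F1 (Lin, 2004) over pre-tokenized text. A full evaluation would use an established ROUGE implementation and include the ROUGE-L longest-common-subsequence variant, omitted here for brevity:

```python
from collections import Counter

# Minimal ROUGE-N F1 sketch: count clipped n-gram overlap between a
# candidate summary and a reference summary, then combine the resulting
# precision and recall into an F1 score.

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(candidate, reference, n=1):
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

cand = "the court ordered the defendant to pay".split()
ref = "the court ordered payment by the defendant".split()
print(round(rouge_n_f1(cand, ref, 1), 3))  # → 0.714
```

The percentage improvements in the abstract are relative gains of these F1 scores over the baseline model's scores.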

参考文献

[1] 最高人民法院. 最高人民法院关于人民法院在互联网公布裁判文书的规定(2016修订)[EB/OL]. (2016-10-01)[2024-07-30]. https://pkulaw.com/en_law/e9ea61f2aaa98dfabdfb.html. Supreme People's Court. Provisions of the supreme people's court on the publication of judgments on the internet by the people's courts (2016 revision)[EB/OL]. (2016-10-01)[2024-07-30]. https://pkulaw.com/en_law/e9ea61f2aaa98dfabdfb.html. (in Chinese)
[2] SUN Z X, YU W J, SI Z H, et al. Explainable legal case matching via graph optimal transport[J]. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(6): 2461-2475.
[3] 余帅, 宋玉梅, 秦永彬, 等. 基于审判逻辑步骤的裁判文书摘要生成方法[J]. 计算机工程与应用, 2024, 60(4): 113-121. YU S, SONG Y M, QIN Y B, et al. Method for generating summary of judgment documents based on trial logic steps[J]. Computer Engineering and Applications, 2024, 60(4):113-121.(in Chinese)
[4] 李锦烨, 黄瑞章, 秦永彬, 等. 基于反绎学习的裁判文书量刑情节识别[J]. 计算机应用, 2022, 42(6): 1802-1807. LI J Y, HUANG R Z, QIN Y B, et al. Recognition of sentencing circumstances in adjudication documents based on abductive learning[J]. Journal of Computer Applications, 2022, 42(6):1802-1807. (in Chinese)
[5] WIDYASSARI A P, RUSTAD S, SHIDIK G F, et al. Review of automatic text summarization techniques & methods[J]. Journal of King Saud University-Computer and Information Sciences, 2022, 34(4): 1029-1046.
[6] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020: 1877-1901.
[7] LEWIS P, PEREZ E, PIKTUS A, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020: 9459-9474.
[8] LUHN H P. The automatic creation of literature abstracts[J]. IBM Journal of Research and Development, 1958, 2(2): 159-165.
[9] MIHALCEA R, TARAU P. TextRank: Bringing order into text[C]//Proceedings of 2004 Conference on Empirical Methods in Natural Language Processing. Barcelona, Spain: ACL, 2004: 404-411.
[10] PAGE L, BRIN S, MOTWANI R, et al. The PageRank citation ranking: Bringing order to the web[R]. Palo Alto: Stanford InfoLab, 1999.
[11] XIAO W, CARENINI G. Extractive summarization of long documents by combining global and local context[C]//Proceedings of 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP- IJCNLP). Hong Kong, China: ACL, 2019: 3011-3021.
[12] KIM Y, RUSH A M. Sequence-level knowledge distillation[C]//Proceedings of 2016 Conference on Empirical Methods in Natural Language Processing. Austin, USA: ACL, 2016: 1317-1327.
[13] KIM M, SINGH MOIRANGTHEM D, LEE M. Towards abstraction from extraction: Multiple timescale gated recurrent unit for summarization[C]//Proceedings of the 1st Workshop on Representation Learning for NLP. Berlin, Germany: ACL, 2016: 70-77.
[14] SEE A, LIU P J, MANNING C D. Get to the point: Summarization with pointer-generator networks[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vancouver, Canada: ACL, 2017: 1073-1083.
[15] FARZINDAR A, LAPALME G. LetSum, an automatic legal text summarizing system[M]//GORDON T F. Legal Knowledge and Information Systems: JURIX 2004: The Seventeenth Annual Conference. Berlin, Germany: IOS Press, 2004, 120: 11-18.
[16] HACHEY B, GROVER C. Extractive summarisation of legal texts[J]. Artificial Intelligence and Law, 2007, 14(4): 305-345.
[17] POLSLEY S, JHUNJHUNWALA P, HUANG R H. CaseSummarizer: A system for automated summarization of legal texts[C]//Proceedings of the COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations. Osaka, Japan: ACL, 2016: 258-262.
[18] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020: 1877-1901.
[19] GOYAL T, LI J J, DURRETT G. News summarization and evaluation in the era of GPT-3[EB/OL]. (2023-05-26)[2024-07-30]. https://arxiv.org/pdf/2209.12356.
[20] ZHANG T Y, LADHAK F, DURMUS E, et al. Benchmarking large language models for news summarization[J]. Transactions of the Association for Computational Linguistics, 2024, 12: 39-57.
[21] YANG X J, LI Y, ZHANG X L, et al. Exploring the limits of ChatGPT for query or aspect-based text summarization[EB/OL]. (2023-02-16)[2024-07-30]. https://arxiv.org/pdf/2302.08081.
[22] DAN J P, HU W X, WANG Y M. Enhancing legal judgment summarization with integrated semantic and structural information[J/OL]. Artificial Intelligence and Law, (2023-11-26)[2024-07-30]. https://link.springer.com/article/10.1007/s10506-023-09381-8.
[23] CHEN B H, ZHANG Z F, LANGRENÉ N, et al. Unleashing the potential of prompt engineering in large language models: A comprehensive review[EB/OL]. (2024-06-18)[2024-07-30]. https://arxiv.org/pdf/2310.14735.
[24] BAI J Z, BAI S, CHU Y F, et al. Qwen technical report[EB/OL]. (2023-09-28)[2024-07-30]. https://arxiv.org/pdf/2309.16609.
[25] LIN C Y. ROUGE: A package for automatic evaluation of summaries[C]//Proceedings of Text Summarization Branches Out. Barcelona, Spain: ACL, 2004: 74-81.
[26] BAI J Z, BAI S, CHU Y F, et al. Qwen technical report[EB/OL]. (2023-09-28)[2024-07-30]. https://arxiv.org/pdf/2309.16609.
[27] ZHU C G, LIU Y, MEI J, et al. MediaSum: A large-scale media interview dataset for dialogue summarization[C]//Proceedings of 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Online: ACL, 2021: 5927-5934.
[28] MIHALCEA R, TARAU P. TextRank: Bringing order into text[C]//Proceedings of 2004 Conference on Empirical Methods in Natural Language Processing. Barcelona, Spain: ACL, 2004: 404-411.
[29] LIU Y. Fine-tune BERT for extractive summarization[EB/OL]. (2019-09-05)[2024-07-30]. https://arxiv.org/pdf/1903.10318.
[30] DONG L, YANG N, WANG W H, et al. Unified language model pre-training for natural language understanding and generation[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2019: 32.
[31] SEE A, LIU P J, MANNING C D. Get to the point: Summarization with pointer-generator networks[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vancouver, Canada: ACL, 2017: 1073-1083.
[32] 魏鑫炀, 秦永彬, 唐向红, 等. 融合法条的司法裁判文书摘要生成方法[J]. 计算机工程与设计, 2023, 44(9): 2844-2850. DOI: 10.16208/j.issn1000-7024.2023.09.037. WEI X Y, QIN Y B, TANG X H, et al. Method of abstracting judgement document combined with law[J]. Computer Engineering and Design, 2023, 44(9): 2844-2850. DOI: 10.16208/j.issn1000-7024.2023.09.037. (in Chinese)
[33] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1(Long and Short Papers). Minneapolis, USA: ACL, 2019: 4171-4186.
[34] DAN J P, HU W X, WANG Y M. Enhancing legal judgment summarization with integrated semantic and structural information[J/OL]. Artificial Intelligence and Law, (2023-11-26)[2024-07-30]. https://link.springer.com/article/10.1007/s10506-023-09381-8.
[35] YOUNG A, CHEN B, LI C, et al. Yi: Open foundation models by 01.AI[EB/OL]. (2024-03-07)[2024-07-30]. https://arxiv.org/pdf/2403.04652.
[36] DAN J P, HU W X, WANG Y M. Enhancing legal judgment summarization with integrated semantic and structural information[J/OL]. Artificial Intelligence and Law, (2023-11-26)[2024-07-30]. https://link.springer.com/article/10.1007/s10506-023-09381-8.
[37] OpenAI, ACHIAM J, ADLER S, et al. GPT-4 technical report[EB/OL]. (2024-03-04)[2024-07-30]. https://arxiv.org/pdf/2303.08774.
[38] PU D Q, WANG Y F, DEMBERG V. Incorporating distributions of discourse structure for long document abstractive summarization[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Toronto, Canada: ACL, 2023: 5574-5590.