Document structure model for survey generation using neural network

Huiyan XU, Zhongqing WANG, Yifei ZHANG, Xiaolan WENG, Zhijian WANG, Guodong ZHOU

Front. Comput. Sci. ›› 2021, Vol. 15 ›› Issue (4) : 154325 DOI: 10.1007/s11704-020-9366-8
RESEARCH ARTICLE


Abstract

Survey generation aims to produce a summary of a scientific topic from a set of related papers. The structure of these papers strongly influences the generative process, especially the relationships between sentences and between paragraphs; in principle, document structure can determine the quality of the summary. We therefore exploit the structure of papers to leverage contextual information among sentences within paragraphs when generating a survey. In particular, we present a neural document structure model for survey generation: taking paragraphs as units, we model the sentences within each paragraph and then employ a hierarchical model to learn the structure among sentences, which is used to select important and informative sentences for the survey. We evaluate our model on a scientific document dataset, and the experimental results show that the model is effective and that the generated surveys are informative and readable.
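The abstract does not give architectural details, but the pipeline it describes (encode the sentences within each paragraph, model the structure among the resulting sentence representations, then score sentences for selection) can be illustrated with a minimal sketch. The Python/PyTorch code below is an illustrative assumption of such a hierarchical extractive selector, not the authors' implementation; the class name HierarchicalSentenceSelector, the layer choices, and the dimensions are all hypothetical.

import torch
import torch.nn as nn

class HierarchicalSentenceSelector(nn.Module):
    # Sketch: word-level encoder -> sentence vectors -> sentence-level encoder -> selection scores.
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Word-level BiLSTM builds a representation for each sentence.
        self.word_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Sentence-level BiLSTM models contextual structure among the sentences of a paragraph.
        self.sent_lstm = nn.LSTM(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)
        # Linear layer scores each sentence for inclusion in the survey.
        self.scorer = nn.Linear(2 * hid_dim, 1)

    def forward(self, token_ids):
        # token_ids: LongTensor of shape (num_sentences, max_sentence_length) for one paragraph.
        emb = self.embedding(token_ids)                   # (S, L, emb_dim)
        word_states, _ = self.word_lstm(emb)              # (S, L, 2*hid_dim)
        sent_vecs = word_states.mean(dim=1)               # (S, 2*hid_dim), mean-pooled sentence vectors
        ctx, _ = self.sent_lstm(sent_vecs.unsqueeze(0))   # (1, S, 2*hid_dim), context-aware sentence vectors
        scores = self.scorer(ctx.squeeze(0)).squeeze(-1)  # (S,)
        return torch.sigmoid(scores)                      # per-sentence selection probabilities

At inference time, such a model would rank sentences by their predicted probabilities and concatenate the top-scoring ones, grouped by paragraph, to form the survey.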

Keywords

survey generation / contextual information / document structure

Cite this article

Huiyan XU, Zhongqing WANG, Yifei ZHANG, Xiaolan WENG, Zhijian WANG, Guodong ZHOU. Document structure model for survey generation using neural network. Front. Comput. Sci., 2021, 15(4): 154325. DOI: 10.1007/s11704-020-9366-8



RIGHTS & PERMISSIONS

Higher Education Press
