HACAN: a hierarchical answer-aware and context-aware network for question generation

Ruijun SUN, Hanqing TAO, Yanmin CHEN, Qi LIU

Front. Comput. Sci., 2024, 18(5): 185321. DOI: 10.1007/s11704-023-2246-2
Artificial Intelligence
RESEARCH ARTICLE


Abstract

Question Generation (QG) is the task of generating questions from a given context. Most existing methods rely on Recurrent Neural Networks (RNNs) and take passage-level input to provide richer detail, but they suffer severely from problems such as gradient vanishing and ineffective information utilization. In practice, reasonably extracting useful information from a given context is closer to what we actually do when asking questions, especially in educational scenarios. To that end, in this paper we propose a novel Hierarchical Answer-Aware and Context-Aware Network (HACAN) that constructs a high-quality passage representation and balances sentence-level and passage-level information. Specifically, a Hierarchical Passage Encoder (HPE) is proposed to build an answer-aware and context-aware passage representation using a multi-hop reasoning strategy. Then, drawing inspiration from how humans actually pose questions, we design a Hierarchical Passage-aware Decoder (HPD) that determines when to utilize the passage information. We conduct extensive experiments on the SQuAD dataset, where the results verify the effectiveness of our model in comparison with several baselines.
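The abstract names the two components, HPE and HPD, without implementation detail. As a rough illustration only, the following PyTorch sketch shows one plausible reading of the ideas described: sentence-level encoding feeding a passage-level encoder, a multi-hop attention loop over sentence states, and a decoder gate that decides per step when to draw on passage-level context. Every dimension, the answer-span upweighting, the dot-product attention, and the sigmoid gate are our assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalPassageEncoder(nn.Module):
    """HPE-style encoder sketch: sentence-level BiGRUs feed a passage-level
    BiGRU, and multi-hop attention refines a passage summary. Dimensions,
    answer upweighting, and the hop formula are illustrative assumptions."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=128, hops=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.sent_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.pass_rnn = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)
        self.hop_proj = nn.Linear(2 * hid_dim, 2 * hid_dim)
        self.hops = hops

    def forward(self, passage, answer_mask):
        # passage: (batch, n_sents, sent_len) token ids
        # answer_mask: same shape, 1.0 on tokens inside the answer span
        b, n, l = passage.shape
        word_h, _ = self.sent_rnn(self.embed(passage.view(b * n, l)))
        word_h = word_h.view(b, n, l, -1)            # (b, n, l, 2h)
        # answer-aware sentence vectors: weighted mean, answer tokens upweighted
        w = 1.0 + answer_mask.unsqueeze(-1)
        sent_v = (word_h * w).sum(2) / w.sum(2)      # (b, n, 2h)
        pass_h, _ = self.pass_rnn(sent_v)            # context-aware sentence states
        # multi-hop reasoning: repeatedly re-attend over the sentence states
        q = pass_h.mean(1)                           # initial summary (b, 2h)
        for _ in range(self.hops):
            scores = torch.bmm(pass_h, self.hop_proj(q).unsqueeze(-1)).squeeze(-1)
            q = torch.bmm(F.softmax(scores, 1).unsqueeze(1), pass_h).squeeze(1)
        return word_h.view(b, n * l, -1), pass_h, q


class HierarchicalPassageDecoder(nn.Module):
    """HPD-style decoder sketch: a sigmoid gate decides, at each step, how
    much passage-level context to mix into the word-level context, i.e.
    when to utilize the passage information."""

    def __init__(self, vocab_size, emb_dim=128, ctx_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.GRUCell(emb_dim + ctx_dim, ctx_dim)
        self.gate = nn.Linear(2 * ctx_dim, 1)
        self.out = nn.Linear(2 * ctx_dim, vocab_size)

    def attend(self, state, mem):
        # dot-product attention of the decoder state over a memory (b, T, d)
        scores = torch.bmm(mem, state.unsqueeze(-1)).squeeze(-1)
        return torch.bmm(F.softmax(scores, 1).unsqueeze(1), mem).squeeze(1)

    def step(self, y_prev, state, word_h, pass_h):
        c_word = self.attend(state, word_h)          # fine-grained word context
        c_pass = self.attend(state, pass_h)          # passage-level context
        g = torch.sigmoid(self.gate(torch.cat([state, c_word], -1)))
        ctx = g * c_pass + (1 - g) * c_word          # gated mixture of contexts
        state = self.cell(torch.cat([self.embed(y_prev), ctx], -1), state)
        return self.out(torch.cat([state, ctx], -1)), state


# Toy usage: one decoding step over a two-sentence passage.
enc = HierarchicalPassageEncoder(vocab_size=1000)
dec = HierarchicalPassageDecoder(vocab_size=1000)
passage = torch.randint(0, 1000, (1, 2, 6))
mask = torch.zeros(1, 2, 6)
mask[0, 0, 2:4] = 1.0                                # answer span in sentence 0
word_h, pass_h, summary = enc(passage, mask)
logits, state = dec.step(torch.tensor([1]), summary, word_h, pass_h)
```

The gate in `step` is the key hypothetical device here: it operationalizes the abstract's "determines when to utilize the passage information" as a learned soft switch between sentence-level and passage-level attention contexts.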


Keywords

question generation / natural language generation / natural language processing / sequence to sequence

Cite this article

Ruijun SUN, Hanqing TAO, Yanmin CHEN, Qi LIU. HACAN: a hierarchical answer-aware and context-aware network for question generation. Front. Comput. Sci., 2024, 18(5): 185321. https://doi.org/10.1007/s11704-023-2246-2

Ruijun Sun received the BS degree in computer science from Hefei University of Technology, China in 2018. He is currently pursuing the PhD degree in the Department of Computer Science and Technology at the University of Science and Technology of China (USTC), China. His research interests include natural language processing, data mining, and representation learning.

Hanqing Tao received the BS degree in electrical engineering and automation from China University of Mining and Technology, China in 2017. He is currently working toward the PhD degree in the Department of Computer Science and Technology at the University of Science and Technology of China (USTC), China. His research interests include data mining, deep learning, natural language processing, and representation learning. He has published several papers in refereed journals and conference proceedings, such as IEEE TKDE, IEEE TAC, AAAI, ICDM, ICME, and NLPCC.

Yanmin Chen is currently pursuing the PhD degree in the School of Computer Science and Technology, University of Science and Technology of China, China. Her research interests include natural language processing, text mining, and user credit mining.

Qi Liu received the PhD degree from the University of Science and Technology of China (USTC), China in 2013. He is currently a professor in the School of Computer Science and Technology at USTC, China. His general area of research is data mining and knowledge discovery. He has published prolifically in refereed journals and conference proceedings (e.g., TKDE, TOIS, KDD). He is an Associate Editor of IEEE TBD and Neurocomputing. He was the recipient of the KDD 2018 Best Student Paper Award and the ICDM 2011 Best Research Paper Award. He was also the recipient of the China Outstanding Youth Science Foundation in 2019.


Acknowledgements

This research was partially supported by the National Key R&D Program of China (No. 2021YFF0901003).

RIGHTS & PERMISSIONS

© 2024 Higher Education Press