Stance detection via sentiment information and neural network model

Qingying SUN, Zhongqing WANG, Shoushan LI, Qiaoming ZHU, Guodong ZHOU

Front. Comput. Sci. ›› 2019, Vol. 13 ›› Issue (1): 127-138. DOI: 10.1007/s11704-018-7150-9

RESEARCH ARTICLE

Abstract

Stance detection aims to automatically determine whether the author of a post is in favor of or against a given target. In principle, the sentiment expressed in a post strongly influences its stance. In this study, we leverage the sentiment information of a post to improve the performance of stance detection. However, conventional discrete models with sentiment features can cause error propagation. We therefore propose a joint neural network model that predicts the stance and sentiment of a post simultaneously, because a neural network can learn the representation and the interaction between stance and sentiment collectively. Specifically, we first learn a deep shared representation between stance and sentiment information, and then use a neural stacking model to leverage the sentiment information for the stance detection task. Empirical studies demonstrate the effectiveness of the proposed joint neural model.
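The abstract describes a shared representation plus a neural stacking step, but not the exact layers. The following is a minimal PyTorch sketch of that general idea, not the authors' architecture: a shared BiLSTM encoder feeds a sentiment head, and the stance head is stacked on the shared representation concatenated with the sentiment output. The encoder type, mean pooling, layer sizes, and three-way class counts are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class JointStanceSentiment(nn.Module):
    """Hypothetical joint model: shared encoder, sentiment head,
    and a stance head stacked on top of the sentiment evidence."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128,
                 num_stance=3, num_sentiment=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared representation learned for both tasks
        self.shared_lstm = nn.LSTM(embed_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.5)
        # Sentiment head on the shared representation
        self.sentiment_fc = nn.Linear(2 * hidden_dim, num_sentiment)
        # Stance head stacked on shared representation + sentiment output
        self.stance_fc = nn.Linear(2 * hidden_dim + num_sentiment, num_stance)

    def forward(self, token_ids):
        emb = self.embed(token_ids)              # (batch, time, embed_dim)
        out, _ = self.shared_lstm(emb)           # (batch, time, 2*hidden_dim)
        shared = self.dropout(out.mean(dim=1))   # mean-pool over time steps
        sentiment_logits = self.sentiment_fc(shared)
        # Neural stacking: sentiment evidence feeds the stance prediction
        stance_logits = self.stance_fc(
            torch.cat([shared, sentiment_logits], dim=-1))
        return stance_logits, sentiment_logits

# Usage sketch: joint training with a summed loss over both tasks
model = JointStanceSentiment(vocab_size=10000)
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

tokens = torch.randint(0, 10000, (4, 20))   # dummy batch of 4 posts
stance_y = torch.randint(0, 3, (4,))
sentiment_y = torch.randint(0, 3, (4,))

stance_logits, sentiment_logits = model(tokens)
loss = criterion(stance_logits, stance_y) + criterion(sentiment_logits, sentiment_y)
loss.backward()
optimizer.step()
```

Joint training of the two heads is one way to realize "learning representation and interaction collectively"; the stacking connection is what lets sentiment information flow into the stance decision without the error propagation of a two-stage pipeline.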

Keywords

natural language processing / machine learning / stance detection

Cite this article

Qingying SUN, Zhongqing WANG, Shoushan LI, Qiaoming ZHU, Guodong ZHOU. Stance detection via sentiment information and neural network model. Front. Comput. Sci., 2019, 13(1): 127-138. DOI: 10.1007/s11704-018-7150-9



RIGHTS & PERMISSIONS

Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature

