Monday, January 13, 2020

Sentiment and position-taking analysis of parliamentary debates: A systematic literature review

 


Abstract

Parliamentary and legislative debate transcripts provide access to information concerning the opinions, positions and policy preferences of elected politicians. 

These transcripts attract attention from researchers with a wide variety of backgrounds, from the political and social sciences to computer science. 

As a result, the problem of automatic sentiment and position-taking analysis has been tackled from different perspectives, using varying approaches and methods, and with relatively little collaboration or cross-pollination of ideas. 

The existing research is scattered across publications from various fields and venues. 

In this article we present the results of a systematic literature review of 61 studies, all of which address the automatic analysis of the sentiment and opinions expressed and positions taken by speakers in parliamentary (and other legislative) debates. 

In this review, we discuss the available research with regard to the aims and objectives of the researchers who work on these problems, the automatic analysis tasks they undertake, and the approaches and methods they use. 

We conclude by summarizing their findings, discussing the challenges of applying computational analysis to parliamentary debates, and suggesting possible avenues for further research.

REFERENCES:

1. Abercrombie, G., & Batista-Navarro, R. (2018). ‘Aye’ or ‘no’? Speech-level sentiment analysis of Hansard UK parliamentary debate transcripts. In: Proceedings of the eleventh international conference on language resources and evaluation (LREC-2018). European Language Resources Association (ELRA), Miyazaki, Japan. https://www.aclweb.org/anthology/L18-1659.

2. Abercrombie, G., & Batista-Navarro, R. T. (2018). Identifying opinion-topics and polarity of parliamentary debate motions. In: Proceedings of the 9th workshop on computational approaches to subjectivity, sentiment and social media analysis (pp. 280–285). Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/W18-6241. https://www.aclweb.org/anthology/W18-6241.

3. Ahmadalinezhad, M., & Makrehchi, M. (2018). Detecting agreement and disagreement in political debates. In R. Thomson, C. Dancy, A. Hyder, & H. Bisgin (Eds.), Social, cultural, and behavioral modeling (pp. 54–60). Cham: Springer.

4. Akhmedova, S., Semenkin, E., & Stanovov, V. (2018). Co-operation of biology related algorithms for solving opinion mining problems by using different term weighting schemes. In K. Madani, D. Peaucelle, & O. Gusikhin (Eds.), Informatics in control, automation and robotics: 13th international conference, ICINCO 2016, Lisbon, Portugal, 29–31 July 2016 (pp. 73–90). Cham: Springer. https://doi.org/10.1007/978-3-319-55011-4_4.

5. Allison, B. (2008). Sentiment detection using lexically-based classifiers. In P. Sojka, A. Horák, I. Kopeček, & K. Pala (Eds.), Text, speech and dialogue (pp. 21–28). Berlin: Springer.

6. Balahur, A., Kozareva, Z., & Montoyo, A. (2009). Determining the polarity and source of opinions expressed in political debates. In A. Gelbukh (Ed.), Computational linguistics and intelligent text processing (pp. 468–480). Berlin: Springer.

7. Bansal, M., Cardie, C., & Lee, L. (2008). The power of negative thinking: Exploiting label disagreement in the min-cut classification framework. In: Coling 2008: Companion volume: Posters (pp. 15–18). Coling 2008 Organizing Committee, Manchester, UK. https://www.aclweb.org/anthology/C08-2004.

8. Baturo, A., Dasandi, N., & Mikhaylov, S. J. (2017). Understanding state preferences with text as data: Introducing the UN General Debate corpus. Research and Politics, 4(2), 2053168017712821. https://doi.org/10.1177/2053168017712821.

9. Bhatia, S., & P, D. (2018). Topic-specific sentiment analysis can help identify political ideology. In: Proceedings of the 9th workshop on computational approaches to subjectivity, sentiment and social media analysis (pp. 79–84). Association for Computational Linguistics, Brussels, Belgium. https://doi.org/10.18653/v1/W18-6212. https://www.aclweb.org/anthology/W18-6212.

10. Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993–1022.


11. Bonica, A. (2016). A data-driven voter guide for US elections: Adapting quantitative measures of the preferences and priorities of political elites to help voters learn about candidates. Journal of the Social Sciences, 2(7), 11–32. https://doi.org/10.7758/RSF.2016.2.7.02. https://www.rsfjournal.org/content/2/7/11.

12. Budhwar, A., Kuboi, T., Dekhtyar, A., & Khosmood, F. (2018). Predicting the vote using legislative speech. In: Proceedings of the 19th annual international conference on digital government research: Governance in the data age, dg.o ’18 (pp. 35:1–35:10). ACM, New York, NY, USA. https://doi.org/10.1145/3209281.3209374.

13. Burfoot, C. (2008). Using multiple sources of agreement information for sentiment classification of political transcripts. In: Proceedings of the Australasian language technology association workshop 2008 (pp. 11–18). Hobart, Australia. https://www.aclweb.org/anthology/U08-1003.

14. Burfoot, C., Bird, S., & Baldwin, T. (2011). Collective classification of congressional floor-debate transcripts. In: Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies (pp. 1506–1515). Association for Computational Linguistics, Portland, Oregon, USA. https://www.aclweb.org/anthology/P11-1151.

15. Burford, C., Bird, S., & Baldwin, T. (2015). Collective document classification with implicit inter-document semantic relationships. In: Proceedings of the fourth joint conference on lexical and computational semantics (pp. 106–116). Association for Computational Linguistics, Denver, Colorado. https://doi.org/10.18653/v1/S15-1012. https://www.aclweb.org/anthology/S15-1012.

16. Chen, W., Zhang, X., Wang, T., Yang, B., & Li, Y. (2017). Opinion-aware knowledge graph for political ideology detection. In: Proceedings of the 26th international joint conference on artificial intelligence (pp. 3647–3653).

17. Diermeier, D., Godbout, J. F., Yu, B., & Kaufmann, S. (2012). Language and ideology in Congress. British Journal of Political Science, 42(1), 31–55.

18. Duthie, R., & Budzynska, K. (2018). A deep modular RNN approach for ethos mining. In: Proceedings of the twenty-seventh international joint conference on artificial intelligence (IJCAI-18) (pp. 4041–4047).

19. Dzieciątko, M. (2019). Application of text analytics to analyze emotions in the speeches. In E. Pietka, P. Badura, J. Kawa, & W. Wieclawek (Eds.), Information technology in biomedicine (pp. 525–536). Cham: Springer.

20. Frid-Nielsen, S. S. (2018). Human rights or security? Positions on asylum in European Parliament speeches. European Union Politics, 19(2), 344–362. https://doi.org/10.1177/1465116518755954.


21. Glavaš, G., Nanni, F., & Ponzetto, S. P. (2017). Unsupervised cross-lingual scaling of political texts. In: Proceedings of the 15th conference of the European chapter of the association for computational linguistics: Volume 2, short papers (pp. 688–693). Association for Computational Linguistics, Valencia, Spain. https://www.aclweb.org/anthology/E17-2109.

22. Glavaš, G., Nanni, F., & Ponzetto, S. P. (2019). Computational analysis of political texts: Bridging research efforts across communities. In: Proceedings of the 57th annual meeting of the association for computational linguistics: Tutorial abstracts (pp. 18–23). Association for Computational Linguistics, Florence, Italy. https://doi.org/10.18653/v1/P19-4004. https://www.aclweb.org/anthology/P19-4004.

23. Grimmer, J., & Stewart, B. M. (2013). Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political Analysis, 21(3), 267–297.

24. Hirst, G., Riabinin, Y., & Graham, J. (2010). Party status as a confound in the automatic classification of political speech by ideology. In: Proceedings of the 10th international conference on statistical analysis of textual data/10es Journées internationales d’Analyse statistique des Données Textuelles (JADT 2010), Rome (pp. 731–742).

25. Honkela, T., Korhonen, J., Lagus, K., & Saarinen, E. (2014). Five-dimensional sentiment analysis of corpora, documents and words. In T. Villmann, F. M. Schleif, M. Kaden, & M. Lange (Eds.), Advances in self-organizing maps and learning vector quantization (pp. 209–218). Cham: Springer.

26. Hopkins, D. J., & King, G. (2010). A method of automated nonparametric content analysis for social science. American Journal of Political Science, 54(1), 229–247. https://doi.org/10.1111/j.1540-5907.2009.00428.x.

27. Iliev, I. R., Huang, X., & Gel, Y. R. (2019). Political rhetoric through the lens of non-parametric statistics: Are our legislators that different? Journal of the Royal Statistical Society Series A (Statistics in Society), 182(2), 583–604. https://doi.org/10.1111/rssa.12421.

28. Iyyer, M., Enns, P., Boyd-Graber, J., & Resnik, P. (2014). Political ideology detection using recursive neural networks. In: Proceedings of the 52nd annual meeting of the association for computational linguistics (Volume 1: Long papers) (pp. 1113–1122). Association for Computational Linguistics, Baltimore, Maryland. https://doi.org/10.3115/v1/P14-1105. https://www.aclweb.org/anthology/P14-1105.

29. Jensen, J., Naidu, S., Kaplan, E., Wilse-Samson, L., Gergen, D., Zuckerman, M., & Spirling, A. (2012). Political polarization and the dynamics of political language: Evidence from 130 years of partisan speech [with comments and discussion]. Brookings Papers on Economic Activity, pp. 1–81.

30. Ji, Y., & Smith, N. A. (2017). Neural discourse structure for text categorization. In: Proceedings of the 55th annual meeting of the association for computational linguistics (Volume 1: Long papers) (pp. 996–1005). Association for Computational Linguistics, Vancouver, Canada. https://doi.org/10.18653/v1/P17-1092. https://www.aclweb.org/anthology/P17-1092.


31. Kaal, B., Maks, I., & van Elfrinkhof, A. (2014). From text to political positions: Text analysis across disciplines (Vol. 55). Philadelphia: John Benjamins Publishing Company.

32. Kapočiūtė-Dzikienė, J., & Krupavičius, A. (2014). Predicting party group from the Lithuanian parliamentary speeches. Information Technology and Control, 43(3), 321–332.

33. Kaufman, D., Khosmood, F., Kuboi, T., & Dekhtyar, A. (2018). Learning alignments from legislative discourse. In: Proceedings of the 19th annual international conference on digital government research: Governance in the data age, dg.o ’18 (pp. 119:1–119:2). ACM, New York, NY, USA. https://doi.org/10.1145/3209281.3209413.

34. Kim, I. S., Londregan, J., & Ratkovic, M. (2018). Estimating spatial preferences from votes and text. Political Analysis, 26(2), 210–229.

35. Lapponi, E., Søyland, M. G., Velldal, E., & Oepen, S. (2018). The Talk of Norway: A richly annotated corpus of the Norwegian parliament, 1998–2016. Language Resources and Evaluation, 52(3), 873–893. https://doi.org/10.1007/s10579-018-9411-5.

36. Laver, M., Benoit, K., & Garry, J. (2003). Extracting policy positions from political texts using words as data. American Political Science Review, 97(2), 311–331.

37. Lefait, G., & Kechadi, T. (2010). Analysis of deputy and party similarities through hierarchical clustering. In: 2010 fourth international conference on digital society (pp. 264–268). https://doi.org/10.1109/ICDS.2010.49.

38. Li, X., Chen, W., Wang, T., & Huang, W. (2017). Target-specific convolutional bi-directional LSTM neural network for political ideology analysis. In L. Chen, C. S. Jensen, C. Shahabi, X. Yang, & X. Lian (Eds.), Web and big data (pp. 64–72). Cham: Springer.

39. Liu, B. (2012). Sentiment analysis and opinion mining. Synthesis lectures on human language technologies (Vol. 5). San Rafael: Morgan & Claypool Publishers.

40. Lowe, W., & Benoit, K. (2013). Validating estimates of latent traits from textual data using human judgment as a benchmark. Political Analysis, 21(3), 298–313.

41. Martineau, J., Finin, T., Joshi, A., & Patel, S. (2009). Improving binary classification on text problems using differential word features. In: Proceedings of the 18th ACM conference on information and knowledge management, CIKM ’09 (pp. 2019–2024). ACM, New York, NY, USA. https://doi.org/10.1145/1645953.1646291.

42. Menini, S., Nanni, F., Ponzetto, S. P., & Tonelli, S. (2017). Topic-based agreement and disagreement in US electoral manifestos. In: Proceedings of the 2017 conference on empirical methods in natural language processing (pp. 2938–2944). Association for Computational Linguistics, Copenhagen, Denmark. https://doi.org/10.18653/v1/D17-1318. https://www.aclweb.org/anthology/D17-1318.


43. Menini, S., & Tonelli, S. (2016). Agreement and disagreement: Comparison of points of view in the political domain. In: Proceedings of COLING 2016, the 26th international conference on computational linguistics: Technical papers (pp. 2461–2470). The COLING 2016 Organizing Committee, Osaka, Japan. https://www.aclweb.org/anthology/C16-1232.

44. Mikhaylov, S., Laver, M., & Benoit, K. (2008). Coder reliability and misclassification in comparative manifesto project codings. In: 66th MPSA annual national conference.

45. Mohammad, S. M., Sobhani, P., & Kiritchenko, S. (2017). Stance and sentiment in tweets. ACM Transactions on Internet Technology, 17(3), 26:1–26:23. https://doi.org/10.1145/3003433.

46. Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine, 151(4), 264–269. https://doi.org/10.7326/0003-4819-151-4-200908180-00135.

47. Monroe, B. L., Colaresi, M. P., & Quinn, K. M. (2008). Fightin’ words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis, 16(4), 372–403.

48. Naderi, N., & Hirst, G. (2016). Argumentation mining in parliamentary discourse. In M. Baldoni, C. Baroglio, F. Bex, F. Grasso, N. Green, M. R. Namazi-Rad, M. Numao, & M. T. Suarez (Eds.), Principles and practice of multi-agent systems (pp. 16–25). Cham: Springer.

49. Nanni, F., Zirn, C., Glavaš, G., Eichorst, J., & Ponzetto, S. P. (2016). TopFish: Topic-based analysis of political position in US electoral campaigns. In: PolText 2016: The international conference on the advances in computational analysis of political text: Proceedings of the conference.

50. Nguyen, V. A., Boyd-Graber, J., Resnik, P., & Miler, K. (2015). Tea Party in the House: A hierarchical ideal point topic model and its application to Republican legislators in the 112th Congress. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (Volume 1: Long papers) (pp. 1438–1448). Association for Computational Linguistics, Beijing, China. https://doi.org/10.3115/v1/P15-1139. https://www.aclweb.org/anthology/P15-1139.

51. Nguyen, V. A., Ying, J. L., & Resnik, P. (2013). Lexical and hierarchical topic regression. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, & K. Q. Weinberger (Eds.), Advances in neural information processing systems 26 (pp. 1106–1114). Curran Associates Inc. http://papers.nips.cc/paper/5163-lexical-and-hierarchical-topic-regression.pdf.

52. Onyimadu, O., Nakata, K., Wilson, T., Macken, D., & Liu, K. (2014). Towards sentiment analysis on parliamentary debates in Hansard. In W. Kim, Y. Ding, & H. G. Kim (Eds.), Semantic technology (pp. 48–50). Cham: Springer.

53. Owen, E. (2017). Exposure to offshoring and the politics of trade liberalization: Debate and votes on free trade agreements in the US House of Representatives, 2001–2006. International Studies Quarterly, 61(2), 297–311. https://doi.org/10.1093/isq/sqx020.

54. Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval, 2(1–2), 1–135. https://doi.org/10.1561/1500000011.


55. Plantié, M., Roche, M., Dray, G., & Poncelet, P. (2008). Is a voting approach accurate for opinion mining? In I. Y. Song, J. Eder, & T. M. Nguyen (Eds.), Data warehousing and knowledge discovery (pp. 413–422). Berlin: Springer.

56. Proksch, S. O., Lowe, W., Wäckerle, J., & Soroka, S. (2019). Multilingual sentiment analysis: A new approach to measuring conflict in legislative speeches. Legislative Studies Quarterly, 44(1), 97–131. https://doi.org/10.1111/lsq.12218.

57. Proksch, S. O., & Slapin, J. B. (2010). Position taking in European Parliament speeches. British Journal of Political Science, 40(3), 587–611.

58. Proksch, S. O., & Slapin, J. B. (2015). The politics of parliamentary debate. Cambridge: Cambridge University Press.

59. Quirk, R., Greenbaum, S., Leech, G., & Svartvik, J. (1985). A comprehensive grammar of the English language. London: Longman.

60. Rauh, C. (2018). Validating a sentiment dictionary for German political language—a workbench note. Journal of Information Technology and Politics, 15(4), 319–343. https://doi.org/10.1080/19331681.2018.1485608.

61. Rheault, L. (2016). Expressions of anxiety in political texts. In: Proceedings of the first workshop on NLP and computational social science (pp. 92–101). Association for Computational Linguistics, Austin, Texas. https://doi.org/10.18653/v1/W16-5612. https://www.aclweb.org/anthology/W16-5612.

62. Rheault, L., Beelen, K., Cochrane, C., & Hirst, G. (2016). Measuring emotion in parliamentary debates with automated textual analysis. PLoS One, 11(12), 1–18. https://doi.org/10.1371/journal.pone.0168843.

63. Richards, L. (2005). Handling qualitative data: A practical guide. London: Sage Publications.

64. Rudkowsky, E., Haselmayer, M., Wastian, M., Jenny, M., Emrich, Š., & Sedlmair, M. (2018). More than bags of words: Sentiment analysis with word embeddings. Communication Methods and Measures, 12(2–3), 140–157. https://doi.org/10.1080/19312458.2018.1455817.

65. Sakamoto, T., & Takikawa, H. (2017). Cross-national measurement of polarization in political discourse: Analyzing floor debate in the US and the Japanese legislatures. In: 2017 IEEE international conference on big data (Big Data) (pp. 3104–3110). https://doi.org/10.1109/BigData.2017.8258285.

66. Salah, Z. (2014). Machine learning and sentiment analysis approaches for the analysis of parliamentary debates. Ph.D. thesis, University of Liverpool.

67. Salah, Z., Coenen, F., & Grossi, D. (2013). Extracting debate graphs from parliamentary transcripts: A study directed at UK House of Commons debates. In: Proceedings of the fourteenth international conference on artificial intelligence and law, ICAIL ’13 (pp. 121–130). ACM, New York, NY, USA. https://doi.org/10.1145/2514601.2514615.


68. Salah, Z., Coenen, F., & Grossi, D. (2013). Generating domain-specific sentiment lexicons for opinion mining. In H. Motoda, Z. Wu, L. Cao, O. Zaiane, M. Yao, & W. Wang (Eds.), Advanced data mining and applications (pp. 13–24). Berlin: Springer.

69. Schwarz, D., Traber, D., & Benoit, K. (2017). Estimating intra-party preferences: Comparing speeches to votes. Political Science Research and Methods, 5(2), 379–396.

70. Seligman, M. E. P. (2012). Flourish: A visionary new understanding of happiness and well-being. New York: Simon and Schuster.

71. Sim, Y., Acree, B. D. L., Gross, J. H., & Smith, N. A. (2013). Measuring ideological proportions in political speeches. In: Proceedings of the 2013 conference on empirical methods in natural language processing (pp. 91–101). Association for Computational Linguistics, Seattle, Washington, USA. https://www.aclweb.org/anthology/D13-1010.

72. Sokolova, M., & Lapalme, G. (2008). Verbs speak loud: Verb categories in learning polarity and strength of opinions. In S. Bergler (Ed.), Advances in artificial intelligence (pp. 320–331). Berlin: Springer.

73. Taddy, M. (2013). Multinomial inverse regression for text analysis. Journal of the American Statistical Association, 108(503), 755–770.

74. Thomas, M., Pang, B., & Lee, L. (2006). Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In: Proceedings of the 2006 conference on empirical methods in natural language processing (pp. 327–335). Association for Computational Linguistics, Sydney, Australia. https://www.aclweb.org/anthology/W06-1639.

75. van der Zwaan, J. M., Marx, M., & Kamps, J. (2016). Validating cross-perspective topic modeling for extracting political parties’ positions from parliamentary proceedings. In: Proceedings of the twenty-second European conference on artificial intelligence, ECAI ’16 (pp. 28–36). IOS Press, Amsterdam, The Netherlands. https://doi.org/10.3233/978-1-61499-672-9-28.

76. Vilares, D., & He, Y. (2017). Detecting perspectives in political debates. In: Proceedings of the 2017 conference on empirical methods in natural language processing (pp. 1573–1582). Association for Computational Linguistics, Copenhagen, Denmark. https://doi.org/10.18653/v1/D17-1165. https://www.aclweb.org/anthology/D17-1165.

77. Yadollahi, A., Shahraki, A. G., & Zaiane, O. R. (2017). Current state of text sentiment analysis from opinion to emotion mining. ACM Computing Surveys, 50(2), 25:1–25:33. https://doi.org/10.1145/3057270.

78. Yessenalina, A., Yue, Y., & Cardie, C. (2010). Multi-level structured models for document-level sentiment classification. In: Proceedings of the 2010 conference on empirical methods in natural language processing (pp. 1046–1056). Association for Computational Linguistics, Cambridge, MA. https://www.aclweb.org/anthology/D10-1102.

79. Yogatama, D., Kong, L., & Smith, N. A. (2015). Bayesian optimization of text representations. In: Proceedings of the 2015 conference on empirical methods in natural language processing (pp. 2100–2105). Association for Computational Linguistics, Lisbon, Portugal. https://doi.org/10.18653/v1/D15-1251. https://www.aclweb.org/anthology/D15-1251.

80. Yogatama, D., & Smith, N. (2014). Making the most of bag of words: Sentence regularization with alternating direction method of multipliers. In: International conference on machine learning (pp. 656–664).

81. Yogatama, D., & Smith, N. A. (2014). Linguistic structured sparsity in text categorization. In: Proceedings of the 52nd annual meeting of the association for computational linguistics (Volume 1: Long papers) (pp. 786–796). Association for Computational Linguistics, Baltimore, Maryland. https://doi.org/10.3115/v1/P14-1074. https://www.aclweb.org/anthology/P14-1074.



https://link.springer.com/article/10.1007%2Fs42001-019-00060-w