Associations between lyric and musical depth in Chinese songs: Evidence from computational modeling

Liang Xu , Bingfei Xu , Zaoyi Sun , Hongting Li

Psych Journal ›› 2024, Vol. 13 ›› Issue (6): 915–926. DOI: 10.1002/pchj.785
ORIGINAL ARTICLE


Abstract

Musical depth, which encompasses the intellectual and emotional complexity of music, is a robust dimension influencing music preference. However, little research has explored the relationship between lyrics and musical depth. This study addressed that gap by analyzing Linguistic Inquiry and Word Count (LIWC)-based lyric features extracted from a comprehensive dataset of 2372 Chinese songs. Correlation analysis and machine learning techniques revealed compelling connections between musical depth and various lyric features, such as the usage frequency of emotion words, time words, and insight words. To further investigate these relationships, prediction models for musical depth were constructed using combinations of audio and lyric features as inputs. Random forest regression (RFR) models that integrated both audio and lyric features yielded better prediction performance than models relying on lyric inputs alone. Notably, feature-importance analysis of the RFR models showed that audio features played the decisive role in predicting musical depth. This finding highlights the greater weight of melody, relative to lyrics, in conveying musical depth.
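The modeling approach summarized above (random forest regression on combined audio and lyric features, interpreted via feature importance) can be sketched with scikit-learn on synthetic data. The feature counts, names, and coefficients below are illustrative assumptions for demonstration only, not the paper's actual feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_songs = 200

# Hypothetical features: 5 audio descriptors (e.g., tempo, spectral measures)
# and 3 lyric descriptors (e.g., LIWC word-category frequencies).
audio = rng.normal(size=(n_songs, 5))
lyrics = rng.normal(size=(n_songs, 3))

# Synthetic "depth" ratings driven mainly by audio, echoing the paper's finding.
depth = (audio @ np.array([0.6, 0.5, 0.4, 0.3, 0.2])
         + lyrics @ np.array([0.1, 0.1, 0.05])
         + rng.normal(scale=0.1, size=n_songs))

# Fit an RFR on the combined feature matrix, then inspect importances.
X = np.hstack([audio, lyrics])
rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, depth)
importances = rfr.feature_importances_

audio_share = importances[:5].sum()   # share attributed to audio features
lyric_share = importances[5:].sum()   # share attributed to lyric features
print(f"audio: {audio_share:.2f}, lyrics: {lyric_share:.2f}")
```

With this construction the audio columns dominate the importance scores, mirroring the qualitative pattern the abstract reports; the actual study's feature extraction and evaluation were of course more elaborate.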

Keywords

audio / lyric / machine learning / musical depth

Cite this article

Liang Xu, Bingfei Xu, Zaoyi Sun, Hongting Li. Associations between lyric and musical depth in Chinese songs: Evidence from computational modeling. Psych Journal, 2024, 13(6): 915–926. DOI: 10.1002/pchj.785



RIGHTS & PERMISSIONS

© 2024 The Author(s). PsyCh Journal published by Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
