Humlab Talk: How serious and silly comments interactively construct nationalism on Reddit

Thursday 11th of May, 15:00-17:00, at Humlab/Zoom

On the 11th of May, Humlab and the Department of Media and Communication Studies will host the talk "How serious and silly comments interactively construct nationalism on Reddit" by Tommy Bruhn (University of Copenhagen) and Joanna Doona (Lund University).

Abstract

The internet forum and social news site Reddit houses a substantial number of topics, interests, and forms. Humour is prevalent on the platform and used in all manner of situations. However, despite humour's ubiquity, research still tends to study it in isolation, artificially separating it from other aspects of communication and discourse and cementing it as something potentially problematic in civic discourse – as obscuring or blocking seriousness. In our presentation, we will argue that seriousness and silliness are much more integrated than this, and dependent upon each other, as well as contextually bound by social, material, and cultural factors, including platform culture and logics. Through a qualitative contextualizing case study that combines concepts and methods from humour studies, media studies, and rhetoric, we study civic discourse about Sweden's potential NATO membership in the subreddit "r/Sweden" – including text, memes, and other forms of joking. Our interest is two-fold: we study 1) how Swedish NATO membership is debated through both humorous and serious modes, focusing on how these modes interact, in order to 2) understand how such interaction constructs nationalism through different forms. Through the identification of themes relating to a) Sweden as a small country in a bigger world, b) the use of fantasies, folly, and abstraction, and c) joking and ironic constructions of Swedishness, we begin to sketch out the qualities of this joking and ironic form of nationalist discourse and what distinguishes it from other forms of nationalism.

For more information and registration, click [here].

Workshop on AI and human autonomy

Olle Häggström: Large language models, AI risk and AI alignment