Purpose: The existing body of work on discourse coherence in aphasia has yielded mixed results, leaving unanswered the question of whether coherence is impaired or remains intact after brain injury. In this study, discourse coherence in non-brain-damaged (NBD) speakers and speakers with anomic aphasia was investigated quantitatively and qualitatively. Method: Fifteen native speakers of Cantonese with anomic aphasia and 15 NBD participants produced 60 language samples. Elicitation tasks included story-telling induced by a picture series and a procedural description. The samples were annotated for discourse structure within the framework of Rhetorical Structure Theory (RST) in order to analyse a number of structural parameters. Twenty naive listeners then rated the coherence of each sample. Results: Disordered discourse was rated as significantly less coherent. The NBD group demonstrated higher production fluency than the participants with aphasia and used a richer set of semantic relations to create discourse, particularly in the description of settings, the expression of causality, and the extent of elaboration. People with aphasia also tended to omit essential information content. Conclusion: Reduced essential information content, a lower degree of elaboration, and a larger number of structural disruptions may have contributed to the reduced overall discourse coherence in speakers with anomic aphasia.
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms—users’ conversations on Twitter. In particular, it explores the ways in which people express their opinions on this service, examines current approaches to automatic mining of these opinions, and proposes novel methods that outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding–based ones, finding that dictionary-based systems generally yield better polarity lists than the last two groups. Apart from this, I propose a linear projection algorithm whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning–, and deep-learning–based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention.
Finally, in order to make the new classifier aware of microblogs' discourse structure, I let it separately analyze the elementary discourse units of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and Recursive Dirichlet Process.
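The discourse-aware setup described above—scoring each elementary discourse unit (EDU) of a tweet and then inferring the overall message polarity—can be illustrated with a minimal sketch. This is not the thesis's actual models: the toy lexicon, the tokenization, and the plain averaging rule are hypothetical stand-ins for the latent-marginalized CRF and Recursive Dirichlet Process aggregation the abstract names.

```python
# Illustrative sketch only: score each EDU with a tiny polarity lexicon,
# then aggregate EDU scores into a message-level label. The lexicon entries
# and the averaging rule are invented simplifications, not the thesis's method.
POLARITY_LEXICON = {"love": 0.8, "great": 1.0, "bad": -0.8, "awful": -1.0}

def score_edu(edu: str) -> float:
    """Sum the lexicon polarities of the tokens in one discourse unit."""
    return sum(POLARITY_LEXICON.get(tok.lower(), 0.0) for tok in edu.split())

def message_polarity(edus: list[str]) -> str:
    """Average the per-EDU scores and map the result to a discrete label.
    A plain mean stands in for the CRF / Dirichlet-process aggregation."""
    scores = [score_edu(e) for e in edus]
    mean = sum(scores) / len(scores)
    return "positive" if mean > 0 else "negative" if mean < 0 else "neutral"

print(message_polarity(["I love this phone", "the screen is great"]))  # positive
```

The point of the sketch is the two-stage structure: polarity is first decided locally per EDU, and only then composed into a message-level decision, which is what makes the classifier sensitive to a tweet's discourse segmentation.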
In dealing with surveillance, scholars have widely agreed to reject privacy as an analytical concept and defining theme. Nonetheless, in public debates, surveillance technologies are still confronted with issues of privacy, and privacy therefore endures as an empirical subject of research on surveillance. Drawing on our analysis of the public discourse on so-called smart closed-circuit television (CCTV) in Germany, we propose a sociology of knowledge perspective for analyzing privacy in order to understand how it is socially constructed and negotiated. Our data comprise 117 documents, covering all publicly available documents between 2006 and 2010 that we were able to obtain. We found privacy to be the only form of critique in the struggle over the legitimate definition of smart CCTV. In this paper, we discuss the implications our preliminary findings have for the relationship between privacy issues and surveillance technology and conclude with suggestions as to how this relationship might be further investigated as paradoxical, yet constitutive.