Argument mining on Twitter
(2021)
In the last decade, the field of argument mining has grown notably. However, only relatively few studies have investigated argumentation in social media, and specifically on Twitter. Here, we provide what is, to our knowledge, the first critical in-depth survey of the state of the art in tweet-based argument mining. We discuss approaches to modelling the structure of arguments in the context of tweet corpus annotation, and we review current progress in the task of detecting argument components and their relations in tweets. We also survey the intersection of argument mining and stance detection, before we conclude with an outlook.
Argumentation mining is a subfield of Computational Linguistics that aims (primarily) at automatically finding arguments and their structural components in natural language text. We provide a short introduction to this field, intended for an audience with a limited computational background. After explaining the subtasks involved in this problem of deriving the structure of arguments, we describe two other applications that are popular in computational linguistics: sentiment analysis and stance detection. From the linguistic viewpoint, they concern the semantics of evaluation in language. In the final part of the paper, we briefly examine the roles that these two tasks play in argumentation mining, both in current practice, and in possible future systems.
Coherence relations are typically taken to link two clauses or larger units and to be signaled at the text surface by conjunctions and certain adverbials. Relations, however, can also hold within clauses, indicated by prepositions like despite, due to, or in case of, when these have an internal argument denoting an eventuality. While some prepositions act as reliable cues to a specific relation, others are lexically more neutral. We investigated this situation for the German preposition bei, which turns out to be highly ambiguous. We demonstrate the range of readings in a corpus study, proposing six more specific prepositions as a comprehensive substitution set. All these uses of bei share a common kernel meaning, which is missed by the standard accounts that assume lexical polysemy. We examine the range of coherence relations that can be signaled by bei and identify factors that support the disambiguation task in a framework of discourse interpretation.
Parsing of argumentative structures has become a very active line of research in recent years. Like discourse parsing or any other natural language task that requires prediction of linguistic structures, most approaches choose to learn a local model and then perform global decoding over the local probability distributions, often imposing constraints that are specific to the task at hand. Specifically for argumentation parsing, two decoding approaches have been recently proposed: Minimum Spanning Trees (MST) and Integer Linear Programming (ILP), following similar trends in discourse parsing. In contrast to discourse parsing though, where trees are not always used as underlying annotation schemes, argumentation structures so far have always been represented with trees. Using the ‘argumentative microtext corpus’ [in: Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, Lisbon 2015 / Vol. 2, College Publications, London, 2016, pp. 801–815] as underlying data and replicating three different decoding mechanisms, in this paper we propose a novel ILP decoder and an extension to our earlier MST work, and then thoroughly compare the approaches. The result is that our new decoder outperforms related work in important respects, and that in general, ILP and MST yield very similar performance.
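The "local model plus global decoding" scheme described above can be illustrated with a minimal sketch: given local attachment scores between argument components (the scores and the four-component example below are invented for illustration, not taken from the paper), find the highest-scoring tree rooted at the central claim. Exhaustive search is used here instead of a proper MST or ILP decoder; this is only feasible for the very short texts typical of the microtext corpus.

```python
# Sketch of global tree decoding over local scores, via brute force.
# Assumption: scores[h][d] is a local model's score for attaching
# component d to head h (toy numbers, not from the paper).
from itertools import product

def best_tree(scores, root=0):
    """Return the highest-scoring parent map forming a tree rooted at `root`."""
    n = len(scores)
    nodes = [i for i in range(n) if i != root]
    best, best_parents = float("-inf"), None
    # Enumerate every possible head assignment for the non-root nodes.
    for heads in product(range(n), repeat=len(nodes)):
        parents = dict(zip(nodes, heads))
        if any(parents[d] == d for d in nodes):
            continue  # no self-attachment
        # Keep only assignments where every node reaches the root (no cycles).
        ok = True
        for d in nodes:
            seen, cur = set(), d
            while cur != root:
                if cur in seen:
                    ok = False
                    break
                seen.add(cur)
                cur = parents[cur]
            if not ok:
                break
        if not ok:
            continue
        total = sum(scores[parents[d]][d] for d in nodes)
        if total > best:
            best, best_parents = total, parents
    return best_parents, best

# Toy local scores for 4 components; component 0 is the central claim.
scores = [
    [0.0, 0.9, 0.2, 0.1],
    [0.0, 0.0, 0.7, 0.3],
    [0.0, 0.1, 0.0, 0.8],
    [0.0, 0.2, 0.3, 0.0],
]
parents, total = best_tree(scores)
```

An MST decoder (Chu-Liu/Edmonds) or an ILP with tree constraints would find the same optimum in polynomial time or via a solver, respectively; the brute-force version only serves to make the decoding objective concrete.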
Newspaper text can be broadly divided into the classes 'opinion' (editorials, commentary, letters to the editor) and 'neutral' (reports). We describe a classification system for performing this separation, which uses a set of linguistically motivated features. Working with various English newspaper corpora, we demonstrate that it significantly outperforms bag-of-lemma and PoS-tag models. We conclude that the linguistic features constitute the best method for achieving robustness against change of newspaper or domain.
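The contrast between linguistically motivated features and bag-of-lemma models can be sketched as follows. This is not the paper's system; it is a toy scorer, under the assumption that cues such as first-person pronouns, deontic modals, and exclamation or question marks are typical of commentary, while reports use them rarely.

```python
# Hedged illustration: a few hand-picked linguistic cues for the
# opinion-vs-neutral distinction (cue sets are assumptions for this sketch).
import re

OPINION_CUES = {
    "pronouns": {"i", "we", "you", "my", "our"},
    "modals": {"should", "must", "ought"},
}

def opinion_score(text):
    """Count opinion cues per token; higher values suggest commentary."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sum(t in OPINION_CUES["pronouns"] or t in OPINION_CUES["modals"]
               for t in tokens)
    hits += text.count("!") + text.count("?")  # emphatic punctuation
    return hits / max(len(tokens), 1)

editorial = "We must act now! Should our leaders ignore this, you and I will pay."
report = "The council approved the budget on Tuesday after a two-hour session."
```

A real classifier would feed such feature counts into a learned model; the point of feature-based approaches, as the abstract notes, is that cues like these transfer across newspapers and domains better than raw lemma counts do.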