TY - JOUR
A1 - Grimm, Volker
A1 - Berger, Uta
T1 - Robustness analysis: Deconstructing computational models for ecological theory and applications
JF - Ecological modelling : international journal on ecological modelling and engineering and systems ecology
N2 - The design of computational models is path-dependent: the choices made at each step of model development constrain the choices available in subsequent steps. The actual paths of model development can differ greatly, even for the same system, because each path depends on the question addressed, the availability of data, and the consideration of specific expert knowledge, in addition to the experience, background, and modelling preferences of the modellers. Thus, insights from different models are practically impossible to integrate, which hinders the development of general theory. We therefore suggest augmenting the current culture of communicating models as working just fine with a culture of presenting analyses in which we try to break models, i.e., to determine when the model mechanisms that explain certain observations break down. We refer to such systematic attempts to break a model as “robustness analysis” (RA). RA is the systematic deconstruction of a model by forcefully changing the model's parameters, structure, and representation of processes. We discuss the nature and elements of RA and provide brief examples. RA cannot be completely formalized into specific techniques; rather, it corresponds to detective work that is driven by general questions and specific hypotheses, with strong attention focused on unusual behaviours. Both individual modellers and ecological modelling in general will benefit from RA because it helps with understanding models and identifying “robust theories”, i.e., general principles that are independent of the idiosyncrasies of specific models.
Integrating the results of RAs of different models addressing certain systems or questions will then provide a comprehensive overview of when certain mechanisms control system behaviour and when and why this control ceases. This approach can provide insights into the mechanisms that lead to regime shifts in actual ecological systems.
KW - Sensitivity analysis
KW - Ecological theory
KW - Computational modelling
KW - Robustness
KW - Model analysis
KW - Understanding
Y1 - 2016
U6 - https://doi.org/10.1016/j.ecolmodel.2015.07.018
SN - 0304-3800
SN - 1872-7026
VL - 326
SP - 162
EP - 167
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Logacev, Pavel
A1 - Vasishth, Shravan
T1 - Understanding underspecification: A comparison of two computational implementations
JF - The quarterly journal of experimental psychology
N2 - Swets et al. (2008. Underspecification of syntactic ambiguities: Evidence from self-paced reading. Memory and Cognition, 36(1), 201–216) presented evidence that the so-called ambiguity advantage [Traxler et al. (1998). Adjunct attachment is not a form of lexical ambiguity resolution. Journal of Memory and Language, 39(4), 558–592], which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment; when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy.
It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behaviour may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.'s data; as a baseline, we also fitted three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM provides a better fit than the PSM. We interpret the results as a lack of unambiguous evidence in favour of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for the NSM over the PSM. More generally, our work provides a method for choosing between models of real-time processes in sentence comprehension that make qualitative predictions about the relationship between several dependent variables. We believe that sentence processing research will greatly benefit from a wider use of such methods.
KW - Computational modelling
KW - Underspecification
KW - Shallow processing
Y1 - 2016
U6 - https://doi.org/10.1080/17470218.2015.1134602
SN - 1747-0218
SN - 1747-0226
VL - 69
SP - 996
EP - 1012
PB - Taylor & Francis
CY - Abingdon
ER -
TY - GEN
A1 - Logačev, Pavel
A1 - Vasishth, Shravan
T1 - Understanding underspecification
BT - A comparison of two computational implementations
N2 - Swets et al. (2008. Underspecification of syntactic ambiguities: Evidence from self-paced reading.
Memory and Cognition, 36(1), 201–216) presented evidence that the so-called ambiguity advantage [Traxler et al. (1998). Adjunct attachment is not a form of lexical ambiguity resolution. Journal of Memory and Language, 39(4), 558–592], which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment; when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy. It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behaviour may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.'s data; as a baseline, we also fitted three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM provides a better fit than the PSM. We interpret the results as a lack of unambiguous evidence in favour of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for the NSM over the PSM.
More generally, our work provides a method for choosing between models of real-time processes in sentence comprehension that make qualitative predictions about the relationship between several dependent variables. We believe that sentence processing research will greatly benefit from a wider use of such methods.
T3 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe - 295
KW - Computational modelling
KW - Underspecification
KW - Shallow processing
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-93441
SP - 996
EP - 1012
ER -