Let’s talk about CS!
(2015)
Communicating about a science is a key competence in education for any science. Without communication we cannot teach, so teachers should reflect carefully on the language they use in class. Yet the language students and teachers use to communicate about their CS courses is heterogeneous, inconsistent, and deeply influenced by tool names. There is a substantial lack of research and discussion in CS education regarding terminology and the role of concepts and tools in our science: we have not yet agreed on a consistent set of terms that supports learning it. This makes it nearly impossible to do research on CS competencies as long as we have not agreed on the names we use to describe them. This workshop provides room for discussion and first ideas for future research in this field.
Confusion about model validation is one of the main challenges in using ecological models for decision support, such as the regulation of pesticides. Decision makers need to know whether a model is a sufficiently good representation of its real counterpart and what criteria can be used to answer this question. Unclear terminology is one of the main obstacles to a good understanding of what model validation is, how it works, and what it can deliver. Therefore, we performed a literature review and derived a standard set of terms. 'Validation' was identified as a catch-all term, which is thus useless for any practical purpose. We introduce the term 'evaludation', a fusion of 'evaluation' and 'validation', to describe the entire process of assessing a model's quality and reliability.

Considering the iterative nature of model development, the modelling cycle, we identified six essential elements of evaludation: (i) 'data evaluation' for scrutinising the quality of numerical and qualitative data used for model development and testing; (ii) 'conceptual model evaluation' for examining the simplifying assumptions underlying a model's design; (iii) 'implementation verification' for testing the model's implementation in equations and as a computer programme; (iv) 'model output verification' for comparing model output to data and patterns that guided model design and were possibly used for calibration; (v) 'model analysis' for exploring the model's sensitivity to changes in parameters and process formulations, to make sure that the mechanistic basis of the model's main behaviours has been well understood; and (vi) 'model output corroboration' for comparing model output to new data and patterns that were not used for model development and parameterisation.

Currently, most decision makers require 'validating' a model by testing its predictions with new experiments or data.
Although desirable, this is neither sufficient nor necessary for a model to be useful for decision support. We believe that the proposed set of terms and its relation to the modelling cycle can help make quality assessments and reality checks of ecological models more comprehensive and transparent. © 2013 Elsevier B.V. All rights reserved.