Biodiversity loss results from interacting ecological and economic factors, so addressing it requires analysing biodiversity conservation policies. Ecological-economic modelling is a helpful approach to this analysis, but it is also challenging: modellers often have a specific disciplinary background and tend to misrepresent either the ecological or the economic aspects. Here we introduce some of the most important concepts from both disciplines and, because the modelling cultures of the two disciplines also differ, present an integrated, consistent guide through all the steps of generic ecological-economic modelling: formulating the research question, developing the conceptual model, parametrising and analysing the model, and interpreting the results. Although we focus on generic models aimed at a general understanding of the causes of and remedies for biodiversity loss, the concepts and guidance provided here may also help in modelling more specific conservation problems. The guide is aimed at the intersection of three disciplines: ecology, economics, and mathematical modelling. It addresses readers who have some knowledge of at least one of these disciplines and want to learn about the others in order to build and analyse generic ecological-economic models. Compared with textbooks, the guide focuses on the practice of modelling rather than on lengthy explanations of theoretical concepts. We attempt to demonstrate that generic ecological-economic modelling does not require magical powers and is instead a manageable exercise.
Anthropogenic pressures increasingly alter natural systems. Understanding the resilience of agent-based complex systems such as ecosystems, i.e. their ability to absorb these pressures and sustain their functioning and services, is therefore a major challenge. However, the mechanisms underlying resilience are still poorly understood. A main reason for this is the multidimensionality both of resilience, which embraces the three fundamental stability properties of recovery, resistance, and persistence, and of the specific situations for which stability properties can be assessed. Agent-based models (ABMs) complement empirical research, which is, for logistical reasons, limited in coping with these multiple dimensions. Besides their ability to integrate multidimensionality through extensive manipulation in a fully controlled system, ABMs can capture the emergence of system resilience from individual interactions and feedbacks across different levels of organization. To assess the extent to which this potential of ABMs has already been exploited, we reviewed the state of the art in exploring resilience and its multidimensionality in ecological and socio-ecological systems with ABMs. We found that most models do not utilize this potential: they typically focus on a single dimension of resilience by using variability as a proxy for persistence, and are limited to one reference state, disturbance type, and scale. Moreover, only a few studies explicitly test the ability of different mechanisms to support resilience. To overcome these limitations, we recommend simultaneously assessing multiple stability properties for different situations while considering the mechanisms that are hypothesised to render a system resilient. This will help us to better exploit the potential of ABMs to understand and quantify resilience mechanisms, and hence support solving real-world problems related to the resilience of agent-based complex systems.
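The three stability properties named above can each be quantified from a simulated time series. The following is a minimal sketch of one possible operationalisation; the function name, tolerance, and extinction threshold are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: quantifying recovery, resistance, and persistence
# from a population time series after a pulse disturbance at t_dist.
# Thresholds and definitions are assumptions for this example only.

def stability_properties(series, baseline, t_dist, tol=0.05, extinct=1.0):
    """Return resistance, recovery time, and persistence for one model run."""
    post = series[t_dist:]
    # Resistance: 1 minus the maximum relative deviation from baseline
    max_dev = max(abs(x - baseline) / baseline for x in post)
    resistance = max(0.0, 1.0 - max_dev)
    # Recovery time: first post-disturbance step back within tolerance
    recovery = next((i for i, x in enumerate(post)
                     if abs(x - baseline) / baseline <= tol), None)
    # Persistence: fraction of post-disturbance steps above the extinction threshold
    persistence = sum(x > extinct for x in post) / len(post)
    return resistance, recovery, persistence

series = [100, 100, 100, 60, 70, 85, 96, 99, 100, 100]
res, rec, per = stability_properties(series, baseline=100, t_dist=3)
```

Assessing all three properties per run, rather than variability alone, is the kind of multidimensional analysis the review calls for.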
When data are limited, it is difficult for conservation managers to assess alternative management scenarios and make decisions. The natterjack toad (Bufo calamita) is declining at the edges of its distribution range in Europe, and little is known about its current distribution and abundance in Poland. Although different landscape management plans for central Poland exist, it is unclear to what extent they affect this species. Based on these plans, we investigated how four alternative landscape development scenarios would affect the total carrying capacity and population dynamics of the natterjack toad. To facilitate decision-making, we first ranked the scenarios according to their total carrying capacity. We used the software RAMAS GIS to determine the size and location of habitat patches in the landscape. The estimated carrying capacities were very similar across scenarios, and a clear ranking was not possible; only the reforestation scenario showed a marked loss in carrying capacity. We therefore simulated metapopulation dynamics with RAMAS, taking into account dynamic processes such as reproduction and dispersal, and ranked the scenarios according to the resulting species abundance. In this case, we could clearly rank the development scenarios. We identified road mortality of adults as a key process governing the dynamics and separating the different scenarios. The renaturalisation scenario clearly ranked highest due to its decreased road mortality. Taken together, our results suggest that road infrastructure development might be much more important for natterjack toad conservation than changes in the amount of habitat in the semi-natural river valley. We gained these insights by considering both the resulting metapopulation structure and its dynamics in the form of a PVA. We conclude that the consideration of dynamic processes in amphibian conservation management may be indispensable for ranking management scenarios.
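The core point, that ranking scenarios by dynamics can differ from ranking them by carrying capacity, can be illustrated with a toy model (this is not RAMAS GIS; all growth rates, capacities, and mortality values below are invented for illustration):

```python
# Toy illustration: two scenarios share the same carrying capacity but
# differ in road mortality, so only the dynamic simulation separates them.
# All parameter values are hypothetical.

def simulate(K, growth, road_mortality, steps=50):
    """Logistic growth followed by an adult road-mortality loss each year."""
    n = K / 2
    for _ in range(steps):
        n += growth * n * (1 - n / K)   # density-dependent reproduction
        n *= (1 - road_mortality)       # adults lost on roads
    return n

scenarios = {
    "status quo":       dict(K=1000, growth=0.8, road_mortality=0.20),
    "renaturalisation": dict(K=1000, growth=0.8, road_mortality=0.05),
    "reforestation":    dict(K=700,  growth=0.8, road_mortality=0.20),
}
ranking = sorted(scenarios, key=lambda s: simulate(**scenarios[s]), reverse=True)
```

By carrying capacity alone, "status quo" and "renaturalisation" are tied at K = 1000; the simulated abundance ranks renaturalisation first because of its lower road mortality, mirroring the paper's finding.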
Current environmental risk assessment (ERA) of chemicals for aquatic invertebrates relies on standardized laboratory tests in which toxicity effects on individual survival, growth, and reproduction are measured. Such tests determine the threshold concentration of a chemical below which no population-level effects are expected. How well this procedure captures effects on individuals and populations, however, remains an open question. Here we used mechanistic effect models, combining individual-level reproduction and survival models with an individual-based population model (IBM), to understand individuals' responses and extrapolate them to the population level. We used a toxicant (Dispersogen A) for which adverse effects on laboratory populations were detected at the determined threshold concentration, thus challenging the conservatism of the current risk assessment method. Multiple toxicity effects on reproduction and survival were reported, in addition to effects on the F1 generation. We extrapolated commonly tested individual toxicity endpoints, reproduction and survival, to the population level using the IBM. Effects on reproduction were described via regression models. To select the most appropriate survival model, the IBM was run assuming either stochastic death (SD) or individual tolerance (IT). Simulations were run for different scenarios regarding the toxicant's effects: survival toxicity, reproductive toxicity, or both. As population-level endpoints, we used population size, population structure, and extinction risk. We found that survival represented as SD explained population dynamics better than IT. Integrating toxicity effects on both reproduction and survival yielded more accurate predictions of population effects than considering isolated effects. To fully capture the population effects observed at high toxicant concentrations, toxicity effects transmitted to the F1 generation had to be integrated. Predicted extinction risk was highly sensitive to the assumptions about individual-level effects. Our results demonstrate that the endpoints used in current standard tests may not be sufficient for assessing the risk of adverse effects on populations. A combination of laboratory population experiments with mechanistic effect models is a powerful tool to better understand and predict effects on both individuals and populations. Mechanistic effect modelling thus holds great potential to improve the accuracy of ERA of chemicals in the future. (C) 2013 The Authors. Published by Elsevier B.V. All rights reserved.
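The SD/IT distinction tested in the abstract can be sketched in a few lines. Under SD, every individual shares a concentration-dependent death probability; under IT, each individual has a fixed tolerance and dies deterministically once exposure exceeds it. The hazard slope, tolerance distribution, and population size below are invented for illustration:

```python
# Sketch of the two survival assumptions: stochastic death (SD) vs
# individual tolerance (IT). Parameter values are hypothetical.
import math
import random

def survives_sd(conc, hazard_slope=0.1, dt=1.0, rng=random):
    """SD: death is a chance event; hazard scales with concentration."""
    p_death = 1.0 - math.exp(-hazard_slope * conc * dt)
    return rng.random() >= p_death

def survives_it(conc, threshold):
    """IT: deterministic death once exposure exceeds the individual's tolerance."""
    return conc <= threshold

rng = random.Random(42)
conc = 5.0
pop_sd = sum(survives_sd(conc, rng=rng) for _ in range(1000))
# IT: each individual draws one tolerance from a lognormal distribution
thresholds = [rng.lognormvariate(math.log(5.0), 0.5) for _ in range(1000)]
pop_it = sum(survives_it(conc, t) for t in thresholds)
```

In a full IBM, repeated exposure separates the two assumptions sharply: under SD, survivors keep dying at each step, whereas under IT, the survivors of a given concentration are permanently tolerant to it.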
In addition to natural stressors, populations are increasingly exposed to chemical pollutants released into the environment. We experimentally demonstrate the loss of resilience of Daphnia magna populations exposed to a combination of natural and chemical stressors, even though the effects of a single stressor on population size were cryptic, i.e. hard to detect statistically. Data on Daphnia population demography, along with model-based exploration of our predator-prey system, revealed that direct trophic interactions changed the population size-structure and thereby increased population vulnerability to the toxicant, which acts in a size-selective manner. Moreover, population vulnerability to the toxicant increases with predator size and predation intensity, whereas indirect trait-mediated interactions via predator kairomones may buffer chemical effects to a certain extent. Our study demonstrates that population size can be a poor endpoint for risk assessments of chemicals and that ignoring disturbance interactions can lead to severe underestimation of extinction risk.
The potential of ecological models for supporting environmental decision making is increasingly acknowledged. However, it often remains unclear whether a model is realistic and reliable enough. Good practice for developing and testing ecological models has not yet been established. Therefore, TRACE, a general framework for documenting a model's rationale, design, and testing, was recently suggested. Originally, TRACE was aimed at documenting good modelling practice. However, the word 'documentation' does not convey TRACE's urgency. We therefore re-define TRACE as a tool for planning, performing, and documenting good modelling practice. TRACE documents should provide convincing evidence that a model was thoughtfully designed, correctly implemented, thoroughly tested, well understood, and appropriately used for its intended purpose. TRACE documents link the science underlying a model to its application, thereby also linking modellers and model users, for example stakeholders, decision makers, and developers of policies. We report on first experiences in producing TRACE documents. We found that the original idea underlying TRACE was valid, but that an update of its structure and more specific guidance for its use were needed to make its use more coherent and efficient. The updated TRACE format follows the recently developed framework of model 'evaludation': the entire process of establishing model quality and credibility throughout all stages of model development, analysis, and application. TRACE thus becomes a tool for planning, documenting, and assessing model evaludation, which includes understanding the rationale behind a model and its envisaged use. We introduce the new structure and revised terminology of TRACE and provide examples.
Robustness analysis: Deconstructing computational models for ecological theory and applications (2016)
The design of computational models is path-dependent: the choices made at each step of model development constrain the choices available in subsequent steps. The actual path of model development can differ greatly, even for the same system, because it depends on the question addressed, the availability of data, and the specific expert knowledge considered, in addition to the experience, background, and modelling preferences of the modellers. Insights from different models are thus practically impossible to integrate, which hinders the development of general theory. We therefore suggest augmenting the current culture of communicating models as working just fine with a culture of presenting analyses in which we try to break models, i.e., to identify the conditions under which the model mechanisms explaining certain observations break down. We refer to such systematic attempts to break a model as “robustness analysis” (RA): the systematic deconstruction of a model by forcefully changing the model's parameters, structure, and representation of processes. We discuss the nature and elements of RA and provide brief examples. RA cannot be completely formalized into specific techniques; instead, it corresponds to detective work driven by general questions and specific hypotheses, with strong attention focused on unusual behaviours. Both individual modellers and ecological modelling in general will benefit from RA, because RA helps with understanding models and identifying “robust theories”: general principles that are independent of the idiosyncrasies of specific models. Integrating the results of RAs from different models addressing certain systems or questions will then provide a comprehensive overview of when certain mechanisms control system behaviour and when and why this control ceases. This approach can provide insights into the mechanisms that lead to regime shifts in actual ecological systems.
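The simplest form of such deliberate model breaking is a parameter sweep that pushes a model until the behaviour of interest disappears. A minimal sketch, using a toy harvested-logistic model rather than anything from the paper (all names and values are illustrative):

```python
# Robustness analysis sketch: force a parameter until the mechanism that
# sustains the population (compensatory logistic growth) breaks down.
# The model and its parameters are hypothetical.

def persists(harvest, r=1.0, K=1.0, steps=500):
    """Logistic growth with constant harvest; True if the population survives."""
    n = K / 2
    for _ in range(steps):
        n = n + r * n * (1 - n / K) - harvest
        if n <= 0:
            return False
    return True

# Break the model: find the first harvest rate that causes extinction.
breakdown = next(h / 100 for h in range(1, 60) if not persists(h / 100))
# Analytically, the maximum sustainable yield here is r*K/4 = 0.25,
# so persistence should break just above that value.
```

Knowing where and why the behaviour ceases (here, the saddle-node loss of the positive equilibrium) is exactly the kind of robust, model-independent insight RA aims at.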
Simulation models that describe autonomous individual organisms (individual-based models, IBMs) or agents (agent-based models, ABMs) have become a widely used tool, not only in ecology but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers covering a wide range of fields within ecology. The protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD a first step toward establishing a more detailed common format for describing IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.
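The seven ODD elements listed above can serve as a completeness checklist for any model description. As a sketch, one might encode them as a data structure and check that a description fills every element; the example entries below describe a hypothetical random-walk forager model, not any model from the paper:

```python
# The seven ODD elements as a checklist (the element names follow the
# protocol; the example model description is hypothetical).

ODD_ELEMENTS = [
    "Purpose",
    "State variables and scales",
    "Process overview and scheduling",
    "Design concepts",
    "Initialization",
    "Input",
    "Submodels",
]

description = {
    "Purpose": "Explore how movement rules affect foraging success.",
    "State variables and scales": "Agents: position, energy; grid: 100x100 cells.",
    "Process overview and scheduling": "Each step: move, feed, update energy.",
    "Design concepts": "Aggregation emerges from local movement rules.",
    "Initialization": "50 agents at random positions, energy = 10.",
    "Input": "None (closed system).",
    "Submodels": "Random walk with step length 1; feeding gain 0.5.",
}

# A complete ODD description covers every element.
complete = all(description.get(e) for e in ODD_ELEMENTS)
```

Treating the elements as required fields, rather than prose headings, makes it easy to flag incomplete model descriptions before publication.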
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.