Current chemical risk assessment procedures may result in imprecise estimates of risk due to sometimes arbitrary simplifying assumptions. As a way to incorporate ecological complexity and improve risk estimates, mechanistic effect models have been recommended. However, effect modeling has not yet been extensively used for regulatory purposes, one of the main reasons being uncertainty about which model type to use to answer specific regulatory questions. We took an individual-based model (IBM), which was developed for risk assessment of soil invertebrates and includes avoidance of highly contaminated areas, and contrasted it with a simpler, more standardized model based on the generic metapopulation matrix model RAMAS. In the latter, the individuals within a sub-population are no longer treated as separate entities and the spatial resolution is lower. We explored the consequences of model aggregation in terms of assessing population-level effects for different spatial distributions of a toxic chemical. For homogeneous contamination of the soil, we found good agreement between the two models, whereas for heterogeneous contamination, at different concentrations, percentages of contaminated area, and food levels, RAMAS results were alternately similar to IBM results with and without avoidance. This inconsistency is explained by the behavioral responses that are included in the IBM but not in RAMAS. Overall, RAMAS was less sensitive than the IBM in detecting population-level effects of different spatial patterns of exposure. We conclude that choosing the right model type for risk assessment of chemicals depends on whether or not population-level effects of small-scale heterogeneity in exposure need to be detected. We recommend that, if in doubt, both model types should be used and compared. Describing both models following the same standard format, the ODD protocol, makes them equally transparent and understandable. The simpler model helps to build up trust in the more complex model and can be used for more homogeneous exposure patterns. The more complex model helps in detecting and understanding the limitations of the simpler model and is needed to ensure ecological realism for more complex exposure scenarios. (C) 2013 Elsevier B.V. All rights reserved.
The potential of ecological models for supporting environmental decision making is increasingly acknowledged. However, it often remains unclear whether a model is realistic and reliable enough. Good practice for developing and testing ecological models has not yet been established. Therefore, TRACE, a general framework for documenting a model's rationale, design, and testing was recently suggested. Originally TRACE was aimed at documenting good modelling practice. However, the word 'documentation' does not convey TRACE's urgency. Therefore, we re-define TRACE as a tool for planning, performing, and documenting good modelling practice. TRACE documents should provide convincing evidence that a model was thoughtfully designed, correctly implemented, thoroughly tested, well understood, and appropriately used for its intended purpose. TRACE documents link the science underlying a model to its application, thereby also linking modellers and model users, for example stakeholders, decision makers, and developers of policies. We report on first experiences in producing TRACE documents. We found that the original idea underlying TRACE was valid, but to make its use more coherent and efficient, an update of its structure and more specific guidance for its use are needed. The updated TRACE format follows the recently developed framework of model 'evaludation': the entire process of establishing model quality and credibility throughout all stages of model development, analysis, and application. TRACE thus becomes a tool for planning, documenting, and assessing model evaludation, which includes understanding the rationale behind a model and its envisaged use. We introduce the new structure and revised terminology of TRACE and provide examples. (C) 2014 Elsevier B.V. All rights reserved.
Females may select a mate based on signalling traits that are believed to accurately correlate with heritable aspects of male quality. Anthropogenic actions, in particular chemicals released into the environment, are now disrupting the accuracy of mating signals to convey information about male quality. The long-term prediction for disrupted mating signals is most commonly loss of female preference. Yet, this prediction has rarely been tested using quantitative models. We use agent-based models to explore the effects of rapid disruption of mating signals. In our model, a gene determines survival. Males signal their level of genetic quality via a signal trait, which females use to select a mate. We allowed this system of sexual selection to become established before introducing a disruption between the male signal trait and quality, similar in nature to that induced by exogenous chemicals. Finally, we assessed the capacity of the system to recover from this disruption. We found that within a relatively short time frame, disruption of mating signals led to a lasting loss of female preference. Decreases in mean viability at the population level were also observed, because sexual selection acting against newly arising deleterious mutations was relaxed. The ability of the population to recover from disrupted mating signals was strongly influenced by the mechanisms that promoted or maintained genetic diversity in traits under sexual selection. Our simple model demonstrates that environmental perturbations to the accuracy of male mating signals can result in a long-term loss of female preference for those signals within a few generations. What is more, the loss of this preference can have knock-on consequences for mean population fitness.
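The core of the disruption scenario described above can be sketched in a few lines. This is an illustrative toy version, not the published model: the ten-male choice pool, the noise term, and the uniform quality distribution are all assumptions made for the sketch.

```python
import random

def choose_mate(males, disrupted, rng):
    """Female picks the male with the largest signal trait.
    Honest signals track quality; disrupted signals are pure noise."""
    def signal(quality):
        return rng.random() if disrupted else quality + 0.1 * rng.random()
    return max(males, key=signal)

rng = random.Random(42)
males = [0.1 * i for i in range(10)]  # heritable qualities 0.0 .. 0.9

honest = [choose_mate(males, disrupted=False, rng=rng) for _ in range(200)]
noisy = [choose_mate(males, disrupted=True, rng=rng) for _ in range(200)]

# With honest signals the highest-quality male is always chosen; once the
# signal is decoupled from quality, choice becomes effectively random and
# the mean quality of chosen mates falls toward the population average.
print(round(sum(honest) / 200, 2), round(sum(noisy) / 200, 2))
```

The drop in mean chosen quality under disruption mirrors the relaxation of sexual selection against deleterious variants reported in the abstract.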
Ecosystems respond in various ways to disturbances. Quantifying ecological stability therefore requires inspecting multiple stability properties, such as resistance, recovery, persistence and invariability. Correlations among these properties can reduce the dimensionality of stability, simplifying the study of environmental effects on ecosystems. A key question is how the kind of disturbance affects these correlations. We here investigated the effect of three disturbance types (random, species-specific, local) applied at four intensity levels, on the dimensionality of stability at the population and community level. We used previously parameterized models that represent five natural communities, varying in species richness and the number of trophic levels. We found that disturbance type but not intensity affected the dimensionality of stability and only at the population level. The dimensionality of stability also varied greatly among species and communities. Therefore, studying stability cannot be simplified to using a single metric and multi-dimensional assessments are still to be recommended.
Population viability analysis (PVA) models are used to estimate population extinction risk under different scenarios. Both simple and complex PVA models are developed and have their specific pros and cons; the question therefore arises whether we always use the most appropriate model type. Generally, the specific purpose of a model and the availability of data are listed as determining the choice of model type, but this has not been formally tested yet. We quantified the relative importance of model purpose and nine metrics of data availability and resolution for the choice of a PVA model type, while controlling for effects of the different life histories of the modelled species. We evaluated 37 model pairs, each consisting of a generally simpler, population-based model (PBM) and a more complex, individual-based model (IBM) developed for the same species. The choice of model type was primarily affected by the availability and resolution of demographic, dispersal and spatial data. Low-resolution data resulted in the development of less complex models. Model purpose did not affect the choice of the model type. We confirm the general assumption that poor data availability is the main reason for the wide use of simpler models, which may have limited predictive power for population responses to changing environmental conditions. Conservation biology is a crisis discipline in which researchers have learned to work with the data at hand. However, for threatened and poorly known species, there is no shortcut when developing either a PBM or an IBM: investments to collect appropriately detailed data are required to ensure PVA models can assess extinction risk under complex environmental conditions. (C) 2015 Elsevier B.V. All rights reserved.
Both dispersal and local demographic processes determine a population's distribution among habitats of varying quality, yet most theory, experiments, and field studies have focused on the former. We use a generic model to show how both processes contribute to a population's distribution, and how the relative importance of each mechanism depends on scale. In contrast to studies considering only habitat-dependent dispersal, we show that predictions of ideal free distribution (IFD) theory are relevant even at landscape scales, where the assumptions of IFD theory are violated. This is because scales that inhibit one process promote the other's ability to drive populations to the IFD. Furthermore, because multiple processes can generate IFDs, the pattern alone does not specify a causal mechanism. This is important because populations with IFDs generated by dispersal or demography respond very differently to shifts in resource distributions.
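The dispersal-driven route to an ideal free distribution can be illustrated with a minimal two-patch sketch (not the paper's generic model): individuals step toward the patch with the higher per-capita resource supply until intakes equalize, which reproduces input matching, i.e. the population ratio converges to the resource ratio.

```python
def ifd_dispersal(n1, n2, r1, r2, max_steps=10_000):
    """Move one individual at a time toward the patch with the higher
    per-capita resource supply; stop when neither patch is better."""
    for _ in range(max_steps):
        if r1 / n1 > r2 / n2 and n2 > 1:
            n1, n2 = n1 + 1, n2 - 1
        elif r2 / n2 > r1 / n1 and n1 > 1:
            n1, n2 = n1 - 1, n2 + 1
        else:
            break
    return n1, n2

# Input matching: the patch holding 60% of the resources ends up holding
# 60% of the individuals, regardless of the starting distribution.
print(ifd_dispersal(n1=50, n2=50, r1=60.0, r2=40.0))  # (60, 40)
```

A demography-driven route to the same end state would replace the movement rule with patch-specific birth and death rates, which is exactly why, as the abstract notes, the IFD pattern alone does not identify the mechanism.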
Robustness analysis: Deconstructing computational models for ecological theory and applications
(2016)
The design of computational models is path-dependent: the choices made in each step during model development constrain the choices that are available in the subsequent steps. The actual path of model development can be very different, even for the same system, because the path depends on the question addressed, the availability of data, and the consideration of specific expert knowledge, in addition to the experience, background, and modelling preferences of the modellers. Thus, insights from different models are practically impossible to integrate, which hinders the development of general theory. We therefore suggest augmenting the current culture of communicating models as "working just fine" with a culture of presenting analyses in which we try to break models, i.e., to identify where the model mechanisms explaining certain observations break down. We refer to such systematic attempts to break a model as "robustness analysis" (RA). RA is the systematic deconstruction of a model by forcefully changing the model's parameters, structure, and representation of processes. We discuss the nature and elements of RA and provide brief examples. RA cannot be completely formalized into specific techniques and instead corresponds to detective work that is driven by general questions and specific hypotheses, with strong attention focused on unusual behaviours. Both individual modellers and ecological modelling in general will benefit from RA, because RA helps with understanding models and identifying "robust theories": general principles that are independent of the idiosyncrasies of specific models. Integrating the results of RAs from different models addressing certain systems or questions will then provide a comprehensive overview of when certain mechanisms control system behaviour and when and why this control ceases. This approach can provide insights into the mechanisms that lead to regime shifts in actual ecological systems.
Grazing is known as one of the key factors for diversity and community composition in grassland ecosystems, but the response of plant communities towards grazing varies remarkably between sites with different environmental conditions. It is generally accepted that grazing increases plant diversity in productive environments, while it tends to reduce diversity in unproductive habitats (grazing reversal hypothesis). Despite empirical evidence for this pattern the mechanistic link between modes of plant-plant competition and grazing response at the community level still remains poorly understood. Root-competition in particular has rarely been included in theoretical studies, although it has been hypothesized that variations in productivity and grazing regime can alter the relative importance of shoot- and root-competition. We therefore developed an individual-based model based on plant functional traits to investigate the response of a grassland community towards grazing. Models of different complexity, either incorporating only shoot competition or with distinct shoot- and root-competition, were used to study the interactive effects of grazing, resource availability, and the mode of competition (size-symmetric or asymmetric). The pattern predicted by the grazing reversal hypothesis (GRH) can only be explained by our model if shoot- and root-competition are explicitly considered and if size asymmetry of above- and symmetry of below-ground competition is assumed. For this scenario, the model additionally reproduced empirically observed plant trait responses: erect and large plant functional types (PFTs) dominated without grazing, while frequent grazing favoured small PFTs with a rosette growth form. We conclude that interactions between shoot- and root-competition and size symmetry/asymmetry of plant-plant interactions are crucial in order to understand grazing response under different habitat productivities. 
Our results suggest that future empirical trait surveys in grassland communities should include root traits, which have been largely ignored in previous studies, in order to improve predictions of plants' responses to grazing.
Resilience trinity
(2020)
Ensuring ecosystem resilience is an intuitive approach to safeguarding the functioning of ecosystems and hence the future provisioning of ecosystem services (ES). However, resilience is a multi-faceted concept that is difficult to operationalize. Focusing on resilience mechanisms, such as diversity, network architectures or adaptive capacity, has recently been suggested as a means to operationalize resilience. Still, the focus on mechanisms is not specific enough. We suggest a conceptual framework, the resilience trinity, to facilitate management based on resilience mechanisms in three distinctive decision contexts and time horizons: 1) reactive, when there is an imminent threat to ES resilience and a high pressure to act; 2) adjustive, when the threat is known in general but there is still time to adapt management; and 3) provident, when time horizons are very long and the nature of the threats is uncertain, leading to a low willingness to act. Resilience has different interpretations and implications at these different time horizons, which also prevail in different disciplines. Social ecology, ecology and engineering often implicitly focus on provident, adjustive or reactive resilience, respectively, but these different notions of resilience and their corresponding social, ecological and economic trade-offs need to be reconciled. Otherwise, we keep risking unintended consequences of reactive actions, or shying away from provident action because of uncertainties that cannot be reduced. The suggested trinity of time horizons and their decision contexts could help ensure that longer-term management actions are not missed while urgent threats to ES are given priority.
There are two major limitations to the potential of computational models in ecology for producing general insights: their design is path-dependent, reflecting different underlying questions, assumptions, and data, and there is too little robustness analysis exploring where the model mechanisms explaining certain observations break down. We here argue that both limitations could be overcome if modellers in ecology more often replicated existing models, tried to break them, and explored modifications. Replication comprises the re-implementation of an existing model and the replication of its results. Breaking models means identifying the conditions under which the mechanisms represented in a model can no longer explain observed phenomena. The benefits of replication include less effort being spent to enter the iterative stage of model development and more time for systematic robustness analysis. A culture of replication would increase the credibility, coherence and efficiency of computational modelling and thereby facilitate theory development.
When data are limited it is difficult for conservation managers to assess alternative management scenarios and make decisions. The natterjack toad (Bufo calamita) is declining at the edges of its distribution range in Europe and little is known about its current distribution and abundance in Poland. Although different landscape management plans for central Poland exist, it is unclear to what extent they impact this species. Based on these plans, we investigated how four alternative landscape development scenarios would affect the total carrying capacity and population dynamics of the natterjack toad. To facilitate decision-making, we first ranked the scenarios according to their total carrying capacity. We used the software RAMAS GIS to determine the size and location of habitat patches in the landscape. The estimated carrying capacities were very similar for each scenario, and clear ranking was not possible. Only the reforestation scenario showed a marked loss in carrying capacity. We therefore simulated metapopulation dynamics with RAMAS taking into account dynamical processes such as reproduction and dispersal and ranked the scenarios according to the resulting species abundance. In this case, we could clearly rank the development scenarios. We identified road mortality of adults as a key process governing the dynamics and separating the different scenarios. The renaturalisation scenario clearly ranked highest due to its decreased road mortality. Taken together our results suggest that road infrastructure development might be much more important for natterjack toad conservation than changes in the amount of habitat in the semi-natural river valley. We gained these insights by considering both the resulting metapopulation structure and dynamics in the form of a PVA. We conclude that the consideration of dynamic processes in amphibian conservation management may be indispensable for ranking management scenarios.
Individual-based models (IBMs) are increasingly used to link the dynamics of individuals to higher levels of biological organization. Still, many IBMs are data hungry, species specific, and time-consuming to develop and analyze. Many of these issues would be resolved by using general theories of individual dynamics as the basis for IBMs. While such theories have frequently been examined at the individual level, few cross-level tests exist that also try to predict population dynamics. Here we performed a cross-level test of dynamic energy budget (DEB) theory by parameterizing an individual-based model using individual-level data of the water flea, Daphnia magna, and comparing the emerging population dynamics to independent data from population experiments. We found that DEB theory successfully predicted population growth rates and peak densities but failed to capture the decline phase. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small- and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detect gaps in current individual-level theories and ultimately will lead to theory development and the establishment of a generic basis for individual-based models and ecology.
Contamination of soil with toxic heavy metals poses a major threat to the environment and human health. Anthropogenic sources include smelting of ores, municipal wastes, fertilizers, and pesticides. In assessing soil quality and the environmental and ecological risk of contamination with heavy metals, homogeneous contamination of the soil is often assumed. However, soils are very heterogeneous environments. Consequently, both contamination and the response of soil organisms can be assumed to be heterogeneous. This might have consequences for the exposure of soil organisms and for the extrapolation of risk from the individual to the population level. Therefore, to explore how soil contamination of different spatial heterogeneity affects the population dynamics of soil invertebrates, we developed a spatially explicit individual-based model of the springtail Folsomia candida, a standard test species for ecotoxicological risk assessment. In the model, individuals were assumed to sense and avoid contaminated habitat with a certain probability that depends on the contamination level. Avoidance of contaminated areas thus influenced the individuals' movement and feeding, their exposure, and in turn all other biological processes underlying population dynamics. Model rules and parameters were based on data from the literature or were determined via pattern-oriented modelling. The model correctly predicted several patterns that were not used for model design and calibration. Simulation results showed that the ability of the individuals to detect and avoid the toxicant, combined with the presence of clean habitat patches acting as "refuges", made the reduction in equilibrium population size due to toxic effects less sensitive to increases in toxicant concentration. Additionally, the level of heterogeneity among patches of soil (i.e. the difference in concentration) was important: at the same average concentration, a homogeneously contaminated scenario was the least favourable habitat, while higher levels of heterogeneity corresponded to higher population growth rates and equilibrium sizes. Our model can thus be used as a tool for extrapolating short-term effects at the individual level to long-term effects at the population level under more realistic conditions, and for extending standard ecotoxicological laboratory tests to ecological risk assessments.
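The avoidance rule at the heart of the model, sensing a patch's contamination and avoiding it with a concentration-dependent probability, can be sketched as follows. The logistic form, the half-avoidance concentration of 100 mg/kg, and the slope are illustrative assumptions for the sketch, not the calibrated parameters of the published model.

```python
import random

def avoidance_probability(conc, half_conc=100.0, slope=2.0):
    """Probability that an individual avoids a patch, increasing with
    the patch's toxicant concentration (illustrative logistic form)."""
    if conc <= 0.0:
        return 0.0
    return 1.0 / (1.0 + (half_conc / conc) ** slope)

def choose_patch(neighbour_concs, rng=None):
    """Try neighbouring patches from cleanest to dirtiest and settle in
    the first one that is not avoided; if every patch is avoided, fall
    back to the cleanest one ('refuge' behaviour)."""
    rng = rng or random.Random()
    for conc in sorted(neighbour_concs):
        if rng.random() > avoidance_probability(conc):
            return conc
    return min(neighbour_concs)

# A clean patch is always accepted; avoidance rises with concentration.
print(avoidance_probability(0.0), avoidance_probability(100.0))  # 0.0 0.5
```

Because clean patches are accepted with certainty, individuals accumulate in refuges, which is the mechanism behind the reduced population-level sensitivity to toxicant concentration described above.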
Metabolic scaling theory (MST) is an attempt to link physiological processes of individual organisms with macroecology. It predicts a power law relationship with an exponent of -4/3 between mean individual biomass and density during density-dependent mortality (self-thinning). Empirical tests have produced variable results, and the validity of MST is intensely debated. MST focuses on organisms' internal physiological mechanisms but we hypothesize that ecological interactions can be more important in determining plant mass-density relationships induced by density. We employ an individual-based model of plant stand development that includes three elements: a model of individual plant growth based on MST, different modes of local competition (size-symmetric vs. -asymmetric), and different resource levels. Our model is consistent with the observed variation in the slopes of self-thinning trajectories. Slopes were significantly shallower than -4/3 if competition was size-symmetric. We conclude that when the size of survivors is influenced by strong ecological interactions, these can override predictions of MST, whereas when surviving plants are less affected by interactions, individual-level metabolic processes can scale up to the population level. MST, like thermodynamics or biomechanics, sets limits within which organisms can live and function, but there may be stronger limits determined by ecological interactions. In such cases MST will not be predictive.
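The -4/3 prediction can be made concrete as a slope on log-log axes. The sketch below generates an idealized self-thinning trajectory, mean biomass M = c * N^(-4/3), and recovers the exponent from successive points; the densities are illustrative, not data from the study.

```python
import math

def loglog_slopes(densities, exponent=-4.0 / 3.0, c=1.0):
    """Pairwise log-log slopes along an idealized self-thinning line
    where mean individual biomass is M = c * N**exponent."""
    masses = [c * n ** exponent for n in densities]
    return [
        (math.log(m1) - math.log(m0)) / (math.log(n1) - math.log(n0))
        for (n0, m0), (n1, m1) in zip(
            zip(densities, masses), zip(densities[1:], masses[1:])
        )
    ]

slopes = loglog_slopes([1000, 800, 600, 400, 200])
print(all(abs(s + 4.0 / 3.0) < 1e-9 for s in slopes))  # True
```

Empirically estimated slopes shallower than -4/3, as found under size-symmetric competition in the study, would show up here as pairwise slopes systematically closer to zero than the MST value.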
Population models in ecology are often not good at prediction, even if they are complex and seem realistic enough. The reason for this might be that Occam's razor, which is key for minimal models exploring ideas and concepts, has been adopted too uncritically for more realistic models of systems. This can tie models too closely to certain situations, thereby preventing them from predicting the response to new conditions. We therefore advocate a new kind of parsimony to improve the application of Occam's razor. This new parsimony balances two contrasting strategies for avoiding errors in modeling: avoiding the inclusion of nonessential factors (false inclusions) and avoiding the exclusion of sometimes-important factors (false exclusions). It involves a synthesis of traditional modeling and analysis, used to describe the essentials of mechanistic relationships, with elements that are included in a model because they have been reported to be, or can arguably be assumed to be, important under certain conditions. The resulting models should be able to reflect how the internal organization of populations changes and thereby generate representations of the novel behavior necessary for complex predictions, including regime shifts.
Pattern-oriented modelling as a novel way to verify and validate functional-structural plant models
(2018)
Background and Aims Functional-structural plant (FSP) models have been widely used to understand the complex interactions between plant architecture and underlying developmental mechanisms. However, to obtain evidence that a model captures these mechanisms correctly, a clear distinction must be made between model outputs used for calibration and thus verification, and outputs used for validation. In pattern-oriented modelling (POM), multiple verification patterns are used as filters for rejecting unrealistic model structures and parameter combinations, while a second, independent set of patterns is used for validation. Key Results After calibration, our model simultaneously reproduced multiple observed architectural patterns. The model then successfully predicted, without further calibration, the validation patterns. The model supports the hypothesis that carbon allocation can be modelled as being dependent on current organ biomass and sink strength of each organ type, and also predicted the observed developmental timing of the leaf sink-source transition stage.
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.
The causes underlying the increased mortality of honeybee Apis mellifera colonies observed over the past decade remain unclear. Since the evidence for monocausal explanations is so far equivocal, involvement of multiple stressors is generally assumed. We here focus on various aspects of forage availability, which have received less attention than other stressors because it is virtually impossible to explore them empirically. We applied the colony model BEEHAVE, which links within-hive dynamics and foraging, to stylized landscape settings to explore how foraging distance, forage supply, and "forage gaps", i.e. periods in which honeybees cannot find any nectar and pollen, affect colony resilience, and the mechanisms behind these effects. We found that colony extinction was mainly driven by foraging distance, but the timing of forage gaps had the strongest effect on time to extinction. Sensitivity to forage gaps of 15 days was highest in June or July, even if forage availability was otherwise sufficient for survival. Forage availability affected colonies via cascading effects on the queen's egg-laying rate, reduction of newly emerging brood stages developing into adult workers, pollen debt, lack of workforce for nursing, and reduced foraging activity. Forage gaps in July led to a reduction in egg-laying and increased mortality of brood stages at a time when the queen's seasonal egg-laying rate is at its maximum, leading to colony failure over time. Our results demonstrate that badly timed forage gaps interacting with poor overall forage supply reduce honeybee colony resilience. Existing regulation mechanisms, which in principle enable colonies to cope with varying forage supply in a given landscape and year, such as a reduction in egg-laying, have only a certain capacity. Our results are hypothetical, as they are obtained from simplified landscape settings, but they are consistent with existing empirical knowledge. They offer ample opportunities for testing the predicted effects of forage stress in controlled experiments.