Introducing the CTA concept
(2013)
The Cherenkov Telescope Array (CTA) is a new observatory for very high-energy (VHE) gamma rays. CTA has ambitious science goals, which require full-sky coverage, a sensitivity improved by about an order of magnitude, coverage of about four decades in energy, from a few tens of GeV to above 100 TeV, and enhanced angular and energy resolution compared with existing VHE gamma-ray observatories. An international collaboration has formed with more than 1000 members from 27 countries in Europe, Asia, Africa and North and South America. In 2010 the CTA Consortium completed a Design Study and started a three-year Preparatory Phase, which will lead to production readiness of CTA in 2014. In this paper we introduce the science goals and the concept of CTA, and provide an overview of the project.
Ground-based gamma-ray astronomy has had a major breakthrough with the impressive results obtained using systems of imaging atmospheric Cherenkov telescopes. Ground-based gamma-ray astronomy has a huge potential in astrophysics, particle physics and cosmology. CTA is an international initiative to build the next-generation instrument, with a factor of 5-10 improvement in sensitivity in the 100 GeV-10 TeV range and an extension to energies well below 100 GeV and above 100 TeV. CTA will consist of two arrays (one in the north, one in the south) for full-sky coverage and will be operated as an open observatory. The design of CTA is based on currently available technology. This document reports on the status and presents the major design concepts of CTA.
The progress of science is tied to the standardization of measurements, instruments, and data. This is especially true in the Big Data age, where analyzing large data volumes critically hinges on the data being standardized. Accordingly, the lack of community-sanctioned data standards in paleoclimatology has largely precluded the benefits of Big Data advances in the field. Building upon recent efforts to standardize the format and terminology of paleoclimate data, this article describes the Paleoclimate Community reporTing Standard (PaCTS), a crowdsourced reporting standard for such data. PaCTS captures which information should be included when reporting paleoclimate data, with the goal of maximizing the reuse value of paleoclimate data sets, particularly for synthesis work and comparison to climate model simulations. Initiated by the LinkedEarth project, the process to elicit a reporting standard involved an international workshop in 2016, various forms of digital community engagement over the next few years, and grassroots working groups. Participants in this process identified important properties across paleoclimate archives, in addition to the reporting of uncertainties and chronologies; they also identified archive-specific properties and distinguished reporting standards for new versus legacy data sets. This work shows that at least 135 respondents overwhelmingly support a drastic increase in the amount of metadata accompanying paleoclimate data sets. Since such goals are at odds with present practices, we discuss a transparent path toward implementing or revising these recommendations in the near future, using both bottom-up and top-down approaches.
Simulation models that describe autonomous individual organisms (individual based models, IBM) or agents (agent-based models, ABM) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers. (c) 2006 Elsevier B.V. All rights reserved.
In this article we report on a successful application of modern machine learning technology, namely Support Vector Machines, to the problem of assessing the 'drug-likeness' of a chemical from a given set of descriptors of the substance. We were able to drastically improve on the recent result by Byvatov et al. (2003) on this task and achieved an error rate of about 7% on unseen compounds using Support Vector Machines. We see very high potential for such machine learning techniques in a variety of computational chemistry problems that occur in the drug discovery and drug design process.
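The approach described in this abstract can be sketched in a few lines: train a binary SVM on numeric molecular descriptors and score compounds as drug-like or not. The descriptors, data, and kernel settings below are synthetic illustrations, not the paper's actual setup, which the abstract does not specify.

```python
# Hedged sketch: SVM classification of "drug-likeness" from descriptors.
# All data here is synthetic; descriptor names and kernel choice are
# assumptions, not taken from the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 400, 8  # 400 synthetic compounds, 8 descriptors each
X = rng.normal(size=(n, d))
# Synthetic label rule: "drug-like" if a weighted sum of descriptors > 0
y = (X @ rng.normal(size=d) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = SVC(kernel="rbf", C=1.0)  # RBF kernel as a common default choice
clf.fit(scaler.transform(X_train), y_train)
accuracy = clf.score(scaler.transform(X_test), y_test)
```

Descriptor scaling matters for SVMs, since the RBF kernel is distance-based; standardizing features before fitting is standard practice.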
Agent-based models (ABMs) are widely used to predict how populations respond to changing environments. As the availability of food varies in space and time, individuals should have their own energy budgets, but there is no consensus as to how these should be modelled. Here, we use knowledge of physiological ecology to identify major issues confronting the modeller and to make recommendations about how energy budgets for use in ABMs should be constructed. Our proposal is that modelled animals forage as necessary to supply their energy needs for maintenance, growth and reproduction. If there is sufficient energy intake, an animal allocates the energy obtained in the order: maintenance, growth, reproduction, energy storage, until its energy stores reach an optimal level. If there is a shortfall, the priorities for maintenance and growth/reproduction remain the same until reserves fall to a critical threshold below which all are allocated to maintenance. Rates of ingestion and allocation depend on body mass and temperature. We make suggestions for how each of these processes should be modelled mathematically. Mortality rates vary with body mass and temperature according to known relationships, and these can be used to obtain estimates of background mortality rate. If parameter values cannot be obtained directly, then values may provisionally be obtained by parameter borrowing, pattern-oriented modelling, artificial evolution or from allometric equations. The development of ABMs incorporating individual energy budgets is essential for realistic modelling of populations affected by food availability. Such ABMs are already being used to guide conservation planning of nature reserves and shell fisheries, to assess environmental impacts of building proposals including wind farms and highways and to assess the effects on nontarget organisms of chemicals for the control of agricultural pests.
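The allocation rule proposed in this abstract (maintenance first, then growth, then reproduction, then storage, with everything diverted to maintenance below a critical reserve threshold) can be sketched as a simple priority scheme. The function name, parameterization, and numbers below are hypothetical illustrations; the abstract gives only the priority order, not these values.

```python
# Minimal sketch of a priority-ordered energy budget for one time step,
# assuming fixed per-process energy demands. All names and values are
# illustrative assumptions, not taken from the article.
def allocate_energy(intake, maintenance_cost, growth_cost,
                    reproduction_cost, reserves, optimal_reserves,
                    critical_reserves):
    """Allocate one time step's energy intake by priority.

    Returns a dict of the energy assigned to each process. Below the
    critical reserve threshold, all intake goes to maintenance.
    """
    if reserves <= critical_reserves:
        return {"maintenance": intake, "growth": 0.0,
                "reproduction": 0.0, "storage": 0.0}

    remaining = intake
    out = {}
    # Spend on each process in priority order until intake runs out
    for process, cost in [("maintenance", maintenance_cost),
                          ("growth", growth_cost),
                          ("reproduction", reproduction_cost)]:
        spent = min(remaining, cost)
        out[process] = spent
        remaining -= spent
    # Any surplus tops up reserves, but only up to the optimal level
    out["storage"] = min(remaining, max(0.0, optimal_reserves - reserves))
    return out
```

With, say, an intake of 10 units against demands of 4 (maintenance), 3 (growth) and 2 (reproduction) and reserves 3 units below optimum, the single leftover unit goes to storage; with reserves below the critical threshold, the full 10 units go to maintenance.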
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity
Individual-based models (IBMs) predict how dynamics at higher levels of biological organization emerge from individual-level processes. This makes them a particularly useful tool for ecotoxicology, where the effects of toxicants are measured at the individual level but protection goals are often aimed at the population level or higher. However, one drawback of IBMs is that they require significant effort and data to design for each species. A solution would be to develop IBMs for chemical risk assessment that are based on generic individual-level models and theory. Here we show how one generic theory, Dynamic Energy Budget (DEB) theory, can be used to extrapolate the effect of toxicants measured at the individual level to effects on population dynamics. DEB is based on first principles in bioenergetics and uses a common model structure to model all species. Parameterization for a given species is done at the individual level and allows prediction of population-level effects of toxicants for a wide range of environmental conditions and toxicant concentrations. We present the general approach, which in principle can be used for all animal species, and give an example using Daphnia magna exposed to 3,4-dichloroaniline. We conclude that our generic approach holds great potential for standardized ecological risk assessment based on ecological models. Currently available data from standard tests can be used directly for parameterization under certain circumstances, but with limited extra effort standard tests at the individual level could deliver data that would considerably improve the applicability and precision of extrapolation to the population level: specifically, measuring a toxicant's effect on growth in addition to reproduction, and presenting data over time rather than reporting a single EC50 or dose-response curve at one time point.
Individual-based models (IBMs) are increasingly used to link the dynamics of individuals to higher levels of biological organization. Still, many IBMs are data hungry, species specific, and time-consuming to develop and analyze. Many of these issues would be resolved by using general theories of individual dynamics as the basis for IBMs. While such theories have frequently been examined at the individual level, few cross-level tests exist that also try to predict population dynamics. Here we performed a cross-level test of dynamic energy budget (DEB) theory by parameterizing an individual-based model using individual-level data of the water flea, Daphnia magna, and comparing the emerging population dynamics to independent data from population experiments. We found that DEB theory successfully predicted population growth rates and peak densities but failed to capture the decline phase. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small-and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detect gaps in current individual-level theories and ultimately will lead to theory development and the establishment of a generic basis for individual-based models and ecology.
For the ecological risk assessment of toxic chemicals, standardized tests on individuals are often used as proxies for population-level effects. Here, we address the utility of one commonly used metric, reproductive output, as a proxy for population-level effects. Because reproduction integrates the outcome of many interacting processes (e.g., feeding, growth, allocation of energy to reproduction), the observed toxic effects in a reproduction test could be due to stress on any one of many processes. Although this makes reproduction a robust endpoint for detecting stress, it may mask important population-level consequences if the different physiological processes that stress affects are associated with different feedback mechanisms at the population level. We therefore evaluated how an observed reduction in reproduction found in a standard reproduction test translates to effects at the population level if it is caused by hypothetical toxicants affecting different physiological processes (physiological modes of action; PMoAs). For this we used two consumer-resource models: the Yodzis-Innes (YI) model, which is mathematically tractable but requires strong assumptions of energetic equivalence among individuals as they progress through ontogeny, and an individual-based implementation of dynamic energy budget theory (DEB-IBM), which relaxes these assumptions at the expense of tractability. We identified two important feedback mechanisms controlling the link between individual- and population-level stress in the YI model. These mechanisms turned out to also be important for interpreting some of the individual-based model results; for two PMoAs, they determined the population response to stress in both models. In contrast, other stress types involved more complex feedbacks, because they asymmetrically stressed the production efficiency of reproduction and somatic growth.
The feedbacks associated with different PMoAs drastically altered the link between individual- and population-level effects. For example, hypothetical stressors with different PMoAs that had equal effects on reproduction had effects ranging from a negligible decline in biomass to population extinction. Thus, reproduction tests alone are of little use for extrapolating toxicity to the population level, but we showed that the ecological relevance of standard tests could easily be improved if growth is measured along with reproduction.
The wood mouse is a common and abundant species in agricultural landscapes and is a focal species in pesticide risk assessment. Empirical studies on the ecology of the wood mouse have provided sufficient information for the species to be modelled mechanistically. An individual-based model was constructed to explicitly represent the locations and movement patterns of individual mice. This, together with the schedule of pesticide application, allows prediction of the risk to the population from pesticide exposure. The model included life-history traits of wood mice as well as typical landscape dynamics in agricultural farmland in the UK. The model obtains a good fit to the available population data and is fit for risk assessment purposes. It can help identify spatio-temporal situations with the largest potential risk of exposure and enables extrapolation from individual-level endpoints to population-level effects. The largest risk of exposure to pesticides was found when good crop growth in the "sink" fields coincided with high "source" population densities in the hedgerows.
Ecosystems respond in various ways to disturbances. Quantifying ecological stability therefore requires inspecting multiple stability properties, such as resistance, recovery, persistence and invariability. Correlations among these properties can reduce the dimensionality of stability, simplifying the study of environmental effects on ecosystems. A key question is how the kind of disturbance affects these correlations. Here we investigated the effect of three disturbance types (random, species-specific, local), applied at four intensity levels, on the dimensionality of stability at the population and community level. We used previously parameterized models that represent five natural communities, varying in species richness and the number of trophic levels. We found that disturbance type, but not intensity, affected the dimensionality of stability, and only at the population level. The dimensionality of stability also varied greatly among species and communities. Therefore, studying stability cannot be simplified to using a single metric, and multi-dimensional assessments remain recommended.
The potential of ecological models for supporting environmental decision making is increasingly acknowledged. However, it often remains unclear whether a model is realistic and reliable enough. Good practice for developing and testing ecological models has not yet been established. Therefore, TRACE, a general framework for documenting a model's rationale, design, and testing, was recently suggested. Originally TRACE was aimed at documenting good modelling practice. However, the word 'documentation' does not convey TRACE's urgency. Therefore, we re-define TRACE as a tool for planning, performing, and documenting good modelling practice. TRACE documents should provide convincing evidence that a model was thoughtfully designed, correctly implemented, thoroughly tested, well understood, and appropriately used for its intended purpose. TRACE documents link the science underlying a model to its application, thereby also linking modellers and model users, for example stakeholders, decision makers, and developers of policies. We report on first experiences in producing TRACE documents. We found that the original idea underlying TRACE was valid, but that to make its use more coherent and efficient, an update of its structure and more specific guidance for its use are needed. The updated TRACE format follows the recently developed framework of model 'evaludation': the entire process of establishing model quality and credibility throughout all stages of model development, analysis, and application. TRACE thus becomes a tool for planning, documenting, and assessing model evaludation, which includes understanding the rationale behind a model and its envisaged use. We introduce the new structure and revised terminology of TRACE and provide examples. (C) 2014 Elsevier B.V. All rights reserved.
Resilience trinity
(2020)
Ensuring ecosystem resilience is an intuitive approach to safeguard the functioning of ecosystems and hence the future provisioning of ecosystem services (ES). However, resilience is a multi-faceted concept that is difficult to operationalize. Focusing on resilience mechanisms, such as diversity, network architectures or adaptive capacity, has recently been suggested as a means to operationalize resilience. Still, the focus on mechanisms is not specific enough. We suggest a conceptual framework, the resilience trinity, to facilitate management based on resilience mechanisms in three distinctive decision contexts and time horizons: 1) reactive, when there is an imminent threat to ES resilience and high pressure to act; 2) adjustive, when the threat is known in general but there is still time to adapt management; and 3) provident, when time horizons are very long and the nature of the threats is uncertain, leading to a low willingness to act. Resilience has different interpretations and implications at these different time horizons, which also prevail in different disciplines. Social ecology, ecology and engineering often implicitly focus on provident, adjustive or reactive resilience, respectively, but these different notions of resilience and their corresponding social, ecological and economic tradeoffs need to be reconciled. Otherwise, we keep risking unintended consequences of reactive actions, or shying away from provident action because of uncertainties that cannot be reduced. The suggested trinity of time horizons and their decision contexts could help ensure that longer-term management actions are not missed while urgent threats to ES are given priority.
Improving our understanding of biodiversity and ecosystem functioning and our capacity to inform ecosystem management requires an integrated framework for functional biodiversity research (FBR). However, adequate integration among empirical approaches (monitoring and experimental) and modelling has rarely been achieved in FBR. We offer an appraisal of the issues involved and chart a course towards enhanced integration. A major element of this path is the joint orientation towards the continuous refinement of a theoretical framework for FBR that links theory testing and generalization with applied research oriented towards the conservation of biodiversity and ecosystem functioning. We further emphasize existing decision-making frameworks as suitable instruments to practically merge these different aims of FBR and bring them into application. This integrated framework requires joint research planning, and should improve communication and stimulate collaboration between modellers and empiricists, thereby overcoming existing reservations and prejudices. The implementation of this integrative research agenda for FBR requires an adaptation in most national and international funding schemes in order to accommodate such joint teams and their more complex structures and data needs.
The 10th Herbsttreffen Patholinguistik (Autumn Meeting on Patholinguistics), with the main theme "Panorama Patholinguistik: Linguistics Meets Speech and Language Therapy", took place on 19 November 2016 in Potsdam. The Herbsttreffen has been held annually since 2007 by the Verband für Patholinguistik e.V. (vpl). The present proceedings volume contains the four keynote lectures on the main theme, as well as contributions from the short-talk session "Patholinguistik im Fokus" and the poster presentations on further topics from speech and language therapy research and practice.