Refine
Language
- English (5)
Is part of the Bibliography
- yes (5)
Keywords
- Landslide inventory (2)
- Landslide susceptibility (2)
- Southern Kyrgyzstan (2)
- Bayesian model (1)
- Data collection (1)
- Earthquake loss modelling (1)
- Earthquake scenario (1)
- Epistemic uncertainty (1)
- Faceted taxonomy (1)
- Logistic regression (1)
- OpenStreetMap (1)
- Probabilistic exposure modelling (1)
- Remote sensing (1)
- Scheme (1)
- Sensitivity analysis (1)
- Spatially cross-correlated ground motion (1)
- buildings (1)
- downscaling (1)
- early warning (1)
- earthquake (1)
- exposure (1)
- fields (1)
- ground motion fields (1)
- impact forecasting (1)
- logistic regression (1)
- natural hazards (1)
- remote sensing (1)
- risk (1)
- sensitivity (1)
- vulnerability (1)
Much of contemporary landslide research is concerned with predicting and mapping susceptibility to slope failure. Many studies rely on generalised linear models with environmental predictors that are trained with data collected from within and outside the margins of mapped landslides. Whether and how the performance of these models depends on sample size, location, or time remains largely untested. We address this question by exploring the sensitivity of a multivariate logistic regression (one of the most widely used susceptibility models) to data sampled from different portions of landslides in two independent inventories (i.e. a historic and a multi-temporal one) covering parts of the eastern rim of the Fergana Basin, Kyrgyzstan. We find that considering only areas on the lower parts of landslides, and hence most likely their deposits, can improve model performance by >10% over the reference case that uses the entire landslide areas, especially for landslides of intermediate size. Hence, using landslide toe areas may suffice for this particular model and prove useful where landslide scars are vague or hidden in this part of Central Asia. Model performance varied only marginally after progressively updating and adding more landslide data through time. We conclude that landslide susceptibility estimates for the study area remain largely insensitive to changes in data over about a decade. Spatially or temporally stratified sampling contributes only minor variations to model performance. Our findings call for more extensive testing of the concept of dynamic susceptibility and its interpretation in data-driven models, especially within the broader framework of landslide risk assessment under environmental and land-use change.
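The susceptibility model described above, a multivariate logistic regression trained on samples from inside and outside mapped landslide polygons, can be sketched as follows. This is an illustrative toy, not the study's actual pipeline: the predictor names and the synthetic data are assumptions made for demonstration.

```python
# Minimal sketch of a data-driven landslide susceptibility model:
# logistic regression on environmental predictors, with synthetic
# presence/absence samples standing in for real inventory data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical environmental predictors per sampled grid cell.
X = np.column_stack([
    rng.uniform(0, 45, n),    # slope angle (degrees)
    rng.normal(0, 1, n),      # plan curvature (standardised)
    rng.uniform(0, 5000, n),  # distance to nearest fault (m)
])

# Synthetic labels: failure probability increases with slope angle.
p_true = 1.0 / (1.0 + np.exp(-(0.15 * X[:, 0] - 4.0)))
y = rng.binomial(1, p_true)

# Fit the susceptibility model and score it with AUC, the usual
# performance metric in this kind of sensitivity analysis.
model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC: {auc:.2f}")
```

In the study's framing, the sensitivity experiments would amount to refitting this model with samples drawn only from certain landslide portions (e.g. toe areas) or certain inventory epochs and comparing the resulting performance scores.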
Forecasting and early warning systems are important investments to protect lives, properties, and livelihoods. While early warning systems are frequently used to predict the magnitude, location, and timing of potentially damaging events, these systems rarely provide impact estimates, such as the expected amount and distribution of physical damage, human consequences, disruption of services, or financial loss. Complementing early warning systems with impact forecasts has a twofold advantage: it would provide decision makers with richer information to make informed decisions about emergency measures, and it would focus the attention of different disciplines on a common target. This would allow capitalizing on synergies between disciplines and boosting the development of multi-hazard early warning systems. This review discusses the state of the art in impact forecasting for a wide range of natural hazards. We outline the added value of impact-based warnings compared to hazard forecasting for the emergency phase, indicate challenges and pitfalls, and synthesize the review results across the hazard types most relevant for Europe.
In seismic risk assessment, the sources of uncertainty associated with building exposure modelling have not received as much attention as other components related to hazard and vulnerability. Conventional practices, such as assuming absolute portfolio compositions (i.e., proportions per building class) from expert-based assumptions over aggregated data, crudely disregard the contribution of exposure uncertainty to earthquake loss models. In this work, we introduce the concept that the degree of knowledge of a building stock can be described within a Bayesian probabilistic approach that integrates both expert-based prior distributions and data collection on individual buildings. We investigate the impact of the epistemic uncertainty in the portfolio composition on scenario-based earthquake loss models through an exposure-oriented logic tree arrangement based on synthetic building portfolios. For illustrative purposes, we consider the residential building stock of Valparaíso (Chile) subjected to seismic ground shaking from one subduction earthquake. We have found that building class reconnaissance, either from prior assumptions in desktop studies with aggregated data (top-down approach) or from building-by-building data collection (bottom-up approach), plays a fundamental role in the statistical modelling of exposure. To model the vulnerability of such a heterogeneous building stock, we require that the associated set of structural fragility functions handles multiple spectral periods. Thereby, we also discuss the relevance and specific uncertainty of generating either uncorrelated or spatially cross-correlated ground motion fields within this framework. We show how the various epistemic uncertainties embedded within these probabilistic exposure models propagate differently into the computed direct financial losses.
This work calls for further efforts to redesign desktop exposure studies, while also highlighting the importance of exposure data collection with standardized and iterative approaches.
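The core idea of the abstract above, combining an expert-based prior over building-class proportions with building-by-building survey counts, maps naturally onto a Dirichlet-multinomial update. The sketch below is a hypothetical illustration under assumed class names and invented numbers, not the paper's implementation.

```python
# Illustrative Bayesian update of a portfolio composition:
# a Dirichlet prior (expert-based, top-down) is updated with
# multinomial survey counts (building-by-building, bottom-up).
import numpy as np

classes = ["masonry", "reinforced_concrete", "timber"]  # assumed classes

# Expert-based prior proportions, encoded as Dirichlet pseudo-counts;
# the concentration expresses how strongly we trust the desktop study.
prior_mean = np.array([0.5, 0.4, 0.1])
concentration = 20.0
alpha_prior = prior_mean * concentration

# Bottom-up data collection: class counts from a field survey.
survey_counts = np.array([12, 55, 3])

# Conjugate update: posterior pseudo-counts are prior + observed counts.
alpha_post = alpha_prior + survey_counts
posterior_mean = alpha_post / alpha_post.sum()

for name, prior_p, post_p in zip(classes, prior_mean, posterior_mean):
    print(f"{name}: prior {prior_p:.2f} -> posterior {post_p:.2f}")
```

With this representation, the "degree of knowledge" of the stock is explicit: a small concentration or few surveyed buildings yields a wide posterior over compositions, which can then be sampled to build the exposure branches of a logic tree.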
Efforts have been made in the past to enhance building exposure models on a regional scale with increasing spatial resolution by integrating different data sources. This work follows a similar path and focuses on the downscaling of the existing SARA exposure model that was proposed for the residential building stock of the communes of Valparaíso and Viña del Mar (Chile). Although this model allowed great progress in harmonising building classes and characterising their differential physical vulnerabilities, it is now outdated and, in any case, spatially aggregated over large administrative units. Hence, to more accurately consider the impact of future earthquakes on these cities, it is necessary to employ more reliable exposure models. For such a purpose, we propose updating the existing model through a Bayesian approach by integrating ancillary data that have been made increasingly available through Volunteered Geographic Information (VGI) activities. Its spatial representation is also optimised at higher-resolution aggregation units that avoid the inconvenience of incomplete building-by-building footprints. A worst-case earthquake scenario is presented to calculate direct economic losses and to highlight, within a sensitivity analysis, the degree of uncertainty imposed by exposure models in comparison with other parameters used to generate the seismic ground motions. This example shows the great potential of using increasingly available VGI to update worldwide building exposure models, as well as its importance in scenario-based seismic risk assessment.
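The downscaling step described above, redistributing counts aggregated over a large administrative unit onto finer spatial units, can be sketched with a simple proportional-allocation rule. The proxy variable and all numbers below are invented for illustration; the actual model uses richer VGI-derived evidence.

```python
# Hypothetical sketch of spatial downscaling: building counts known only
# for a whole admin unit are redistributed to finer cells in proportion
# to an ancillary density proxy (e.g. OpenStreetMap footprint counts).
import numpy as np

total_buildings = 1200  # count reported for the admin unit

# Assumed VGI proxy: number of mapped footprints in each fine cell.
footprints_per_cell = np.array([30, 5, 120, 45, 0, 60])

# Allocate the aggregated total proportionally to the proxy.
weights = footprints_per_cell / footprints_per_cell.sum()
downscaled = total_buildings * weights

for cell, count in enumerate(downscaled):
    print(f"cell {cell}: {count:.1f} buildings")
```

The allocation conserves the aggregated total by construction; cells without any mapped footprints receive no buildings, which is why footprint completeness matters for the resulting high-resolution model.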