We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with a heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded in the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to use the long simulated record to address questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recently increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.
A volcanic eruption is usually preceded by seismic precursors, but their interpretation and use for forecasting the eruption onset time remain a challenge. Part of the eruptive processes in the open conduits of volcanoes may be similar to those encountered in geysers. Since geysers erupt more often, they are useful sites for testing new forecasting methods. We tested the application of Permutation Entropy (PE) as a robust method to assess the complexity of seismic recordings of the Strokkur geyser, Iceland. Strokkur features eruptive cycles lasting several minutes, enabling us to verify in 63 recorded cycles whether PE behaves consistently from one eruption to the next. We performed synthetic tests to understand the effect of different parameter settings in the PE calculation. Our application to Strokkur shows a distinct, repeating PE pattern consistent with previously identified phases of the eruptive cycle. We find a systematic increase in PE within the last 15 s before the eruption, indicating that an eruption is imminent. We quantified the predictive power of PE, showing that PE performs better than seismic signal strength or quiescence at forecasting eruptions.
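For readers unfamiliar with the measure, the sketch below computes Bandt-Pompe permutation entropy for a one-dimensional signal. It is a generic illustration rather than the authors' implementation, and the order and delay values are placeholder assumptions, not the settings tuned for the Strokkur recordings.

```python
# Minimal permutation-entropy sketch (Bandt & Pompe, 2002).
# Illustrative only: order=4 and delay=1 are assumed defaults, not the
# parameters used in the Strokkur study.
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=4, delay=1, normalize=True):
    """Permutation entropy of the 1-D array x."""
    x = np.asarray(x, dtype=float)
    n_windows = len(x) - (order - 1) * delay
    if n_windows <= 0:
        raise ValueError("signal too short for the chosen order/delay")
    # Ordinal pattern of each embedded window = permutation that sorts it.
    counts = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay]))
        for i in range(n_windows)
    )
    probs = np.array(list(counts.values()), dtype=float) / n_windows
    pe = -np.sum(probs * np.log(probs))
    # Normalize by log(order!) so PE lies in [0, 1].
    return pe / math.log(math.factorial(order)) if normalize else pe

# A regular signal yields lower PE than white noise (PE close to 1).
t = np.linspace(0.0, 10.0, 2000)
print(permutation_entropy(np.sin(2.0 * np.pi * t)))
print(permutation_entropy(np.random.default_rng(0).normal(size=2000)))
```

In a forecasting setting such as the one described above, PE would be evaluated in short sliding windows of the seismic record so that a systematic pre-eruptive increase becomes visible in time.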
Predicting Paris: Multi-Method Approaches to Forecast the Outcomes of Global Climate Negotiations
(2016)
We examine the negotiations held under the auspices of the United Nations Framework Convention on Climate Change in Paris, December 2015. Prior to these negotiations, there was considerable uncertainty about whether an agreement would be reached, particularly given that the world’s leaders had failed to do so in the 2009 negotiations held in Copenhagen. Amid this uncertainty, we applied three different methods to predict the outcomes: an expert survey and two negotiation simulation models, namely the Exchange Model and the Predictioneer’s Game. After the event, these predictions were assessed against the coded texts that were agreed in Paris. The evidence suggests that combining experts’ predictions into a collective expert prediction yields significantly more accurate forecasts than individual experts’ predictions. The differences in performance between the two negotiation simulation models were not statistically significant.
A comprehensive study of seismic hazard and earthquake triggering is crucial for the effective mitigation of earthquake risks. The destructive nature of earthquakes motivates researchers to work on forecasting despite the apparent randomness of earthquake occurrences. Understanding the underlying mechanisms and patterns of earthquakes is vital, given their potential for widespread devastation and loss of life. This thesis combines methodologies, including Coulomb stress calculations and aftershock analysis, to shed light on earthquake complexities and ultimately enhance seismic hazard assessment.
The Coulomb failure stress (CFS) criterion is widely used to predict the spatial distribution of aftershocks following large earthquakes. However, uncertainties in CFS calculations arise from non-unique slip inversions and unknown fault networks, and in particular from the choice of the assumed aftershock (receiver) mechanisms. Recent studies have proposed alternative stress quantities and deep-neural-network approaches as superior to CFS with predefined receiver mechanisms. To challenge these propositions, I utilized 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. The analysis also investigates the impact of the magnitude cutoff, grid-size variation, and aftershock duration on the ranking of stress metrics using receiver operating characteristic (ROC) analysis. The results reveal that the performance of the stress metrics improves significantly after accounting for receiver variability and when considering larger aftershocks and shorter time periods, without altering the relative ranking of the different stress metrics.
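For reference, the Coulomb failure stress change resolved on a receiver fault plane is conventionally defined as follows; the friction value mentioned in the comments is a commonly assumed default, not a number taken from this work.

```latex
% Coulomb failure stress change on a receiver plane:
%   \Delta\tau       shear stress change resolved in the receiver slip direction
%   \Delta\sigma_n   normal stress change (positive = unclamping)
%   \mu'             effective friction coefficient (often assumed ~0.4)
\Delta \mathrm{CFS} = \Delta \tau + \mu' \, \Delta \sigma_{n}
```

Positive ΔCFS brings a receiver fault closer to failure, which is why the spatial aftershock distribution can be tested against the sign and size of this quantity in an ROC analysis.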
To corroborate the Coulomb stress calculations with the findings of earthquake source studies in more detail, I studied the source properties of the 2005 Kashmir earthquake and its aftershocks, aiming to unravel the seismotectonics of the NW Himalayan syntaxis. I simultaneously relocated the mainshock and its largest aftershocks using phase data and then carried out a comprehensive analysis of Coulomb stress changes on the aftershock planes. This analysis shows that all large aftershocks lie in regions of positive stress change, indicating triggering by either co-seismic or post-seismic slip on the mainshock fault.
Finally, I investigated the relationship between mainshock-induced stress changes and associated seismicity parameters, in particular those of the frequency-magnitude (Gutenberg-Richter) distribution and the temporal aftershock decay (Omori-Utsu law). For that purpose, I used my global data set of 127 mainshock-aftershock sequences with the calculated Coulomb stress changes (ΔCFS) and the alternative receiver-independent stress metrics in the vicinity of the mainshocks and analyzed how the aftershock properties depend on the stress values. Surprisingly, the results show a clear positive correlation between the Gutenberg-Richter b-value and the induced stress, contrary to expectations from laboratory experiments. This observation highlights the significance of structural heterogeneity and strength variations for seismicity patterns. Furthermore, the study demonstrates that aftershock productivity increases nonlinearly with stress, while the Omori-Utsu parameters c and p systematically decrease with increasing stress changes. These partly unexpected findings have significant implications for future estimations of aftershock hazard.
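The two empirical laws referred to above take their standard forms, reproduced here for clarity; the notation follows common usage rather than the specific parameterization of the thesis.

```latex
% Gutenberg-Richter frequency-magnitude distribution:
%   N(\ge M) = number of events with magnitude \ge M; the b-value quantifies
%   the relative proportion of small to large events.
\log_{10} N(\ge M) = a - b\,M

% Omori-Utsu law for the aftershock rate at time t after the mainshock:
%   K = productivity, c = early-time offset, p = decay exponent.
n(t) = \frac{K}{(t + c)^{p}}
```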
The findings of this thesis provide valuable insights into earthquake triggering mechanisms by examining the relationship between stress changes and aftershock occurrence. The results contribute to an improved understanding of earthquake behavior and can aid the development of more accurate probabilistic seismic hazard forecasts and risk-reduction strategies.
Firms engage in forecasting and foresight activities to predict the future or to explore possible future states of the business environment in order to pre-empt and shape it (corporate foresight). Similarly, the dynamic capabilities approach addresses the firm capabilities relevant for adapting to fast change in an environment that threatens a firm’s competitiveness and survival. However, despite these conceptual similarities, their relationship remains opaque. To close this gap, we conduct qualitative interviews with foresight experts as an exploratory study. Our results show that foresight and dynamic capabilities both aim at organizational renewal to meet future challenges. Foresight can be regarded as a specific activity that corresponds with the sensing process of dynamic capabilities. The experts disagree about the relationship between foresight and sensing and see no direct links with transformation. However, foresight can better inform post-sensing activities and therefore indirectly contribute to an adequate reconfiguration of the resource base, increased innovativeness, and firm performance.
Leadership development (LD) is a crucial success factor for startups to increase their human capital, survival rate, and overall performance. However, only a minority of young ventures actively engage in LD, and research rather focuses on large corporations and SMEs, which do not share the typical startup characteristics such as a rather young workforce, flat hierarchies, resource scarcity, and high time pressure. To overcome this practical and theoretical lack of knowledge, we engage in foresight and explore which leadership development techniques will be most relevant for startups within the next five to ten years. To formulate the most probable scenario, we conduct an international, two-stage Delphi study with 27 projections among industry experts. According to the expert panel, the majority of startups will engage in leadership development over the next decade. Most startups will aim to develop the leadership capabilities of their workforce as a whole and use external support. The most prominent prospective LD measures in startups include experiential learning methods, such as action learning, developmental job assignments, multi-rater feedback, as well as digital experiential learning programs, and developmental relationships such as coaching in digital one-to-one sessions. Self-managed learning will play a more important role than formal training.
The intrinsic predictability of ecological time series and its potential to guide forecasting
(2019)
Despite ample research, understanding the spread of plants and predicting their ability to track projected climate changes remain a formidable challenge. We modelled the spread of North American wind-dispersed trees in current and future (c. 2060) conditions, accounting for variation in 10 key dispersal, demographic and environmental factors affecting population spread. Predicted spread rates vary substantially among the 12 study species, primarily due to inter-specific variation in maturation age, fecundity and seed terminal velocity. Future spread is predicted to be faster if atmospheric CO2 enrichment increases fecundity and advances maturation, irrespective of the projected changes in mean surface windspeed. Yet predicted wind-driven spread will match future climate changes for only a few species, and only if seed abscission occurs exclusively in strong winds and environmental conditions favour high survival of the farthest-dispersed seeds. Because such conditions are unlikely, North American wind-dispersed trees are expected to lag behind the projected climate range shift.
The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz’s 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz’s theory. Aided by numerical simulations, the present work reexamines Lorenz’s model and reviews both sides of the argument, paying particular attention to the roles played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved—which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz’s theory is reconciled.
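A back-of-the-envelope version of the scaling argument behind the −3 threshold, sketched below in the standard eddy-turnover-time form, is consistent with the abstract but is not taken from the paper itself.

```latex
% Kinetic energy spectrum E(k) \sim k^{-p}; the slope discussed above is -p.
% Eddy turnover time at wavenumber k:
\tau(k) \;\sim\; \left[ k^{3} E(k) \right]^{-1/2} \;\sim\; k^{(p-3)/2}

% Error seeded at a small scale k_N = 2^{N} k_0 contaminates the large
% scale k_0 after roughly the sum of turnover times over the intervening
% octaves:
T_{\mathrm{pred}} \;\approx\; \sum_{n=0}^{N} \tau\!\left( 2^{n} k_0 \right)
  \;\sim\; \sum_{n=0}^{N} 2^{\,n(p-3)/2}

% p < 3 (slope shallower than -3): the series converges as N grows, so the
% predictability horizon stays finite no matter how small the initial error,
% giving the inherent finite-time barrier.
% p > 3 (slope steeper than -3): the series diverges, so the horizon can be
% extended indefinitely by shrinking the initial-condition error.
```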