Filter
Year of publication
Document type
- Scientific article (19)
- Postprint (10)
- Doctoral thesis (8)
- Conference publication (2)
- Review (1)
Part of the bibliography
- yes (40) (remove)
Keywords
- prediction (40) (remove)
Institute
- Department Psychologie (5)
- Institut für Biochemie und Biologie (5)
- Institut für Physik und Astronomie (4)
- Institut für Umweltwissenschaften und Geographie (3)
- Mathematisch-Naturwissenschaftliche Fakultät (3)
- Department Linguistik (2)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (2)
- Hochschulambulanz (2)
- Institut für Ernährungswissenschaft (2)
- Institut für Mathematik (2)
We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded at the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to address with the long-simulated seismicity questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.
Despite recent growth of research on the effects of prosocial media, processes underlying these effects are not well understood. Two studies explored theoretically relevant mediators and moderators of the effects of prosocial media on helping. Study 1 examined associations among prosocial- and violent-media use, empathy, and helping in samples from seven countries. Prosocial-media use was positively associated with helping. This effect was mediated by empathy and was similar across cultures. Study 2 explored longitudinal relations among prosocial-video-game use, violent-video-game use, empathy, and helping in a large sample of Singaporean children and adolescents measured three times across 2 years. Path analyses showed significant longitudinal effects of prosocial- and violent-video-game use on prosocial behavior through empathy. Latent-growth-curve modeling for the 2-year period revealed that change in video-game use significantly affected change in helping, and that this relationship was mediated by change in empathy.
Ecologists urgently need a better ability to predict how environmental change affects biodiversity. We examine individual-based ecology (IBE), a research paradigm that promises better predictive ability by using individual-based models (IBMs) to represent ecological dynamics as arising from how individuals interact with their environment and with each other. A key advantage of IBMs is that the basis for predictions (fitness maximization by individual organisms) is more general and reliable than the empirical relationships that other models depend on. Case studies illustrate the usefulness and predictive success of long-term IBE programs. The pioneering programs had three phases: conceptualization, implementation, and diversification. Continued validation of models runs throughout these phases. The breakthroughs that make IBE more productive include standards for describing and validating IBMs, improved and standardized theory for individual traits and behavior, software tools, and generalized instead of system-specific IBMs. We provide guidelines for pursuing IBE and a vision for future IBE research.
This dissertation investigates the working memory mechanism subserving human sentence processing and its relative contribution to processing difficulty as compared to syntactic prediction. Within the last decades, evidence for a content-addressable memory system underlying human cognition in general has accumulated (e.g., Anderson et al., 2004). In sentence processing research, it has been proposed that this general content-addressable architecture is also used for language processing (e.g., McElree, 2000).
Although there is a growing body of evidence from various kinds of linguistic dependencies that is consistent with a general content-addressable memory subserving sentence processing (e.g., McElree et al., 2003; Van Dyke, 2006), the case of reflexive-antecedent dependencies has challenged this view. It has been proposed that in the processing of reflexive-antecedent dependencies, a syntactic-structure-based memory access is used rather than cue-based retrieval within a content-addressable framework (e.g., Sturt, 2003).
Two eye-tracking experiments on Chinese reflexives were designed to tease apart accounts assuming a syntactic-structure-based memory access mechanism from cue-based retrieval (implemented in ACT-R as proposed by Lewis and Vasishth, 2005).
In both experiments, interference effects were observed from noun phrases which syntactically do not qualify as the reflexive's antecedent but match the animacy requirement the reflexive imposes on its antecedent. These results are interpreted as evidence against a purely syntactic-structure based memory access. However, the exact pattern of effects observed in the data is only partially compatible with the Lewis and Vasishth cue-based parsing model.
Therefore, an extension of the Lewis and Vasishth model is proposed. Two principles are added to the original model, namely 'cue confusion' and 'distractor prominence'.
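The cue-based retrieval mechanism at issue can be sketched with ACT-R's spreading-activation equation, as used in the Lewis and Vasishth (2005) model. The parameter values and cue names below are illustrative assumptions, not the model's fitted settings:

```python
import math

# Illustrative constants (assumptions, not the model's fitted values).
S_MAX = 1.5   # maximum associative strength
W = 1.0       # total attentional weighting shared across retrieval cues

def activation(chunk_features, retrieval_cues, fan):
    """Spreading activation a chunk receives from the retrieval cues;
    each matching cue contributes (W / n_cues) * (S_MAX - ln(fan))."""
    spread = 0.0
    for cue in retrieval_cues:
        if cue in chunk_features:
            spread += (W / len(retrieval_cues)) * (S_MAX - math.log(fan[cue]))
    return spread

cues = ["subject", "animate"]  # a structural cue plus an animacy cue

# Animate distractor: the "animate" cue now fans out to two chunks in
# memory, diluting the associative strength the target receives.
a_interference = activation({"subject", "animate"}, cues,
                            {"subject": 1, "animate": 2})

# Inanimate distractor: each cue matches only the target antecedent.
a_baseline = activation({"subject", "animate"}, cues,
                        {"subject": 1, "animate": 1})

print(a_baseline, ">", a_interference)  # fan lowers the target's activation
```

The drop in target activation under cue overlap is the similarity-based interference that the experiments above probe with animacy-matching distractors.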
Although interference effects are generally interpreted in favor of a content-addressable memory architecture, an alternative explanation for interference effects in reflexive processing has been proposed which, crucially, might reconcile interference effects with a structure-based account.
It has been argued that interference effects do not necessarily reflect cue-based retrieval interference in a content-addressable memory but might equally well be accounted for by interference effects which have already occurred at the moment of encoding the antecedent in memory (Dillon, 2011).
Three experiments (eye-tracking and self-paced reading) on German reflexives and Swedish possessives were designed to tease apart cue-based retrieval interference from encoding interference. The results of all three experiments suggest that there is no evidence that encoding interference affects the retrieval of a reflexive's antecedent.
Taken together, these findings suggest that the processing of reflexives can be explained with the same cue-based retrieval mechanism that has been invoked to explain syntactic dependency resolution in a range of other structures. This supports the view that the language processing system is located within a general cognitive architecture, with a general-purpose content-addressable working memory system operating on linguistic expressions.
Finally, two experiments (self-paced reading and eye-tracking) using Chinese relative clauses were conducted to determine the relative contribution to sentence processing difficulty of working-memory processes as compared to syntactic prediction during incremental parsing.
Chinese has the cross-linguistically rare property of being a language with subject-verb-object word order and pre-nominal relative clauses. This property leads to opposing predictions of expectation-based accounts and memory-based accounts with respect to the relative processing difficulty of subject vs. object relatives.
Previous studies showed contradictory results, which has been attributed to different kinds of local ambiguities confounding the materials (Lin and Bever, 2011). The two experiments presented are the first to compare Chinese relative clauses in syntactically unambiguous contexts.
The results of both experiments were consistent with the predictions of the expectation-based account of sentence processing but not with the memory-based account. From these findings, I conclude that any theory of human sentence processing needs to take into account the power of predictive processes unfolding in the human mind.
Predicting Paris: Multi-Method Approaches to Forecast the Outcomes of Global Climate Negotiations
(2016)
We examine the negotiations held under the auspices of the United Nations Framework Convention on Climate Change in Paris, December 2015. Prior to these negotiations, there was considerable uncertainty about whether an agreement would be reached, particularly given that the world’s leaders had failed to do so in the 2009 negotiations held in Copenhagen. Amid this uncertainty, we applied three different methods to predict the outcomes: an expert survey and two negotiation simulation models, namely the Exchange Model and the Predictioneer’s Game. After the event, these predictions were assessed against the coded texts that were agreed in Paris. The evidence suggests that combining experts’ predictions into a collective expert prediction yields significantly more accurate predictions than individual experts’ predictions. The differences in performance between the two negotiation simulation models were not statistically significant.
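The benefit of pooling expert judgments can be illustrated with a small simulation. The error structure below (a shared bias plus independent individual noise) is an assumption for illustration, not the survey data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each expert predicts a true value with a shared bias (errors common
# to all experts) plus independent individual noise; the collective
# prediction is the simple mean across experts.
truth = 1.0
n_experts, n_trials = 20, 2000
bias = 0.1 * rng.normal(size=n_trials)                # shared error
noise = 0.5 * rng.normal(size=(n_trials, n_experts))  # individual error
preds = truth + bias[:, None] + noise

indiv_mse = ((preds - truth) ** 2).mean()
collective_mse = ((preds.mean(axis=1) - truth) ** 2).mean()
print(f"individual MSE: {indiv_mse:.3f}, collective MSE: {collective_mse:.3f}")
```

Averaging cancels the independent noise component but not the shared bias, which is why a collective prediction typically beats a typical individual expert without being perfect.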
Background:
Endomyocardial biopsy is considered the gold standard in patients with suspected myocarditis. We aimed to evaluate the impact of bioptic findings on the prediction of successful return to work.
Methods:
In 1153 patients (48.9 ± 12.4 years, 66.2% male), who were hospitalized due to symptoms of left heart failure between 2005 and 2012, an endomyocardial biopsy was performed. Routine clinical and laboratory data, sociodemographic parameters, and noninvasive and invasive cardiac variables including endomyocardial biopsy were registered. Data were linked with return to work data from the German statutory pension insurance program and analyzed by Cox regression.
Results:
A total of 220 patients had a complete data set of hospital and insurance information. Three quarters of patients were virus-positive (54.2% parvovirus B19; other or mixed infection, 16.7%). Mean invasive left ventricular ejection fraction was 47.1% ± 18.6% (left ventricular ejection fraction <45% in 46.3%). Return to work was achieved after a mean interval of 168.8 ± 347.7 days in 220 patients (after 6, 12, and 24 months in 61.3%, 72.2%, and 76.4%, respectively). In multivariate regression analysis, only age (per 10 years: hazard ratio, 1.27; 95% confidence interval, 1.10–1.46; p = 0.001) and left ventricular ejection fraction (per 5% increase: hazard ratio, 1.07; 95% confidence interval, 1.03–1.12; p = 0.002) were associated with an increased probability of return to work, and elevated work intensity (heavy vs light, congestive heart failure: hazard ratio, 0.58; 95% confidence interval, 0.34–0.99; p < 0.049) with a decreased probability. None of the endomyocardial biopsy–derived parameters was significantly associated with return to work, either in the total group or in the subgroup of patients with biopsy-proven myocarditis.
Conclusion:
Added to established predictors, bioptic data demonstrated no additional impact on return to work probability. Thus, the socio-medical evaluation of patients with suspected myocarditis remains an individually oriented process based primarily on clinical and functional parameters.
The Limpopo Basin in southern Africa is prone to droughts which affect the livelihood of millions of people in South Africa, Botswana, Zimbabwe and Mozambique. Seasonal drought early warning is thus vital for the whole region. In this study, the predictability of hydrological droughts during the main runoff period from December to May is assessed using statistical approaches. Three methods (multiple linear models, artificial neural networks, random forest regression trees) are compared in terms of their ability to forecast streamflow with up to 12 months of lead time. The following four main findings result from the study.
1. There are stations in the basin at which standardised streamflow is predictable with lead times up to 12 months. The results show large inter-station differences in forecast skill, but the coefficient of determination reaches values as high as 0.73 (cross-validated).
2. A large range of potential predictors is considered in this study, comprising well-established climate indices, customised teleconnection indices derived from sea surface temperatures, and antecedent streamflow as a proxy of catchment conditions. El Niño and customised indices, representing sea surface temperature in the Atlantic and Indian oceans, prove to be important teleconnection predictors for the region. Antecedent streamflow is a strong predictor in small catchments (with a median of 42% explained variance), whereas teleconnections exert a stronger influence in large catchments.
3. Multiple linear models show the best forecast skill in this study and the greatest robustness compared to artificial neural networks and random forest regression trees, despite their capabilities to represent nonlinear relationships.
4. Employed in early warning, the models can be used to forecast a specific drought level. Even if the coefficient of determination is low, the forecast models have a skill better than a climatological forecast, which is shown by analysis of receiver operating characteristics (ROCs). Seasonal statistical forecasts in the Limpopo show promising results, and thus it is recommended to employ them as complementary to existing forecasts in order to strengthen preparedness for droughts.
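The ROC comparison against a climatological forecast in point 4 can be illustrated with a minimal sketch. The synthetic data, the single predictor, and the 20th-percentile drought definition below are assumptions for illustration, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: one teleconnection-like predictor x drives
# standardised streamflow y, plus noise.
n = 400
x = rng.normal(size=n)                  # e.g. an SST-based index
y = 0.8 * x + 0.6 * rng.normal(size=n)  # standardised streamflow

# Fit a simple linear forecast model on a training split.
x_tr, x_te, y_tr, y_te = x[:300], x[300:], y[:300], y[300:]
A = np.column_stack([np.ones_like(x_tr), x_tr])
coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
y_hat = coef[0] + coef[1] * x_te

# Define a drought event as streamflow below the 20th percentile.
thresh = np.quantile(y_tr, 0.2)
event = y_te < thresh

def roc_auc(score, event):
    # Mann-Whitney form of the ROC area: probability that a drought
    # case receives a lower forecast than a non-drought case.
    pos, neg = score[event], score[~event]
    return (pos[:, None] < neg[None, :]).mean()

auc = roc_auc(y_hat, event)
print(f"ROC AUC: {auc:.2f} (climatological forecast: 0.50)")
```

A climatological forecast carries no event-discriminating information (AUC 0.5), so any AUC clearly above 0.5 indicates skill even when the coefficient of determination is modest.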
In the present work, we use symbolic regression for automated modeling of dynamical systems. Symbolic regression is a powerful and general method suitable for data-driven identification of mathematical expressions. In particular, the structure and parameters of those expressions are identified simultaneously.
We consider two main variants of symbolic regression: sparse regression-based and genetic programming-based symbolic regression. Both are applied to identification, prediction and control of dynamical systems.
We introduce a new methodology for the data-driven identification of nonlinear dynamics for systems undergoing abrupt changes. Building on a sparse regression algorithm derived earlier, the model after the change is defined as a minimum update with respect to a reference model of the system identified prior to the change. The technique is successfully exemplified on the chaotic Lorenz system and the van der Pol oscillator. Issues such as computational complexity, robustness against noise and requirements with respect to data volume are investigated.
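The core of sparse-regression-based symbolic regression can be sketched with sequential thresholded least squares over a monomial library. The toy harmonic oscillator, library, and threshold below are illustrative assumptions, standing in for the Lorenz and van der Pol examples:

```python
import numpy as np

# Toy system (an assumption for illustration): harmonic oscillator
# x' = y, y' = -x, sampled along a trajectory with exact derivatives.
t = np.linspace(0, 10, 500)
x, y = np.cos(t), -np.sin(t)
dX = np.column_stack([y, -x])   # [x', y'] (noise-free sketch)

# Candidate library of monomials up to degree 2.
Theta = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
names = ["1", "x", "y", "x^2", "xy", "y^2"]

def stlsq(Theta, dX, threshold=0.1, iters=10):
    """Sequential thresholded least squares: fit all coefficients,
    zero out the small ones, refit on the surviving terms."""
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dX[:, k],
                                                 rcond=None)
    return Xi

Xi = stlsq(Theta, dX)
for k, lhs in enumerate(["x'", "y'"]):
    terms = [f"{Xi[j, k]:+.2f}*{names[j]}"
             for j in range(len(names)) if Xi[j, k] != 0]
    print(lhs, "=", " ".join(terms))
```

The thresholding step is what yields a sparse, interpretable expression: structure and parameters are identified simultaneously, as described above.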
We show how symbolic regression can be used for time series prediction. Again, issues such as robustness against noise and convergence rate are investigated using the harmonic oscillator as a toy problem. In combination with embedding, we demonstrate the prediction of a propagating front in coupled FitzHugh-Nagumo oscillators. Additionally, we show how we can enhance numerical weather predictions to commercially forecast power production of green energy power plants.
We employ symbolic regression for synchronization control in coupled van der Pol oscillators. Different coupling topologies are investigated. We address issues such as plausibility and stability of the control laws found. The toolkit has been made open source and is used in turbulence control applications.
Genetic programming-based symbolic regression is very versatile and can be adapted to many optimization problems. The heuristic-based algorithm allows for cost-efficient optimization of complex tasks.
We emphasize the ability of symbolic regression to yield white-box models. In contrast to black-box models, such models are accessible and interpretable which allows the usage of established tool chains.
We propose a reduced dynamical system describing the coupled evolution of fluid flow and magnetic field at the top of the Earth's core between the years 1900 and 2014. The flow evolution is modeled with a first-order autoregressive process, while the magnetic field obeys the classical frozen flux equation. An ensemble Kalman filter algorithm serves to constrain the dynamics with the geomagnetic field and its secular variation given by the COV-OBS.x1 model. Using a large ensemble with 40,000 members provides meaningful statistics including reliable error estimates. The model highlights two distinct flow scales. Slowly varying large-scale elements include the already documented eccentric gyre. Localized short-lived structures include distinctly ageostrophic features like the high-latitude polar jet in the Northern Hemisphere. Comparisons with independent observations of the length-of-day variations not only validate the flow estimates but also suggest an acceleration of the geostrophic flows over the last century. Hindcasting tests show that our model outperforms simpler predictions based on linear extrapolation and stationary flow. The predictability limit of about 2,000 years for the magnetic dipole component is mostly determined by the random fast-varying dynamics of the flow and much less by the geomagnetic data quality or lack of small-scale information.
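The assimilation step of an ensemble Kalman filter can be sketched on a toy problem. The scalar AR(1) state, noise levels, and ensemble size below are illustrative assumptions, not the core-flow configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal stochastic ensemble Kalman filter on a scalar AR(1) state
# observed directly with noise (perturbed-observation variant).
rho, q, r = 0.95, 0.1, 0.2      # AR(1) coefficient, model/obs noise std
n_ens, n_steps = 500, 200

truth = 0.0
ens = rng.normal(size=n_ens)    # initial ensemble
err = []

for _ in range(n_steps):
    # Forecast: propagate the truth and every ensemble member with noise.
    truth = rho * truth + q * rng.normal()
    ens = rho * ens + q * rng.normal(size=n_ens)
    obs = truth + r * rng.normal()

    # Analysis: Kalman gain estimated from ensemble statistics, each
    # member updated towards its own perturbed copy of the observation.
    P = ens.var(ddof=1)          # forecast error variance
    K = P / (P + r ** 2)         # Kalman gain
    ens = ens + K * (obs + r * rng.normal(size=n_ens) - ens)
    err.append(abs(truth - ens.mean()))

print(f"mean analysis error: {np.mean(err):.3f} (obs noise std: {r})")
```

The ensemble both carries the state estimate (its mean) and supplies the error statistics the gain needs, which is how a large ensemble yields the reliable uncertainty estimates mentioned above.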