This thesis was devoted to the study of the coupled system composed of the El Niño/Southern Oscillation and the Annual Cycle. More precisely, the work focused on two main problems: 1. How to separate both oscillations in a tractable model for understanding the behaviour of the whole system. 2. How to model the system in order to achieve a better understanding of the interaction, as well as to predict future states of the system. We focused our efforts on the sea surface temperature equations, considering that atmospheric effects were secondary to the ocean dynamics. The results may be summarised as follows:

1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Instead, nonlinear methods of dimensionality reduction prove better at defining a lower limit for the dimensionality of the system and at explaining the statistical results in a more physical way [1]. In particular, Isomap, a nonlinear modification of multidimensional scaling, provides a physically appealing way of decomposing the data, as it substitutes an approximation of the geodesic distances on the manifold for the Euclidean distances. We expect that this method could be successfully applied to other oscillatory extended systems and, in particular, to meteorological systems.

2. A three-dimensional dynamical system describing the dynamics of the sea surface temperature in the tropical Pacific Ocean could be modelled using a backfitting algorithm. Although few data points were available, we could predict the future behaviour of the coupled ENSO-Annual Cycle system at lead times below six months, even though the constructed system presented several drawbacks: few data points as input to the backfitting algorithm, an untrained model, a lack of forcing with external data, and the simplification of using a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skill of the three-dimensional time series was as good as that of much more complex models. This suggests that the climatological system in the tropics is mainly explained by ocean dynamics, while the atmosphere plays a secondary role in the physics of the process. Relevant predictions for short lead times can be made using a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that the nonlinear interaction between the oscillations is small, and that noise plays a secondary role in the fundamental dynamics of the oscillations [2].

A global view of the work shows a general procedure for modelling climatological systems: first, find a suitable method of either linear or nonlinear dimensionality reduction; then, extract low-dimensional time series with the chosen method; finally, fit a low-dimensional model using a backfitting algorithm in order to predict future states of the system.
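The Isomap decomposition described above can be sketched in a few lines. A minimal example, assuming monthly SST anomaly fields flattened to one vector per time step; the array shapes and random data below are illustrative placeholders, not the thesis data:

```python
# A minimal sketch of nonlinear dimensionality reduction with Isomap,
# assuming SST anomaly fields flattened to one vector per time step.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
n_months, n_gridpoints = 480, 1000                   # hypothetical 40-year record
sst = rng.standard_normal((n_months, n_gridpoints))  # stand-in for real SST data

# Isomap replaces Euclidean distances with approximate geodesic distances
# on a k-nearest-neighbour graph before applying classical MDS.
embedding = Isomap(n_neighbors=12, n_components=3)
low_dim = embedding.fit_transform(sst)               # (480, 3) low-dimensional time series
print(low_dim.shape)
```

The resulting low-dimensional time series would then be the input to the backfitting step outlined at the end of the abstract.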
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.
SLocX: predicting subcellular localization of Arabidopsis proteins leveraging gene expression data
(2011)
Despite the growing volume of experimentally validated knowledge about the subcellular localization of plant proteins, a well-performing in silico prediction tool is still a necessity. Existing tools, which employ information derived from protein sequence alone, offer limited accuracy and/or rely on full sequence availability. We explored whether gene expression profiling data can be harnessed to enhance prediction performance. To achieve this, we trained several support vector machines to predict the subcellular localization of Arabidopsis thaliana proteins using sequence-derived information, expression behavior, or a combination of these data and compared their predictive performance through a cross-validation test. We show that gene expression carries information about the subcellular localization not available in sequence information, yielding dramatic benefits for plastid localization prediction, and some notable improvements for other compartments such as the mitochondrion, the Golgi, and the plasma membrane. Based on these results, we constructed a novel subcellular localization prediction engine, SLocX, combining gene expression profiling data with protein sequence-based information. We then validated the results of this engine using an independent test set of annotated proteins and transient expression of GFP fusion proteins. Here, we present the prediction framework and a website of predicted localizations for Arabidopsis. The relatively good accuracy of our prediction engine, even in cases where only partial protein sequence is available (e.g., in sequences lacking the N-terminal region), offers a promising opportunity for similar application to non-sequenced or poorly annotated plant species. Although the prediction scope of our method is currently limited by the availability of expression information on the ATH1 array, we believe that advances in gene expression measurement technology will make our method applicable to all Arabidopsis proteins.
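The core comparison in this abstract, SVMs trained on sequence features, expression features, and their combination, can be sketched with cross-validation. Everything below (feature sets, dimensions, labels) is an illustrative placeholder, not the actual SLocX features or training set:

```python
# A minimal sketch comparing SVMs on sequence-derived, expression-derived,
# and combined features; all data here are random stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_proteins = 200
seq_features = rng.standard_normal((n_proteins, 20))    # e.g., amino-acid composition
expr_features = rng.standard_normal((n_proteins, 50))   # e.g., expression profiles
labels = rng.integers(0, 2, n_proteins)                 # 1 = plastid, 0 = other (hypothetical)

combined = np.hstack([seq_features, expr_features])
for name, X in [("sequence", seq_features),
                ("expression", expr_features),
                ("combined", combined)]:
    scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
    print(name, scores.mean())   # with real data, 'combined' is expected to win
```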
Species respond to environmental change by dynamically adjusting their geographical ranges. Robust predictions of these changes are prerequisites to inform dynamic and sustainable conservation strategies. Correlative species distribution models (SDMs) relate species’ occurrence records to prevailing environmental factors to describe the environmental niche. They have been widely applied in the global change context as they have comparably low data requirements and allow for rapid assessments of potential future species’ distributions. However, due to their static nature, transient responses to environmental change are essentially ignored in SDMs. Furthermore, neither dispersal nor demographic processes and biotic interactions are explicitly incorporated. Therefore, it has often been suggested to link statistical and mechanistic modelling approaches in order to make more realistic predictions of species’ distributions for scenarios of environmental change. In this thesis, I present two different ways of such linkage. (i) Mechanistic modelling can act as a virtual playground for testing statistical models and allows extensive exploration of specific questions. I promote this ‘virtual ecologist’ approach as a powerful evaluation framework for testing sampling protocols, analyses and modelling tools. Also, I employ such an approach to systematically assess the effects of transient dynamics and ecological properties and processes on the prediction accuracy of SDMs for climate change projections. That way, relevant mechanisms are identified that shape the species’ response to altered environmental conditions and which should hence be considered when trying to project species’ distributions through time. (ii) I supplement SDM projections of potential future habitat for black grouse in Switzerland with an individual-based population model. By explicitly considering complex interactions between habitat availability and demographic processes, this allows for a more direct assessment of expected population response to environmental change and associated extinction risks. However, predictions were highly variable across simulations, emphasising the need for principal evaluation tools like sensitivity analysis to assess uncertainty and robustness in dynamic range predictions. Furthermore, I identify data coverage of the environmental niche as a likely cause for contrasting range predictions between SDM algorithms. SDMs may fail to make reliable predictions for truncated and edge niches, meaning that portions of the niche are not represented in the data or niche edges coincide with data limits. Overall, my thesis contributes to an improved understanding of uncertainty factors in predictions of range dynamics and presents ways of dealing with them. Finally, I provide preliminary guidelines for predictive modelling of dynamic species’ response to environmental change, identify key challenges for future research and discuss emerging developments.
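The ‘virtual ecologist’ approach lends itself to a compact illustration: simulate a virtual species with a known environmental response, sample it, fit a simple SDM, and check how well the known truth is recovered. A minimal sketch with invented numbers; logistic regression stands in for the SDM algorithms discussed in the thesis:

```python
# A minimal 'virtual ecologist' sketch: known truth -> virtual survey -> SDM fit.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
env = rng.uniform(-2, 2, size=(5000, 1))                  # one environmental gradient
true_prob = 1 / (1 + np.exp(-(1.5 * env[:, 0] - 0.5)))    # known occurrence probability
occurrence = rng.binomial(1, true_prob)                   # virtual survey data

sdm = LogisticRegression().fit(env, occurrence)           # stand-in SDM
pred = sdm.predict_proba(env)[:, 1]
print("AUC:", roc_auc_score(occurrence, pred))
print("recovered slope:", sdm.coef_[0][0], "intercept:", sdm.intercept_[0])
```

Because the generating process is known, any mismatch between fitted and true parameters can be attributed to the sampling protocol or model, which is exactly the evaluation logic promoted above.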
Background
High blood glucose and diabetes are amongst the conditions causing the greatest losses in years of healthy life worldwide. Therefore, numerous studies aim to identify reliable risk markers for the development of impaired glucose metabolism and type 2 diabetes. However, the molecular basis of impaired glucose metabolism is so far insufficiently understood. The development of so-called 'omics' approaches in recent years promises to identify molecular markers and to further the understanding of the molecular basis of impaired glucose metabolism and type 2 diabetes. Although univariate statistical approaches are often applied, we demonstrate here that the application of multivariate statistical approaches is highly recommended to fully capture the complexity of data gained using high-throughput methods.
Methods
We took blood plasma samples from 172 subjects who participated in the prospective Metabolic Syndrome Berlin Potsdam follow-up study (MESY-BEPO Follow-up). We analysed these samples using gas chromatography coupled with mass spectrometry (GC-MS) and measured 286 metabolites. Furthermore, fasting glucose levels were measured using standard methods at baseline and after an average of six years. We performed correlation analyses and built linear regression models as well as Random Forest regression models to identify metabolites that predict the development of fasting glucose in our cohort.
Results
We found a metabolic pattern consisting of nine metabolites that predicted fasting glucose development with an accuracy of 0.47 in tenfold cross-validation using Random Forest regression. We also showed that adding established risk markers did not improve model accuracy. However, external validation remains desirable. Although not all metabolites belonging to the final pattern have been identified yet, the pattern directs attention to amino acid metabolism, energy metabolism and redox homeostasis.
Conclusions
We demonstrate that metabolites identified using a high-throughput method (GC-MS) perform well in predicting the development of fasting plasma glucose over several years. Notably, it is not a single metabolite but a complex pattern of metabolites that drives the prediction, reflecting the complexity of the underlying molecular mechanisms. This result could only be captured by the application of multivariate statistical approaches. Therefore, we highly recommend the use of statistical methods that capture the complexity of the information provided by high-throughput methods.
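A minimal sketch of the cross-validated Random Forest regression workflow described in this abstract; the metabolite matrix below is random stand-in data (so the scores are meaningless numerically), not the MESY-BEPO measurements:

```python
# Tenfold cross-validated Random Forest regression, then feature importances
# as a route to a small predictive pattern of metabolites.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((172, 286))    # 172 subjects x 286 metabolites (stand-in)
y = rng.standard_normal(172)           # fasting glucose development (stand-in)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(rf, X, y, cv=10, scoring="r2")   # tenfold cross-validation
print("mean cross-validated R^2:", scores.mean())

rf.fit(X, y)
top9 = np.argsort(rf.feature_importances_)[-9:]
print("indices of the nine most important metabolites:", top9)
```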
Predicting the actions of other individuals is crucial for our daily interactions. Recent evidence suggests that the prediction of object-directed arm and full-body actions employs the dorsal premotor cortex (PMd). Thus, the neural substrate involved in action control may also be essential for action prediction. Here, we aimed to address this issue and hypothesized that disrupting the PMd impairs action prediction. Using fMRI-guided coil navigation, rTMS (five pulses, 10 Hz) was applied over the left PMd and over the vertex (control region) while participants observed everyday actions in video clips that were transiently occluded for 1 s. The participants detected manipulations in the time course of the occluded actions, which required them to internally predict the actions during occlusion. To differentiate between functional roles that the PMd could play in prediction, rTMS was either delivered at occluder onset (TMS-early), affecting the initiation of action prediction, or 300 ms later during occlusion (TMS-late), affecting the maintenance of an ongoing prediction. TMS-early over the left PMd produced more prediction errors than TMS-early over the vertex. TMS-late had no effect on prediction performance, suggesting that the left PMd might be involved particularly during the initiation of internally guided action prediction but may play a subordinate role in maintaining an ongoing prediction. These findings open a new perspective on the role of the left PMd in action prediction, which is in line with its functions in action control and in cognitive tasks. In the discussion, the relevance of the left PMd for integrating external action parameters with the observer's motor repertoire is emphasized. Overall, the results are in line with the notion that premotor functions are employed in both action control and action observation.
Aim
Biotic interactions within guilds or across trophic levels have widely been ignored in species distribution models (SDMs). This synthesis outlines the development of species interaction distribution models (SIDMs), which aim to incorporate multispecies interactions at large spatial extents using interaction matrices.
Location
Local to global.
Methods
We review recent approaches for extending classical SDMs to incorporate biotic interactions, and identify some methodological and conceptual limitations. To illustrate possible directions for conceptual advancement we explore three principal ways of modelling multispecies interactions using interaction matrices: simple qualitative linkages between species, quantitative interaction coefficients reflecting interaction strengths, and interactions mediated by interaction currencies. We explain methodological advancements for static interaction data and multispecies time series, and outline methods to reduce complexity when modelling multispecies interactions.
Results
Classical SDMs ignore biotic interactions and recent SDM extensions only include the unidirectional influence of one or a few species. However, novel methods using error matrices in multivariate regression models allow interactions between multiple species to be modelled explicitly with spatial co-occurrence data. If time series are available, multivariate versions of population dynamic models can be applied that account for the effects and relative importance of species interactions and environmental drivers. These methods need to be extended by incorporating the non-stationarity in interaction coefficients across space and time, and are challenged by the limited empirical knowledge on spatio-temporal variation in the existence and strength of species interactions. Model complexity may be reduced by: (1) using prior ecological knowledge to set a subset of interaction coefficients to zero, (2) modelling guilds and functional groups rather than individual species, and (3) modelling interaction currencies and species effect and response traits.
Main conclusions
There is great potential for developing novel approaches that incorporate multispecies interactions into the projection of species distributions and community structure at large spatial extents. Progress can be made by: (1) developing statistical models with interaction matrices for multispecies co-occurrence datasets across large-scale environmental gradients, (2) testing the potential and limitations of methods for complexity reduction, and (3) sampling and monitoring comprehensive spatio-temporal data on biotic interactions in multispecies communities.
Learning a model for the relationship between the attributes and the annotated labels of data examples serves two purposes. Firstly, it enables the prediction of the label for examples without annotation. Secondly, the parameters of the model can provide useful insights into the structure of the data. If the data has an inherent partitioned structure, it is natural to mirror this structure in the model. Such mixture models predict by combining the individual predictions generated by the mixture components, which correspond to the partitions in the data. Often the partitioned structure is latent and has to be inferred when learning the mixture model. Directly evaluating the accuracy of the inferred partition structure is, in many cases, impossible because the ground truth cannot be obtained for comparison. However, it can be assessed indirectly by measuring the prediction accuracy of the mixture model that arises from it. This thesis addresses the interplay between the improvement of predictive accuracy by uncovering latent cluster structure in data and the validation of the estimated structure by measuring the accuracy of the resulting predictive model.

In the application of filtering unsolicited emails, the emails in the training set are latently clustered into advertisement campaigns. Uncovering this latent structure allows filtering of future emails with very low false-positive rates. In order to model the cluster structure, a Bayesian clustering model for dependent binary features is developed in this thesis. Knowing the clustering of emails into campaigns can also aid in uncovering which emails have been sent on behalf of the same network of captured hosts, so-called botnets. This association of emails to networks is another layer of latent clustering. Uncovering this latent structure allows service providers to further increase the accuracy of email filtering and to effectively defend against distributed denial-of-service attacks. To this end, a discriminative clustering model is derived in this thesis that is based on the graph of observed emails. The partitionings inferred using this model are evaluated through their capacity to predict the campaigns of new emails.

Furthermore, when classifying the content of emails, statistical information about the sending server can be valuable. Learning a model that is able to make use of it requires training data that includes server statistics. In order to also use training data where the server statistics are missing, a model that is a mixture over potentially all substitutions thereof is developed.

Another application is to predict the navigation behavior of the users of a website. Here, there is no a priori partitioning of the users into clusters, but to understand different usage scenarios and design different layouts for them, imposing a partitioning is necessary. The presented approach simultaneously optimizes the discriminative as well as the predictive power of the clusters.

Each model is evaluated on real-world data and compared to baseline methods. The results show that explicitly modeling the assumptions about the latent cluster structure leads to improved predictions compared to the baselines. It is beneficial to incorporate a small number of hyperparameters that can be tuned to yield the best predictions in cases where the prediction accuracy cannot be optimized directly.
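As an illustration of the kind of latent clustering over binary features described above, the following sketch fits a plain mixture of independent Bernoulli components with EM. It is a deliberately simplified stand-in: the thesis develops a Bayesian model that additionally handles dependent features, and the random data here carry no real cluster signal.

```python
# A minimal EM sketch for a Bernoulli mixture over binary email features.
import numpy as np

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(300, 40)).astype(float)   # binary feature vectors
K, n, d = 3, *X.shape                                  # K hypothetical campaigns

pi = np.full(K, 1 / K)                                 # mixing weights
theta = rng.uniform(0.3, 0.7, size=(K, d))             # per-cluster Bernoulli params

for _ in range(50):
    # E-step: responsibilities from component log-likelihoods
    log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights and parameters (with light smoothing)
    Nk = resp.sum(axis=0)
    pi = Nk / n
    theta = (resp.T @ X + 1e-2) / (Nk[:, None] + 2e-2)

clusters = resp.argmax(axis=1)                         # inferred campaign assignment
print(np.bincount(clusters, minlength=K))
```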
Despite recent growth of research on the effects of prosocial media, processes underlying these effects are not well understood. Two studies explored theoretically relevant mediators and moderators of the effects of prosocial media on helping. Study 1 examined associations among prosocial- and violent-media use, empathy, and helping in samples from seven countries. Prosocial-media use was positively associated with helping. This effect was mediated by empathy and was similar across cultures. Study 2 explored longitudinal relations among prosocial-video-game use, violent-video-game use, empathy, and helping in a large sample of Singaporean children and adolescents measured three times across 2 years. Path analyses showed significant longitudinal effects of prosocial- and violent-video-game use on prosocial behavior through empathy. Latent-growth-curve modeling for the 2-year period revealed that change in video-game use significantly affected change in helping, and that this relationship was mediated by change in empathy.
Organizations try to gain competitive advantages, and to increase customer satisfaction. To ensure the quality and efficiency of their business processes, they perform business process management. An important part of process management that happens on the daily operational level is process controlling. A prerequisite of controlling is process monitoring, i.e., keeping track of the performed activities in running process instances. Only by process monitoring can business analysts detect delays and react to deviations from the expected or guaranteed performance of a process instance. To enable monitoring, process events need to be collected from the process environment. When a business process is orchestrated by a process execution engine, monitoring is available for all orchestrated process activities. Many business processes, however, do not lend themselves to automatic orchestration, e.g., because of required freedom of action. This situation is often encountered in hospitals, where most business processes are manually enacted. Hence, in practice it is often inefficient or infeasible to document and monitor every process activity. Additionally, manual process execution and documentation is prone to errors, e.g., documentation of activities can be forgotten. Thus, organizations face the challenge of process events that occur, but are not observed by the monitoring environment. These unobserved process events can serve as basis for operational process decisions, even without exact knowledge of when they happened or when they will happen. An exemplary decision is whether to invest more resources to manage timely completion of a case, anticipating that the process end event will occur too late. This thesis offers means to reason about unobserved process events in a probabilistic way. We address decisive questions of process managers (e.g., "when will the case be finished?", or "when did we perform the activity that we forgot to document?") in this thesis. As main contribution, we introduce an advanced probabilistic model to business process management that is based on a stochastic variant of Petri nets. We present a holistic approach to use the model effectively along the business process lifecycle. Therefore, we provide techniques to discover such models from historical observations, to predict the termination time of processes, and to ensure quality by missing data management. We propose mechanisms to optimize configuration for monitoring and prediction, i.e., to offer guidance in selecting important activities to monitor. An implementation is provided as a proof of concept. For evaluation, we compare the accuracy of the approach with that of state-of-the-art approaches using real process data of a hospital. Additionally, we show its more general applicability in other domains by applying the approach on process data from logistics and finance.
We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with a heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded at the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to use the long record of simulated seismicity to address questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.
Ecologists urgently need a better ability to predict how environmental change affects biodiversity. We examine individual-based ecology (IBE), a research paradigm that promises a better predictive ability by using individual-based models (IBMs) to represent ecological dynamics as arising from how individuals interact with their environment and with each other. A key advantage of IBMs is that the basis for predictions (fitness maximization by individual organisms) is more general and reliable than the empirical relationships that other models depend on. Case studies illustrate the usefulness and predictive success of long-term IBE programs. The pioneering programs had three phases: conceptualization, implementation, and diversification. Continued validation of models runs throughout these phases. The breakthroughs that make IBE more productive include standards for describing and validating IBMs, improved and standardized theory for individual traits and behavior, software tools, and generalized instead of system-specific IBMs. We provide guidelines for pursuing IBE and a vision for future IBE research.
This dissertation investigates the working memory mechanism subserving human sentence processing and its relative contribution to processing difficulty as compared to syntactic prediction. Within the last decades, evidence for a content-addressable memory system underlying human cognition in general has accumulated (e.g., Anderson et al., 2004). In sentence processing research, it has been proposed that this general content-addressable architecture is also used for language processing (e.g., McElree, 2000).
Although there is a growing body of evidence from various kinds of linguistic dependencies that is consistent with a general content-addressable memory subserving sentence processing (e.g., McElree et al., 2003; Van Dyke, 2006), the case of reflexive-antecedent dependencies has challenged this view. It has been proposed that in the processing of reflexive-antecedent dependencies, a syntactic-structure-based memory access is used rather than cue-based retrieval within a content-addressable framework (e.g., Sturt, 2003).
Two eye-tracking experiments on Chinese reflexives were designed to tease apart accounts assuming a syntactic-structure-based memory access mechanism from cue-based retrieval (implemented in ACT-R, as proposed by Lewis and Vasishth, 2005).
In both experiments, interference effects were observed from noun phrases which syntactically do not qualify as the reflexive's antecedent but match the animacy requirement the reflexive imposes on its antecedent. These results are interpreted as evidence against a purely syntactic-structure based memory access. However, the exact pattern of effects observed in the data is only partially compatible with the Lewis and Vasishth cue-based parsing model.
Therefore, an extension of the Lewis and Vasishth model is proposed. Two principles are added to the original model, namely 'cue confusion' and 'distractor prominence'.
Although interference effects are generally interpreted in favor of a content-addressable memory architecture, an alternative explanation for interference effects in reflexive processing has been proposed which, crucially, might reconcile interference effects with a structure-based account.
It has been argued that interference effects do not necessarily reflect cue-based retrieval interference in a content-addressable memory but might equally well be accounted for by interference effects which have already occurred at the moment of encoding the antecedent in memory (Dillon, 2011).
Three experiments (eye-tracking and self-paced reading) on German reflexives and Swedish possessives were designed to tease apart cue-based retrieval interference from encoding interference. The results of all three experiments suggest that there is no evidence that encoding interference affects the retrieval of a reflexive's antecedent.
Taken together, these findings suggest that the processing of reflexives can be explained with the same cue-based retrieval mechanism that has been invoked to explain syntactic dependency resolution in a range of other structures. This supports the view that the language processing system is located within a general cognitive architecture, with a general-purpose content-addressable working memory system operating on linguistic expressions.
Finally, two experiments (self-paced reading and eye-tracking) using Chinese relative clauses were conducted to determine the relative contribution to sentence processing difficulty of working-memory processes as compared to syntactic prediction during incremental parsing.
Chinese has the cross-linguistically rare property of being a language with subject-verb-object word order and pre-nominal relative clauses. This property leads to opposing predictions of expectation-based accounts and memory-based accounts with respect to the relative processing difficulty of subject vs. object relatives.
Previous studies showed contradictory results, which has been attributed to different kinds of local ambiguities confounding the materials (Lin and Bever, 2011). The two experiments presented here are the first to compare Chinese relative clauses in syntactically unambiguous contexts.
The results of both experiments were consistent with the predictions of the expectation-based account of sentence processing but not with the memory-based account. From these findings, I conclude that any theory of human sentence processing needs to take into account the power of predictive processes unfolding in the human mind.
Predicting Paris: Multi-Method Approaches to Forecast the Outcomes of Global Climate Negotiations
(2016)
We examine the negotiations held under the auspices of the United Nations Framework Convention on Climate Change in Paris, December 2015. Prior to these negotiations, there was considerable uncertainty about whether an agreement would be reached, particularly given that the world’s leaders failed to do so in the 2009 negotiations held in Copenhagen. Amid this uncertainty, we applied three different methods to predict the outcomes: an expert survey and two negotiation simulation models, namely the Exchange Model and the Predictioneer’s Game. After the event, these predictions were assessed against the coded texts that were agreed in Paris. The evidence suggests that combining experts’ predictions to reach a collective expert prediction makes for significantly more accurate predictions than individual experts’ predictions. The differences in performance between the two negotiation simulation models were not statistically significant.
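The headline finding, that a collective expert prediction beats individual experts, reflects the familiar error-cancellation argument, which a toy simulation can illustrate. The numbers below are invented, not the study's survey data:

```python
# Toy illustration: independent errors partially cancel in the average,
# so the collective forecast tends to beat the typical individual expert.
import numpy as np

rng = np.random.default_rng(5)
truth = 0.7                                    # hypothetical coded outcome
experts = truth + rng.normal(0, 0.3, size=25)  # 25 noisy individual predictions

individual_error = np.abs(experts - truth).mean()
collective_error = abs(experts.mean() - truth)
print(f"mean individual error:   {individual_error:.3f}")
print(f"collective (mean) error: {collective_error:.3f}")
```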
Background:
Endomyocardial biopsy is considered the gold standard in patients with suspected myocarditis. We aimed to evaluate the impact of bioptic findings on the prediction of successful return to work.
Methods:
In 1153 patients (48.9 ± 12.4 years, 66.2% male), who were hospitalized due to symptoms of left heart failure between 2005 and 2012, an endomyocardial biopsy was performed. Routine clinical and laboratory data, sociodemographic parameters, and noninvasive and invasive cardiac variables including endomyocardial biopsy were registered. Data were linked with return to work data from the German statutory pension insurance program and analyzed by Cox regression.
Results:
A total of 220 patients had a complete data set of hospital and insurance information. Three quarters of patients were virus-positive (54.2% parvovirus B19; other or mixed infection, 16.7%). Mean invasive left ventricular ejection fraction was 47.1% ± 18.6% (left ventricular ejection fraction <45% in 46.3%). Return to work was achieved after a mean interval of 168.8 ± 347.7 days in 220 patients (after 6, 12, and 24 months in 61.3%, 72.2%, and 76.4%). In multivariate regression analysis, only age (per 10 years: hazard ratio, 1.27; 95% confidence interval, 1.10–1.46; p = 0.001) and left ventricular ejection fraction (per 5% increase: hazard ratio, 1.07; 95% confidence interval, 1.03–1.12; p = 0.002) were associated with an increased probability of return to work, whereas elevated work intensity in congestive heart failure (heavy vs. light: hazard ratio, 0.58; 95% confidence interval, 0.34–0.99; p = 0.049) was associated with a decreased probability. None of the endomyocardial biopsy–derived parameters was significantly associated with return to work, either in the total group or in the subgroup of patients with biopsy-proven myocarditis.
Conclusion:
Added to established predictors, bioptic data demonstrated no additional impact on return-to-work probability. Thus, the socio-medical evaluation of patients with suspected myocarditis remains an individually oriented process based primarily on clinical and functional parameters.
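A minimal sketch of the Cox regression setup described in the Methods, using the lifelines package; the data frame below is synthetic stand-in data with assumed covariate names and distributions, not the study cohort:

```python
# Cox proportional-hazards regression of time to return to work on covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 220
df = pd.DataFrame({
    "age": rng.normal(49, 12, n),             # years (stand-in)
    "lvef": rng.normal(47, 19, n),            # ejection fraction, % (stand-in)
    "days_to_rtw": rng.exponential(170, n),   # time to return to work
    "returned": rng.integers(0, 2, n),        # 1 = event observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_rtw", event_col="returned")
cph.print_summary()   # hazard ratios and confidence intervals per covariate
```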
The Limpopo Basin in southern Africa is prone to droughts which affect the livelihood of millions of people in South Africa, Botswana, Zimbabwe and Mozambique. Seasonal drought early warning is thus vital for the whole region. In this study, the predictability of hydrological droughts during the main runoff period from December to May is assessed using statistical approaches. Three methods (multiple linear models, artificial neural networks, random forest regression trees) are compared in terms of their ability to forecast streamflow with up to 12 months of lead time. The following four main findings result from the study.
1. There are stations in the basin at which standardised streamflow is predictable with lead times of up to 12 months. The results show large inter-station differences in forecast skill but reach a cross-validated coefficient of determination as high as 0.73.
2. A large range of potential predictors is considered in this study, comprising well-established climate indices, customised teleconnection indices derived from sea surface temperatures and antecedent streamflow as a proxy of catchment conditions. El Niño and customised indices, representing sea surface temperature in the Atlantic and Indian oceans, prove to be important teleconnection predictors for the region. Antecedent streamflow is a strong predictor in small catchments (with a median of 42% explained variance), whereas teleconnections exert a stronger influence in large catchments.
3. Multiple linear models show the best forecast skill in this study and the greatest robustness, compared to artificial neural networks and random forest regression trees, even though the latter are capable of representing nonlinear relationships.
4. Employed in early warning, the models can be used to forecast a specific drought level. Even if the coefficient of determination is low, the forecast models have a skill better than a climatological forecast, which is shown by analysis of receiver operating characteristics (ROCs). Seasonal statistical forecasts in the Limpopo show promising results, and thus it is recommended to employ them as complementary to existing forecasts in order to strengthen preparedness for droughts.
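The ROC-based skill check in point 4 can be sketched by comparing a forecast of a binary drought level against a climatological (no-skill) baseline. All data below are synthetic placeholders:

```python
# ROC skill check: model forecast vs. climatological baseline.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n_years = 40
drought = rng.integers(0, 2, n_years)                     # observed drought-level hits
forecast = drought * 0.6 + rng.uniform(0, 0.4, n_years)   # imperfect model forecast
climatology = np.full(n_years, drought.mean())            # constant, no-skill forecast

print("model AUC:      ", roc_auc_score(drought, forecast))
print("climatology AUC:", roc_auc_score(drought, climatology))  # 0.5 by construction
```

A model AUC above 0.5 indicates skill beyond climatology even when the coefficient of determination of the underlying regression is low, which is the point made above.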
In the present work, we use symbolic regression for automated modeling of dynamical systems. Symbolic regression is a powerful and general method suitable for data-driven identification of mathematical expressions. In particular, the structure and parameters of those expressions are identified simultaneously.
We consider two main variants of symbolic regression: sparse regression-based and genetic programming-based symbolic regression. Both are applied to identification, prediction and control of dynamical systems.
We introduce a new methodology for the data-driven identification of nonlinear dynamics for systems undergoing abrupt changes. Building on a sparse regression algorithm derived earlier, the model after the change is defined as a minimum update with respect to a reference model of the system identified prior to the change. The technique is successfully exemplified on the chaotic Lorenz system and the van der Pol oscillator. Issues such as computational complexity, robustness against noise and requirements with respect to data volume are investigated.
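The sparse-regression variant rests on sequential thresholded least squares over a library of candidate terms. A minimal sketch on simulated Lorenz data follows; note that the thesis's abrupt-change method adds a minimum-update constraint on top of this baseline, which is not shown here:

```python
# Sequential thresholded least squares (sparse regression) on Lorenz data.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=np.linspace(0, 20, 4000))
X = sol.y.T
dX = np.array([lorenz(0, s) for s in X])      # exact derivatives, for simplicity

# Candidate library: [1, x, y, z, x^2, xy, xz, y^2, yz, z^2]
x, y, z = X.T
Theta = np.column_stack([np.ones(len(X)), x, y, z,
                         x * x, x * y, x * z, y * y, y * z, z * z])

Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]    # initial dense fit
for _ in range(10):                               # iterate thresholding + refit
    small = np.abs(Xi) < 0.1
    Xi[small] = 0.0
    for k in range(3):                            # refit each equation on its support
        big = ~small[:, k]
        Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
print(np.round(Xi, 2))   # sparse coefficients recover the Lorenz equations
```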
We show how symbolic regression can be used for time series prediction. Again, issues such as robustness against noise and convergence rate are investigated using the harmonic oscillator as a toy problem. In combination with embedding, we demonstrate the prediction of a propagating front in coupled FitzHugh-Nagumo oscillators. Additionally, we show how numerical weather predictions can be enhanced to produce commercial forecasts of power production at green-energy power plants.
We employ symbolic regression for synchronization control in coupled van der Pol oscillators. Different coupling topologies are investigated. We address issues such as plausibility and stability of the control laws found. The toolkit has been made open source and is used in turbulence control applications.
Genetic-programming-based symbolic regression is very versatile and can be adapted to many optimization problems. The heuristic algorithm allows for cost-efficient optimization of complex tasks.
We emphasize the ability of symbolic regression to yield white-box models. In contrast to black-box models, such models are accessible and interpretable, which allows the use of established tool chains.
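As a pointer to how genetic-programming-based symbolic regression yields such white-box models in practice, here is a minimal sketch using the third-party gplearn package (assumed available) on a toy target; the data and hyperparameters are illustrative, not those used in the thesis:

```python
# Genetic-programming symbolic regression: evolve an explicit expression.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(8)
X = rng.uniform(-2, 2, size=(200, 2))
y = X[:, 0] ** 2 - 0.5 * X[:, 1] + 1.0        # hidden ground-truth expression

est = SymbolicRegressor(population_size=1000, generations=20,
                        function_set=("add", "sub", "mul"),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print(est._program)   # the evolved white-box expression approximating the target
```

Because the result is an explicit formula rather than an opaque function, it can be inspected, simplified, and fed into established analysis tool chains, which is the advantage stressed above.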