Previous research has found that comprehenders sometimes predict information that is grammatically unlicensed by sentence constraints. An open question is why such grammatically unlicensed predictions occur. We examined the possibility that unlicensed predictions arise in situations of information conflict, for instance when comprehenders try to predict upcoming words while simultaneously building dependencies with previously encountered elements in memory.
German possessive pronouns are a good testing ground for this hypothesis because they encode two grammatically distinct agreement dependencies: a retrospective one between the possessive and its previously mentioned referent, and a prospective one between the possessive and its following nominal head. In two visual world eye-tracking experiments, we estimated the onset of predictive effects in participants' fixations.
The results showed that the retrospective dependency affected resolution of the prospective dependency by shifting the onset of predictive effects.
We attribute this effect to an interaction between predictive and memory retrieval processes.
Language processing requires both memory retrieval, to integrate current input with previous context, and prediction of upcoming input. We propose that prediction and retrieval are two sides of the same coin, i.e. functionally the same, as they both activate memory representations. Under this assumption, memory retrieval and prediction should interact: retrieval interference can only occur at a word that triggers retrieval, and a fully predicted word would not do so. The present study investigated the proposed interaction with event-related potentials (ERPs) during the processing of sentence pairs in German. Predictability was measured via cloze probability. Memory retrieval was manipulated via the position of a distractor inducing proactive or retroactive similarity-based interference. Linear mixed model analyses provided evidence for the hypothesised interaction in a broadly distributed negativity, which we discuss in relation to the interference ERP literature. Our finding supports the proposal that memory retrieval and prediction are functionally the same.
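Cloze probability, the predictability measure used above, is simply the share of norming participants who complete a sentence frame with a given word. A minimal sketch with hypothetical norming data (the frame and the counts below are invented for illustration, not taken from the study):

```python
from collections import Counter

def cloze_probability(completions, target):
    """Cloze probability: share of participants who produced `target`
    as their completion of a sentence frame."""
    counts = Counter(completions)
    return counts[target] / len(completions)

# Hypothetical norming data for a frame like
# "The day was breezy, so the boy went outside to fly a ..."
completions = ["kite"] * 18 + ["plane"] * 1 + ["drone"] * 1
print(cloze_probability(completions, "kite"))  # 0.9
```

A high-cloze continuation like "kite" here would count as strongly predicted; low-cloze continuations are where retrieval is expected to matter.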
Background and objectives
AKI requiring dialysis initiation is a common complication of coronavirus disease 2019 (COVID-19) among hospitalized patients. However, dialysis supplies and personnel are often limited.
Design, setting, participants, & measurements
Using data from adult patients hospitalized with COVID-19 at five hospitals of the Mount Sinai Health System who were admitted between March 10 and December 26, 2020, we developed and validated several models (logistic regression, least absolute shrinkage and selection operator [LASSO], random forest, and eXtreme Gradient Boosting [XGBoost; with and without imputation]) for predicting treatment with dialysis or death at various time horizons (1, 3, 5, and 7 days) after hospital admission. Patients admitted to the Mount Sinai Hospital were used for internal validation, whereas the other hospitals formed part of the external validation cohort. Features included demographics, comorbidities, and laboratory values and vital signs within 12 hours of hospital admission.
Results
A total of 6093 patients (2442 in training and 3651 in external validation) were included in the final cohort. Of the different modeling approaches used, XGBoost without imputation had the highest area under the receiver operating characteristic (AUROC) curve on internal validation (range of 0.93-0.98) and area under the precision-recall curve (AUPRC; range of 0.78-0.82) for all time points. XGBoost without imputation also had the highest test parameters on external validation (AUROC range of 0.85-0.87, and AUPRC range of 0.27-0.54) across all time windows. XGBoost without imputation outperformed all models with higher precision and recall (mean difference in AUROC of 0.04; mean difference in AUPRC of 0.15). Features of creatinine, BUN, and red cell distribution width were major drivers of the model's prediction.
Conclusions
An XGBoost model without imputation for prediction of a composite outcome of either death or dialysis in patients positive for COVID-19 had the best performance, as compared with standard and other machine learning models.
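The two evaluation metrics reported in these clinical prediction studies, AUROC and AUPRC, can be computed directly from predicted risk scores. A self-contained sketch on toy labels and scores (not the study's data): AUROC uses the rank-based Mann-Whitney formulation, and AUPRC is approximated here by average precision.

```python
def auroc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    probability that a random positive outscores a random negative
    (ties count as half)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auprc(y_true, scores):
    """Area under the precision-recall curve, approximated by average
    precision: precision averaged at each true-positive rank."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            ap += tp / rank
    return ap / sum(y_true)

# Toy example: a model that ranks both positives above both negatives
print(auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
print(auprc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
```

With imbalanced outcomes, as in the external cohorts above, AUPRC is typically far below AUROC, which is why both are reported.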
Background:
COVID-19 has infected millions of people worldwide and is responsible for several hundred thousand fatalities. The COVID-19 pandemic has necessitated thoughtful resource allocation and early identification of high-risk patients. However, effective methods to meet these needs are lacking.
Objective:
The aims of this study were to analyze the electronic health records (EHRs) of patients who tested positive for COVID-19 and were admitted to hospitals in the Mount Sinai Health System in New York City; to develop machine learning models for making predictions about the hospital course of the patients over clinically meaningful time horizons based on patient characteristics at admission; and to assess the performance of these models at multiple hospitals and time points.
Methods:
We used Extreme Gradient Boosting (XGBoost) and baseline comparator models to predict in-hospital mortality and critical events at time windows of 3, 5, 7, and 10 days from admission. Our study population included harmonized EHR data from five hospitals in New York City for 4098 COVID-19-positive patients admitted from March 15 to May 22, 2020. The models were first trained on patients from a single hospital (n=1514) before or on May 1, externally validated on patients from four other hospitals (n=2201) before or on May 1, and prospectively validated on all patients after May 1 (n=383). Finally, we established model interpretability to identify and rank variables that drive model predictions.
Results:
Upon cross-validation, the XGBoost classifier outperformed baseline models, with an area under the receiver operating characteristic curve (AUC-ROC) for mortality of 0.89 at 3 days, 0.85 at 5 and 7 days, and 0.84 at 10 days. XGBoost also performed well for critical event prediction, with an AUC-ROC of 0.80 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. In external validation, XGBoost achieved an AUC-ROC of 0.88 at 3 days, 0.86 at 5 days, 0.86 at 7 days, and 0.84 at 10 days for mortality prediction. Similarly, the unimputed XGBoost model achieved an AUC-ROC of 0.78 at 3 days, 0.79 at 5 days, 0.80 at 7 days, and 0.81 at 10 days. Trends in performance on prospective validation sets were similar. At 7 days, acute kidney injury on admission, elevated LDH, tachypnea, and hyperglycemia were the strongest drivers of critical event prediction, while higher age, anion gap, and C-reactive protein were the strongest drivers of mortality prediction.
Conclusions:
We externally and prospectively trained and validated machine learning models for mortality and critical events for patients with COVID-19 at different time horizons. These models identified at-risk patients and uncovered underlying relationships that predicted outcomes.
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for.
Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative view on the functioning of lake ecosystems. We end with a set of specific recommendations that may be of help in the further development of lake ecosystem models.
Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M6.6 Norcia earthquake in Italy, Wikelski et al. (2020; Seismol Res Lett, 89, 1238) conclude that anomalous animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art methods of statistics, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns, and consequently, the observed anomalies in animal activity do not have any forecasting power.
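The null-model logic invoked here can be illustrated generically: count how often an activity anomaly is followed by an earthquake within some window, then compare that count against random placements of the anomalies. Everything below (data, window, permutation scheme) is an invented sketch of a permutation test, not the authors' exact procedure:

```python
import random

def hits(anomaly_times, quake_times, window):
    """Number of anomalies followed by at least one quake within `window`."""
    return sum(any(0 < q - a <= window for q in quake_times) for a in anomaly_times)

def permutation_pvalue(anomaly_times, quake_times, window, t_max, n_perm=2000, seed=1):
    """Fraction of uniformly random anomaly placements over [0, t_max]
    that score at least as many hits as the observed anomalies."""
    rng = random.Random(seed)
    observed = hits(anomaly_times, quake_times, window)
    k = len(anomaly_times)
    exceed = sum(
        hits([rng.uniform(0, t_max) for _ in range(k)], quake_times, window) >= observed
        for _ in range(n_perm)
    )
    return exceed / n_perm
```

A large p-value from such a test means the apparent anticipatory pattern is indistinguishable from chance, which is the paper's conclusion.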
Increased N400 amplitudes on indefinite articles (a/an) incompatible with expected nouns were initially taken as strong evidence for probabilistic pre-activation of phonological word forms, but have recently been intensely debated because they have proven difficult to replicate. Here, these effects are simulated using a neural network model of sentence comprehension that we previously used to simulate a broad range of empirical N400 effects. The model produces the effects when the cue validity of the articles concerning upcoming noun meaning in the learning environment is high, but fails to produce the effects when the cue validity of the articles is low due to adjectives presented between articles and nouns during training. These simulations provide insight into one of the factors potentially contributing to the small size of the effects in empirical studies and generate predictions for cross-linguistic differences in article-induced N400 effects based on articles' cue validity. The model accounts for article-induced N400 effects without assuming pre-activation of word forms, and instead simulates these effects as the stimulus-induced change in a probabilistic representation of meaning corresponding to an implicit semantic prediction error.
Bayesian geomorphology (2020)
The rapidly growing amount and diversity of data are confronting us more than ever with the need to make informed predictions under uncertainty. The adverse impacts of climate change and natural hazards also motivate our search for reliable predictions. The range of statistical techniques that geomorphologists use to tackle this challenge has been growing, but rarely involves Bayesian methods. Instead, many geomorphic models rely on estimated averages that largely miss out on the variability of form and process. Yet seemingly fixed estimates of channel heads, sediment rating curves or glacier equilibrium lines, for example, are all prone to uncertainties. Neighbouring scientific disciplines such as physics, hydrology or ecology have readily embraced Bayesian methods to fully capture and better explain such uncertainties, as the necessary computational tools have advanced greatly. The aim of this article is to introduce the Bayesian toolkit to scientists concerned with Earth surface processes and landforms, and to show how geomorphic models might benefit from probabilistic concepts. I briefly review the use of Bayesian reasoning in geomorphology, and outline the corresponding variants of regression and classification in several worked examples.
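As a concrete taste of the toolkit the article advocates, consider a conjugate Beta-binomial update for a proportion, e.g. how often predicted channel-head locations are confirmed in the field. The numbers are invented for illustration; the point is that the posterior carries its own uncertainty instead of a seemingly fixed estimate.

```python
import math

def beta_posterior(successes, trials, a_prior=1.0, b_prior=1.0):
    """Posterior mean and standard deviation of a proportion under a
    Beta(a_prior, b_prior) prior and a binomial likelihood."""
    a = a_prior + successes
    b = b_prior + trials - successes
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# Hypothetical field check: 18 of 24 mapped channel heads confirmed,
# starting from a flat Beta(1, 1) prior
mean, sd = beta_posterior(18, 24)
```

More data sharpens the posterior; a different prior encodes prior field knowledge, which is exactly the kind of reasoning the worked examples in the article develop for regression and classification.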
We propose a reduced dynamical system describing the coupled evolution of fluid flow and magnetic field at the top of the Earth's core between the years 1900 and 2014. The flow evolution is modeled with a first-order autoregressive process, while the magnetic field obeys the classical frozen-flux equation. An ensemble Kalman filter algorithm serves to constrain the dynamics with the geomagnetic field and its secular variation given by the COV-OBS.x1 model. Using a large ensemble with 40,000 members provides meaningful statistics, including reliable error estimates. The model highlights two distinct flow scales. Slowly varying large-scale elements include the already documented eccentric gyre. Localized short-lived structures include distinctly ageostrophic features such as the high-latitude polar jet in the Northern Hemisphere. Comparisons with independent observations of length-of-day variations not only validate the flow estimates but also suggest an acceleration of the geostrophic flows over the last century. Hindcasting tests show that our model outperforms simpler prediction baselines (linear extrapolation and stationary flow). The predictability limit, of about 2,000 years for the magnetic dipole component, is mostly determined by the random fast-varying dynamics of the flow and much less by the geomagnetic data quality or lack of small-scale information.
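The core of this assimilation machinery is the ensemble Kalman filter analysis step. A minimal scalar sketch in the stochastic (perturbed-observation) variant, with a directly observed state; all numbers are invented and far simpler than the 40,000-member geomagnetic setting:

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_sd, rng):
    """One perturbed-observation EnKF analysis step for a scalar state
    that is observed directly (observation operator H = 1)."""
    var_f = statistics.variance(ensemble)   # forecast (prior) spread
    gain = var_f / (var_f + obs_sd ** 2)    # Kalman gain
    # Each member is nudged toward its own perturbed copy of the observation
    return [x + gain * (obs + rng.gauss(0, obs_sd) - x) for x in ensemble]

rng = random.Random(0)
forecast = [rng.gauss(0.0, 1.0) for _ in range(500)]  # prior ensemble
analysis = enkf_update(forecast, 5.0, 0.5, rng)       # assimilate obs = 5.0
```

The analysis ensemble mean moves toward the observation and its spread shrinks; with a large ensemble, that spread is precisely the kind of error estimate the study exploits.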
Droughts in tropical South America have an imminent and severe impact on the Amazon rainforest and affect the livelihoods of millions of people. Extremely dry conditions in Amazonia have been previously linked to sea surface temperature (SST) anomalies in the adjacent tropical oceans. Although the sources and impacts of such droughts have been widely studied, establishing reliable multi-year lead statistical forecasts of their occurrence is still an ongoing challenge. Here, we further investigate the relationship between SST and rainfall anomalies using a complex network approach. We identify four ocean regions which exhibit the strongest overall SST correlations with central Amazon rainfall, including two particularly prominent regions in the northern and southern tropical Atlantic. Based on the time-dependent correlation between SST anomalies in these two regions alone, we establish a new early-warning method for droughts in the central Amazon basin and demonstrate its robustness in hindcasting past major drought events with lead-times up to 18 months.
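The early-warning principle, tracking the time-dependent correlation between the two Atlantic SST regions and flagging threshold crossings, can be sketched generically. The window length, threshold, and series below are invented, and the paper's actual rule is more elaborate than this:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drought_warnings(sst_north, sst_south, window, threshold):
    """Indices (end of each rolling window) where the SST correlation
    between the two regions reaches the warning threshold."""
    return [
        t - 1
        for t in range(window, len(sst_north) + 1)
        if pearson(sst_north[t - window:t], sst_south[t - window:t]) >= threshold
    ]
```

Because the SST signal leads the rainfall response, a warning raised this way can precede the drought itself, which is what gives the hindcasts their multi-month lead times.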