Making sense of the world
(2020)
For human infants, the first years after birth are a period of intense exploration: getting to understand their own competencies in interaction with a complex physical and social environment. In contemporary neuroscience, the predictive-processing framework has been proposed as a general working principle of the human brain: the optimization of predictions about the consequences of one's own actions and about sensory inputs from the environment. However, the predictive-processing framework has rarely been applied to infancy research. We argue that a predictive-processing framework may provide a unifying perspective on several phenomena of infant development and learning that may seem unrelated at first sight. These phenomena include statistical learning principles, infants' motor and proprioceptive learning, and infants' basic understanding of their physical and social environment. We discuss how a predictive-processing perspective can advance the understanding of infants' early learning processes in theory, research, and application.
Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control of the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device, and raw gaze data were transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.
From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including the experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative event-predictive, and event boundary predictive models, and, meanwhile, (ii) choose actions with the objective to minimize predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active inference principle chooses actions by aiming at minimizing expected future uncertainty. We train our system on multiple object-manipulation events. As a result, the generation of goal-anticipatory gaze shifts emerges while learning about object manipulations: the model starts fixating the inferred goal already at the start of an observed event after having sampled some experience with possible events and when a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) that is performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.
Analysis of physicians' probability estimates of a medical outcome based on a sequence of events
(2022)
IMPORTANCE
The probability of a conjunction of 2 independent events is the product of the probabilities of the 2 components and therefore cannot exceed the probability of either component; violation of this basic law is called the conjunction fallacy. A common medical decision-making scenario involves estimating the probability of a final outcome resulting from a sequence of independent events; however, little is known about physicians' ability to accurately estimate the overall probability of success in these situations.
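The multiplication rule described above can be sketched in a few lines of code. The probabilities below are hypothetical illustrations, not values from the survey; the function names are chosen for this sketch only.

```python
def conjunction_probability(p_a: float, p_b: float) -> float:
    """Probability that two independent events both occur: P(A and B) = P(A) * P(B)."""
    return p_a * p_b

def is_coherent(p_a: float, p_b: float, p_joint_estimate: float) -> bool:
    """A joint estimate is mathematically coherent only if it does not
    exceed the probability of either component event."""
    return p_joint_estimate <= min(p_a, p_b)

# Hypothetical example: each step is judged 60% likely,
# but the whole two-step sequence is judged 70% likely.
p_step1, p_step2 = 0.6, 0.6
print(round(conjunction_probability(p_step1, p_step2), 2))  # 0.36
print(is_coherent(p_step1, p_step2, 0.7))  # False: a conjunction-fallacy response
```

The check mirrors the logic of the study: the correct joint probability (0.36 here) is strictly lower than either component, so any joint estimate above 0.6 is incoherent.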
OBJECTIVE
To ascertain whether physicians are able to correctly estimate the overall probability of a medical outcome resulting from 2 independent events.
DESIGN, SETTING, AND PARTICIPANTS
This survey study consisted of 3 separate substudies, in which 215 physicians were asked via internet-based survey to estimate the probability of success of each of 2 components of a diagnostic or prognostic sequence as well as the overall probability of success of the 2-step sequence. Substudy 1 was performed from April 2 to 4, 2021, substudy 2 from November 2 to 11, 2021, and substudy 3 from May 13 to 19, 2021. All physicians were board certified or board eligible in the primary specialty germane to the substudy (ie, obstetrics and gynecology for substudies 1 and 3 and pulmonology for substudy 2), were recruited from a commercial survey service, and volunteered to participate in the study.
EXPOSURES
Case scenarios presented in an online survey.
MAIN OUTCOMES AND MEASURES
Respondents were asked to provide their demographic information in addition to 3 probability estimates. The first substudy included a scenario describing a brow presentation discovered during labor; the 2 conjuncts were the probabilities that the brow presentation would resolve and that the delivery would be vaginal. The second substudy involved a diagnostic evaluation of an incidentally discovered pulmonary nodule; the 2 conjuncts were the probabilities that the patient had a malignant condition and that a technically successful transthoracic needle biopsy would reveal a malignant condition. The third substudy included a modification of the first substudy in an attempt to debias the conjunction fallacy prevalent in the first substudy. Respondents' own probability estimates of the individual events were used to calculate the mathematically correct conjunctive probability.
RESULTS
Among 215 respondents, the mean (SD) age was 54.0 (9.5) years; 142 respondents (66.0%) were male. Data on race and ethnicity were not collected. A total of 168 physicians (78.1%) estimated the probability of the 2-step sequence to be greater than the probability of at least 1 of the 2 component events. Compared with the product of their 2 estimated components, respondents overestimated the combined probability by 12.8% (95% CI, 9.6%-16.1%; P < .001) in substudy 1, 19.8% (95% CI, 16.6%-23.0%; P < .001) in substudy 2, and 18.0% (95% CI, 13.4%-22.5%; P < .001) in substudy 3, results that were mathematically incoherent (ie, formally illogical and mathematically incorrect).
CONCLUSIONS AND RELEVANCE
In this survey study of 215 physicians, respondents consistently overestimated the combined probability of 2 events compared with the probability calculated from their own estimates of the individual events. This biased estimation, consistent with the conjunction fallacy, may have substantial implications for diagnostic and prognostic decision-making.
The presence of task-irrelevant sound disrupts short-term memory for serial information. Recent studies found that enhanced perceptual task-encoding load (static visual noise added to target items) reduces the disruptive effect of an auditory deviant but does not affect the task-specific interference by changing-state sound, indicating that the deviation effect may be more susceptible to attentional control. This study aimed to further specify the role of attentional control in shielding against different types of auditory distraction, examining speech and nonspeech distractors presented in laboratory and web-based experiments. To further elucidate the role of controlled processes, we tested whether the detrimental effects of distractor sounds, and their modulation by attentional control, reach participants' awareness. We found that changing-state sound and auditory deviants in steady-state sound equally affected both objective recall performance and metacognitive confidence judgments but did not affect the accuracy of confidence judgments. Most importantly, across four experiments, an increase of task load (visual degradation of the to-be-remembered items) did not reduce either type of auditory distraction. A close replication of the original modulation of the deviation effect by perceptual task load (in an online environment) even revealed a stronger deviation effect at high task load, suggesting that the manipulation may have influenced cognitive load and the ability to control distractor interference in memory. In line with a unitary account of auditory distraction, the results suggest that although both types of distraction reach metacognitive awareness, they may be equally unrelated to perceptual load and the availability of attentional resources.
Public Significance Statement: Our ability to hold information in short-term memory suffers in the presence of background sound, but it is unclear to what extent auditory distraction depends on attentional control and metacognitive monitoring. This study reassessed a finding whereby the diversion of attention by deviant sounds is reduced when the focal task becomes more difficult to process (via perceptual degradation). A series of experiments showed that both the effect of auditory deviants and the interference by changing-state sound are largely resistant to a manipulation of task load, indicating that distraction is not susceptible to attentional control. Nevertheless, participants appeared to be well aware of the detrimental sound effects on performance, as reflected in metacognitive confidence judgments. The findings have important implications for theoretical accounts of auditory distraction, indicating that disruption is attributable to automatic attentional capture, which cannot be controlled even though we are aware of it.
There is a debate about whether and why we overestimate addition and underestimate subtraction results (Operational Momentum or OM effect). Spatial-attentional accounts of OM compete with a model which postulates that OM reflects a weighted combination of multiple arithmetic heuristics and biases (AHAB). This study addressed this debate with the theoretically diagnostic distinction between zero problems (e.g., 3 + 0, 3 - 0) and non-zero problems (e.g., 2 + 1, 4 - 1), because AHAB, in contrast to all other accounts, uniquely predicts reverse OM for the latter problem type. In two tests (line-length production and time production), participants indeed produced shorter lines and underestimated time intervals in non-zero additions compared with subtractions. This predicted interaction between operation and problem type extends OM to non-spatial magnitudes and highlights the strength of AHAB regarding different problem types and modalities during the mental manipulation of magnitudes. These findings also suggest that OM reflects methodological details, whereas reverse OM is the more representative behavioural signature of mental arithmetic.
Keeping the breath in mind
(2021)
Scientific interest in brain and body interactions has been surging in recent years. One fundamental yet underexplored aspect of these interactions is the link between the respiratory and the nervous systems. In this article, we give an overview of the emerging literature on how respiration modulates neural, cognitive and emotional processes. Moreover, we present a perspective linking respiration to the free-energy principle. We frame volitional modulation of the breath as an active inference mechanism in which sensory evidence is recontextualized to alter interoceptive models. We further propose that respiration-entrained gamma oscillations may reflect the propagation of prediction errors from the sensory level up to cortical regions in order to alter higher-level predictions. Accordingly, controlled breathing emerges as an easily accessible tool for emotional, cognitive, and physiological regulation.