(2020)
In two eye-tracking experiments, we investigated the processing of information about the phonological consistency of Chinese phonograms during sentence reading. In Experiment 1, we adopted the error disruption paradigm in silent reading and found significant effects of phonological consistency and homophony in foveal vision, but only in a late processing stage. When oral reading was added in Experiment 2, both effects shifted to earlier indices of parafoveal processing. Specifically, low-consistency characters led to a stronger homophonic foveal recovery effect in Experiment 1 and stronger homophonic preview benefits in Experiment 2. These findings suggest that phonological consistency information can be obtained during sentence reading and that, compared to low-consistency previews, high-consistency previews are processed faster, which leads to greater interference with the recognition of target characters.
"Left" and "right" coordinates control our spatial behavior and even influence abstract thought. For number concepts, horizontal spatial-numerical associations (SNAs) have been widely documented: we associate few with left and many with right. Importantly, increments are universally coded on the right side, even in preverbal humans and nonhuman animals, thus questioning the fundamental role of directional cultural habits such as reading or finger counting. Here, we propose a biological, nonnumerical mechanism for the origin of SNAs based on the asymmetric tuning of animal brains to different spatial frequencies (SFs). The resulting selective visual processing predicts both universal SNAs and their context-dependence. We support our proposal by analyzing the SF content of the stimuli used to document SNAs in newborns. As predicted, the SFs contained in visual patterns with few versus many elements preferentially engage the right versus left brain hemispheres, respectively, thus predicting left- versus rightward behavioral biases. Our "brain's asymmetric frequency tuning" hypothesis explains the perceptual origin of horizontal SNAs for nonsymbolic visual numerosities and might be extensible to the auditory domain.
To construct a coherent multi-modal percept, vertebrate brains extract low-level features (such as spatial and temporal frequencies) from incoming sensory signals. However, frequency processing is lateralized: the right hemisphere favours low frequencies while the left favours higher frequencies, which introduces asymmetries between the hemispheres. Here, we describe how this lateralization shapes the development of several cognitive domains, ranging from visuo-spatial and numerical cognition to language, social cognition, and even aesthetic appreciation, and leads to the emergence of asymmetries in behaviour. We discuss the neuropsychological and educational implications of these emergent asymmetries and suggest future research approaches.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and relies mainly on coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for the analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing the number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering instead increased the time to verify target identity, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
How is semantic information in the mental lexicon accessed and selected during reading? Readers process information from both foveal and parafoveal words. Recent eye-tracking studies hint at bi-phasic lexical activation dynamics, demonstrating that semantically related parafoveal previews can either facilitate or interfere with lexical processing of target words in comparison to unrelated previews, with the size and direction of the effect depending on exposure time to the parafoveal previews. However, the evidence to date is only correlational, because exposure time was determined by participants' pre-target fixation durations. Here we experimentally controlled parafoveal preview exposure duration using a combination of the gaze-contingent fast-priming and boundary paradigms. We manipulated preview duration and examined the time course of parafoveal semantic activation during the oral reading of Chinese sentences in three experiments. Semantic previews led to faster lexical access of target words than unrelated previews only when the previews were presented briefly (80 ms in Experiments 1 and 3). Longer exposure times (100 ms or 150 ms) eliminated semantic preview effects, and a full preview without duration limit resulted in a preview cost, i.e., a reversal of the preview benefit. Our results indicate that high-level semantic information can be obtained from parafoveal words and that the size and direction of the parafoveal semantic effect depend on the level of lexical activation.