
An image-computable psychophysical spatial vision model

A large part of classical visual psychophysics was concerned with the fundamental question of how pattern information is initially encoded in the human visual system. From these studies a relatively standard model of early spatial vision emerged, based on spatial frequency and orientation-specific channels followed by an accelerating nonlinearity and divisive normalization: contrast gain-control. Here we implement such a model in an image-computable way, allowing it to take arbitrary luminance images as input. Testing our implementation on classical psychophysical data, we find that it explains contrast detection data including the ModelFest data, contrast discrimination data, and oblique masking data, using a single set of parameters. Leveraging the advantage of an image-computable model, we test our model against a recent dataset using natural images as masks. We find that the model explains these data reasonably well, too. To explain data obtained at different presentation durations, our model requires different parameters to achieve an acceptable fit. In addition, we show that contrast gain-control with the fitted parameters results in a very sparse encoding of luminance information, in line with notions from efficient coding. Translating the standard early spatial vision model to be image-computable resulted in two further insights: First, the nonlinear processing requires a denser sampling of spatial frequency and orientation than optimal coding suggests. Second, the normalization needs to be fairly local in space to fit the data obtained with natural image masks. Finally, our image-computable model can serve as a tool in future quantitative analyses: It allows optimized stimuli to be used to test the model and variants of it, with potential applications as an image-quality metric. In addition, it may serve as a building block for models of higher level processing.
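The abstract describes the model only at an architectural level: spatial-frequency- and orientation-tuned channels, an accelerating pointwise nonlinearity, and spatially local divisive normalization (contrast gain-control) applied to an arbitrary luminance image. The following is a minimal Python sketch of that generic pipeline for orientation; the Gabor filter bank, exponents, semi-saturation constant, and pool size are illustrative assumptions, not the authors' code or fitted parameter values.

```python
# Illustrative sketch of the generic early-spatial-vision pipeline described in the
# abstract: filter bank -> accelerating nonlinearity -> divisive normalization.
# All helper names and parameter values are assumptions for illustration only.

import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter


def gabor(size, wavelength, orientation, sigma):
    """Odd-symmetric Gabor patch; one channel of the (illustrative) filter bank."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.sin(2.0 * np.pi * xr / wavelength)


def channel_responses(image, wavelengths, n_orientations, size=31):
    """Linear responses of spatial-frequency x orientation channels."""
    responses = []
    for wl in wavelengths:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kern = gabor(size, wl, theta, sigma=0.5 * wl)
            responses.append(fftconvolve(image, kern, mode="same"))
    return np.stack(responses)  # shape: (channels, H, W)


def gain_control(responses, p=2.4, q=2.0, c50=0.1, pool_sigma=4.0):
    """Accelerating nonlinearity followed by divisive normalization.

    Each rectified channel response is raised to an exponent and divided by a
    semi-saturation constant plus a pool of activity that is local in space
    (Gaussian blur) and summed across channels. Exponents, constant, and pool
    size here are placeholders, not the fitted values from the paper.
    """
    rect = np.abs(responses)
    numerator = rect ** p
    pooled = gaussian_filter(rect ** q, sigma=(0, pool_sigma, pool_sigma)).sum(axis=0)
    return numerator / (c50 ** q + pooled)


if __name__ == "__main__":
    # Example: run the pipeline on a random stand-in for a luminance image.
    rng = np.random.default_rng(0)
    image = rng.standard_normal((128, 128)) * 0.05 + 0.5
    resp = channel_responses(image, wavelengths=[4, 8, 16], n_orientations=4)
    normalized = gain_control(resp)
    print(normalized.shape)  # (12, 128, 128)
```

The spatially local normalization pool in this sketch mirrors the abstract's observation that the normalization needs to be fairly local in space to account for data obtained with natural-image masks.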

Metadata
Author details:Heiko Herbert Schütt, Felix A. Wichmann
DOI:https://doi.org/10.1167/17.12.12
ISSN:1534-7362
Pubmed ID:https://pubmed.ncbi.nlm.nih.gov/29053781
Title of parent work (English):Journal of Vision
Publisher:Association for Research in Vision and Ophthalmology
Place of publishing:Rockville
Publication type:Article
Language:English
Year of first publication:2017
Publication year:2017
Release date:2020/04/20
Tag:image-computable; model; psychophysics; spatial vision
Volume:17
Number of pages:35
Funding institution:German Federal Ministry of Education and Research (BMBF) through the Bernstein Computational Neuroscience Program Tübingen [FKZ: OGQ1002]; Deutsche Forschungsgemeinschaft [WI 2103/4-1]
Peer review:Refereed
Institution name at the time of the publication:Humanwissenschaftliche Fakultät / Exzellenzbereich Kognitionswissenschaften