ORCID
https://orcid.org/0000-0003-1762-2829
Date of Award
12-18-2024
Degree Name
Doctor of Philosophy (PhD)
Degree Type
Dissertation
Abstract
Visual recognition, a key perceptual process, enables primates to identify faces, food, tools, and potential threats in their environment. Cortical neurons along the ventral visual stream—spanning from early visual cortex (V1) through inferotemporal cortex (IT), with long-range projections to ventrolateral prefrontal cortex (vlPFC)—are essential for visual recognition. Traditional models propose that neuronal representations increase in complexity at each stage of this pathway, culminating in object-specific representations in higher-order cortex. However, this hypothesis remains debated, particularly as artificial visual models have countered that individual neurons along this pathway may function more as filters contributing to distributed representations of objects. Thus, this work uses in vivo neurophysiology, classic visual paradigms, and cutting-edge neuron-guided image synthesis to extract and analyze direct samples of the neural code for vision (“prototypes”) across cortical areas. Results reveal that visual neurons throughout V1, V4, IT, and vlPFC encode filter-like representations of learned visual statistics, with no significant increase in complexity across the ventral stream. My findings suggest that, rather than building in complexity, visual representations across cortical areas—from V1 to vlPFC—exhibit an intermediate complexity that presumably balances richness of visual detail against a tendency to overfit. While prototype complexity remained consistent across the ventral stream, the specificity of encoded visual features increased progressively, with later areas encoding less common—but not more complex—motifs. Though the specificity of encoded visual motifs did increase from V1 to vlPFC, I found no evidence that these motifs culminate in object-specific representations.
These findings support a filter-based model of visual recognition in which a large-scale network of cortical neurons encodes statistical features common across natural scenes, but not specific to particular semantic categories like “faces.” Importantly, visually responsive vlPFC neurons demonstrated stable, filter-like representations comparable to those observed in occipitotemporal cortex, marking the first direct evidence of low-level visual coding in prefrontal cortex. By documenting low-level visual representations in vlPFC, this work opens new avenues for exploring the broader network of brain regions putatively involved in visual recognition and related processes, such as the amygdala, pulvinar, and orbitofrontal cortex; further, this thesis provides an experimental framework to comprehensively define the visual encoding properties of neurons in these regions.
Language
English (en)
Chair and Committee
Carlos Ponce
Committee Members
Daniel Kerschensteiner; David Van Essen; Ilya Monosov; Talia Konkle
Recommended Citation
Bockler, Olivia Rose, "A Filter-Based Model of Visual Recognition" (2024). Arts & Sciences Electronic Theses and Dissertations. 3362.
https://openscholarship.wustl.edu/art_sci_etds/3362