A central issue in neuroscience is how the brain selectively adapts to important environmental changes. While the brain needs to adapt to new environments, its architecture must be protected from modification by the continual bombardment of irrelevant information. Clarifying how the brain solves this so-called stability-plasticity dilemma in its sensory areas is the primary goal of my research.
Acquisition of Learning
A basic starting question is: how do we know what to learn? That is, how does a neural system know which information is behaviorally relevant and which is not? This question was most famously addressed by Pavlov's studies of conditioning. He discovered that repeated temporal coincidence of an unconditioned stimulus, such as food, and a conditioned stimulus, such as a tone, results in learning: an association forms between the tone and a response, such as salivation, that is normally elicited by the food.
While conditioning is an influential model of stimulus-response learning, improvements of perceptual abilities in adults (perceptual learning) are classically thought either to arise from attentional selection of behaviorally relevant stimulus properties or to result from mere stimulus exposure. To test the hypothesis that temporal coincidence of a stimulus and a reward results in learning of that stimulus, we designed a new study in which a perceptually "invisible" motion stimulus (i.e., not attended by the subject) was paired with the letter-target of a rapid serial visual presentation (RSVP) task. We found that learning effects previously attributed either to attention or to mere stimulus exposure actually result from a reinforcement process similar to that found in conditioning (Seitz and Watanabe, 2003, Nature). A related study shows that this learning fails during the attentional blink of a target, suggesting that successful recognition of a target is necessary and may serve as an "internal reward" (Seitz et al., 2005, Current Biology). Additionally, when low-contrast motion stimuli are used, subjects develop a perceptual bias (i.e., a conditioned visual response): they report seeing the paired direction even when no stimulus is presented at test (Seitz et al., 2005a, PNAS). This research led to a model of perceptual learning in which learning results from timely interactions between diffuse, reward-related learning signals and bottom-up stimulus signals (see Seitz and Watanabe, 2005, TICS).
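The core idea of this model, that a diffuse reward signal gates Hebbian strengthening of whatever stimulus representations happen to be active at the time, can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters (the feature count, learning rate, and reward rule are all invented for illustration), not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 8            # toy "feature detectors" (e.g., motion directions)
w = np.zeros(n_features)  # readout weights, all starting equal
lr = 0.05                 # learning rate (illustrative value)
paired = 2                # index of the feature paired with reward

for trial in range(500):
    # bottom-up activity: weak noise everywhere, strong drive for the shown feature
    shown = rng.integers(n_features)
    activity = 0.1 * rng.random(n_features)
    activity[shown] += 1.0
    # diffuse reward signal: delivered only when the paired feature was shown
    reward = 1.0 if shown == paired else 0.0
    # reward-gated Hebbian update: plasticity only when reward and activity coincide
    w += lr * reward * activity
```

After training, only the weight of the reward-paired feature has grown appreciably, even though every feature was shown equally often, mirroring the claim that pairing with reinforcement, not exposure per se, drives the selective learning.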
In our latest work on this topic, we have looked directly at the effect of reward on sensory learning, using a liquid-reinforcement paradigm that allowed us to take the "task" out of perceptual learning and to examine the specific hypothesis that reward-related learning signals are sufficient to cause improvements in visual sensitivity (Seitz, Kim & Watanabe, submitted). In this study, participants were given no task during training. Instead, they were instructed to passively view the computer monitor, maintain gaze on a central fixation spot, and enjoy the occasional drop of water delivered through a tube placed in their mouths. We found that subjects showed perceptual learning for the orientation stimulus that was paired in a consistent temporal relationship (i.e., preceding and partially overlapping) with the reward (the drop of water). This result was replicated in a condition in which a binocular suppression technique rendered the orientation stimuli imperceptible throughout the 20-day training period. Furthermore, learning was found to be eye specific (i.e., present only for stimuli shown to the trained eye and not the other eye). These results support the proposal that stimulus-reinforcement pairing enhances responses of human visual cortex selectively for the paired stimulus and leads to plasticity in monocular stages of visual processing.
Consolidation of Learning
In addition to investigating the neural mechanisms that allow for learning, we have conducted research on the mechanisms involved in the stabilization and consolidation of learning. In this series of studies we investigate whether perceptual learning of a given stimulus feature can be disrupted by subsequent training on a different visual feature. We have so far demonstrated that perceptual learning of a hyperacuity stimulus can be disrupted when a second hyperacuity stimulus is learned, although a delay of an hour or more between the training sessions ameliorates the disruption. This interference is highly specific to particular features of the second hyperacuity stimulus and occurs only when the retinotopic locations and stimulus orientations of the two trained stimuli are matched (Seitz et al., 2005b, PNAS).
Another important line of research concerns how multisensory interactions can be learned and the role they play in learning. This work is motivated by accumulating reports of crossmodal interactions in a variety of settings, which show that interactions between modalities are the rule rather than the exception in human sensory processing. Indeed, our first investigation with a multisensory learning paradigm revealed that the presence of auditory features facilitated the learning of visual features (Seitz, Kim & Shams, 2006, Current Biology). We are following up on this early investigation to better understand which types of multisensory interactions are most conducive to learning and how new multisensory associations can be learned. To investigate multisensory associations, we are using techniques of statistical learning, a fast learning paradigm in which new associations develop after only a few minutes of exposure. Initial results indicate that unisensory auditory or visual associations can develop in parallel with, and independently of, multisensory associations (Seitz et al., 2007, Perception).
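The logic of a statistical-learning paradigm, that learners extract associations from the transition statistics of a stimulus stream, can be sketched with a toy computation. The token names, stream structure, and pairing probability below are invented for illustration and do not describe the actual stimuli used in the study:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# toy stimulus stream: an embedded pair (V1 always followed by V2) among fillers
tokens = ["V1", "V2", "V3", "V4"]
stream = []
for _ in range(300):
    if rng.random() < 0.5:
        stream += ["V1", "V2"]                     # structured pair, fixed order
    else:
        stream.append(tokens[rng.integers(2, 4)])  # filler item, V3 or V4

# an ideal learner tracking transition probabilities recovers the embedded pair
bigrams = Counter(zip(stream, stream[1:]))
unigrams = Counter(stream[:-1])
p = {pair: n / unigrams[pair[0]] for pair, n in bigrams.items()}
```

Here the transition probability for the embedded pair (V1 followed by V2) is 1.0, while transitions out of V2 are scattered across several tokens, which is the statistical signature that distinguishes a learned association from chance co-occurrence.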
Modeling Cortical Development
This research is focused on combining elements of molecular biology, anatomy, physiology, and other aspects of developmental neuroscience into a computational model of how the cortex develops. My goal is to create a framework in which an arbitrary brain area can develop through modeling of the nature of its spontaneous inputs, the sequential order of laminar development, the order in which different brain regions develop, and activity-dependent learning laws. The hypothesis is that much of the cortex is similar early in development and that differences in input drive area specialization (the protocortex view). Aspects of innate cortical specification (the protomap view) are modeled as different initial states, learning laws, etc., without the necessity of large-scale hand-wiring. A core issue in my present project is to understand how the subplate helps to coordinate development within multiple cortical layers so that they form consistent receptive field profiles.
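One standard example of the kind of activity-dependent learning law such a framework might employ is Oja's normalized Hebbian rule, under which a unit's weights converge toward the dominant correlation structure of its input, so that input statistics alone can shape receptive fields. The simulation below is an illustrative sketch with invented parameters (the two-dimensional "spontaneous input" and its correlation axis are assumptions), not the actual developmental model:

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated spontaneous input: 2-D activity correlated along one dominant axis
n_steps = 5000
direction = np.array([0.8, 0.6])                 # dominant correlation axis (unit length)
x = rng.normal(size=(n_steps, 2)) * 0.2          # independent background noise
x += rng.normal(size=(n_steps, 1)) * direction   # shared fluctuation along the axis

w = rng.normal(size=2)  # random initial weights
lr = 0.01

for xt in x:
    y = w @ xt
    # Oja's rule: Hebbian term plus implicit weight normalization
    w += lr * y * (xt - y * w)

# the weight vector aligns (up to sign) with the dominant input axis
alignment = abs(w @ direction) / np.linalg.norm(direction)
```

The point of the sketch is that nothing area-specific is wired in: the unit "specializes" to whatever structure its spontaneous input happens to carry, which is the input-driven side of the specialization hypothesis described above.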