Check out the UCR Brain Game Center

Research

A central issue in neuroscience is how the brain selectively adapts to important environmental changes. While the brain needs to adapt to new environments, its architecture must also be protected from modification by the continual bombardment of undesirable information. Our research addresses mechanisms of learning and memory using behavioral, computational, and neuroscientific methodologies. We aim to clarify the rules by which the brain solves this so-called stability-plasticity dilemma and to apply these rules to induce desirable brain plasticity.

Brain Training

Imagine if you could see better, think more clearly, have improved memory, and even become more intelligent through simple training done on your own computer, smartphone, or tablet. These are the promises of a new generation of "Brain Training" techniques. While extant approaches are controversial in that they "teach to the test" and often fail to transfer to real-world activities, advances in modern Psychology, Neuroscience, and Computer Science (e.g., interactive software, 3D graphics, artificial intelligence) allow for the development of next-generation "brain game" technology that can unlock tremendous benefits for individuals and society as a whole. We've founded the UCR Brain Game Center for Mental Fitness and Wellbeing to research, test, and disseminate evidence-based, scientifically optimized brain fitness games that transfer benefits to real-life activities.

A first example of our approach is a vision-training video game that improves the way the brain processes visual information. This training has led to broad-scale improvements in central and peripheral vision (Deveau, Lovcik, and Seitz, 2014). After playing the game, the UCR baseball team had measurably better vision, fewer strike-outs, and more runs created, and ultimately won an estimated 4 to 5 extra games out of a 54-game season (Deveau, Ozer, and Seitz, 2014). The real-world improvements yielded by this neuroscientifically designed game are an exciting demonstration of the potential of vision training, and we are currently researching applications of vision training in many populations, including those with low vision (amblyopia, macular degeneration, glaucoma, etc.), mental health conditions (ADHD, schizophrenia, dyslexia, etc.), and professionals seeking superior vision (athletes, law enforcement, radiologists, etc.).

We are currently developing additional brain-training games to target Perception (Audition, Vision), Memory, Attention, Executive Function, and Language Skills. We use an evidence-based research approach with the goal of inducing changes that manifest in the daily activities of the users. A limitation of most brain-training approaches is that they emphasize simplistic designs targeting specific mechanisms, often giving rise to learning effects that fail to generalize beyond experimental testing conditions. Instead, we use a novel integrative approach that combines multiple techniques known to improve learning: training with a diverse set of stimuli, optimized stimulus presentation, multisensory facilitation, motivating tasks, maximizing participant performance-confidence, and consistently reinforcing training stimuli. Each of these has individually been shown to increase the speed, magnitude, and generality of learning, and together they serve our goal of creating training programs that generalize powerfully to real-world tasks. Research projects include memory training in children with ADHD and in seniors with cognitive decline, auditory training in veterans with TBI, and more.

Mechanisms of Learning

How do we know what to learn? In particular, what are the rules that the brain uses to determine how it changes through experience? A key focus of our research is to detail these mechanisms of learning so that they can be exploited in our brain-training studies. Some of these approaches are as follows:

Task Irrelevant Perceptual Learning

We hypothesize that our perceptual systems may dominantly learn information that is important to the observer and that this learning may be mediated by reinforcement processes in the brain. To test this hypothesis, we temporally paired a subliminal, task-irrelevant motion stimulus with a task target and found sensitivity enhancements for that motion stimulus (Seitz and Watanabe, Nature 2003). These results led to the idea that perceptual learning is gated by the confluence between reinforcement signals and task-irrelevant feature signals (Seitz and Watanabe, TICS 2005). Confirming this hypothesis, we found that perceptual learning can arise from pairing a stimulus with a reward (Seitz, Kim, Watanabe, Neuron 2009).
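
One way to make this gating idea concrete is a minimal learning rule, offered purely as an illustrative sketch (the learning rate and the exact form of the signals are notational assumptions of ours, not a fitted model):

```latex
\Delta w_i = \eta \, r(t) \, s_i(t)
```

Here s_i(t) is the (possibly subliminal) activity of feature channel i, r(t) is a diffuse reinforcement signal triggered by task targets or rewards, eta is a learning rate, and w_i is the strength of that feature's representation or readout. Because the update is a product, learning occurs only when feature activity and reinforcement coincide in time, which is precisely the confluence that the temporal-pairing experiments manipulate.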

This research, under the nomenclature of task-irrelevant perceptual learning (TIPL; Seitz and Watanabe, Vision Research 2009), became a major theme of our research and enabled us to demonstrate a complex interplay between mechanisms of attention and reinforcement in perceptual learning (Seitz et al., Current Biology 2005; Choi, Seitz, Watanabe, Vision Research 2009; Tsushima, Seitz, Watanabe, Current Biology 2009; Franko, Seitz, Vogels, J. Cog. Neurosci 2009; among others). Interestingly, TIPL is as strong in magnitude as direct training on the same stimuli (Seitz et al., Cognition 2011), or even stronger when attention gets in the way (Vlahou, Protopapas, & Seitz, JEP General 2012). Our work, and that of others, shows that TIPL is a basic mechanism of learning that impacts visual abilities of motion processing (Seitz, Watanabe, and colleagues: Nature 2003, PNAS 2005, Current Biology 2005), orientation processing (Nishina, Seitz, Kawato, & Watanabe, JOV 2007), critical flicker fusion thresholds (Seitz, Nanez, Holloway, & Watanabe, Hum Psychopharm 2005, PLoS ONE 2006), contour integration (Rosenthal & Humphreys, Psych Sci 2010), auditory formant processing (Seitz et al., Cognition 2010), and phonetic processing (Vlahou, Protopapas, & Seitz, JEP General 2012). This research on TIPL has advanced our understanding of learning systems in the brain and has had a significant impact on the modern field of perceptual learning.

Task Irrelevant Learning in Visual Memory

Recently, we have extended the TIPL paradigm to understand aspects of fast learning under the guise of visual memory. In this novel phenomenon of fast task-irrelevant learning (fast-TIL), a single trial pairing an image with a task target produces enhanced memorization of the target-paired stimulus (Seitz and Leclercq, Vision Research 2011; see also Lin et al., PLoS Bio 2010; Swallow and Jiang, Cognition 2010). Fast-TIL further demonstrates the ubiquity of task-irrelevant learning as a learning process in the brain and has provided an extremely useful method to further explore the mechanisms of learning and memory. For example, we have shown that both attentional orienting (Leclercq and Seitz, AP&P 2012) and alerting signals (Leclercq and Seitz, Acta Psychologica 2012) facilitate fast-TIL. Furthermore, there are substantial gender differences: fast-TIL is most consistent in men (Leclercq and Seitz, PLoS ONE 2012), although it can be found in women under conditions of uncertainty (Leclercq and Seitz, in submission).

This research on fast-TIL makes a critical link between mechanisms of perceptual learning and memory encoding, and the efficiency of the design (single-trial learning rather than the thousands of trials required to achieve TIPL) has allowed insights that would have been difficult to achieve otherwise. For example, a key finding is that while in TIPL attentional orienting to the learning stimulus disrupts learning (Choi, Seitz, Watanabe, Vision Research 2009), in fast-TIL attention facilitates learning (Leclercq and Seitz, AP&P 2012). A simple model, which parsimoniously explains much of the learning literature, is that attention and reinforcement play complementary roles in learning: reinforcement serves to gate learning, while attention alters stimulus representations so as to accentuate aspects of the scene that the observer knows to be important and to diminish those thought to be irrelevant.
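
A toy simulation can make this division of labor explicit. The sketch below (Python; the gain values, learning rate, and reinforcement probability are hypothetical choices for illustration only) treats reinforcement as a multiplicative gate on whether any learning happens on a trial, and attention as a gain field that shapes which features dominate the update:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 8
w = np.zeros(n_features)  # learned associative weights
eta = 0.1                 # learning rate (illustrative value)

def attention_gain(stimulus, attended):
    """Attention reshapes the representation: boost attended features,
    suppress the rest (the 1.5/0.5 gains are assumptions of this sketch)."""
    return np.where(attended, 1.5, 0.5) * stimulus

for trial in range(100):
    stimulus = rng.random(n_features)     # feature activations this trial
    attended = np.arange(n_features) < 2  # observer attends features 0 and 1
    reinforced = rng.random() < 0.3       # diffuse reinforcement signal,
                                          # e.g. detection of a task target
    # Reinforcement gates *whether* learning happens on this trial;
    # attention shapes *which* features dominate the update.
    w += eta * float(reinforced) * attention_gain(stimulus, attended)

print(w.round(2))  # attended features accumulate the largest weights
```

Under these assumptions, unattended features still accrue some learning whenever reinforcement arrives (as in TIPL), while attended features learn fastest (as in fast-TIL).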

Statistical Learning

If attention plays a key role in selecting what to learn, how do we learn what we should attend to? One answer is statistical learning, in which people implicitly pick up statistical regularities in the environment (Kim, Seitz, Feenstra and Shams, Neuroscience Letters, 2009), and these regularities then inform behavior. Recent research, in collaboration with Peggy Seriès, has addressed how Bayesian modeling in conjunction with psychophysical techniques can lead to an understanding of how perceptual priors develop. In one study (Chalk, Seitz, Seriès, JOV 2010), we examined whether expectations of simple stimulus features can be developed implicitly through a fast statistical learning procedure (see also Gekas, Seitz et al., 2013). We found that participants quickly and automatically developed expectations for the most frequently presented directions of motion, and that this altered their perception of new motion directions, inducing attractive biases in the perceived direction as well as visual hallucinations in the absence of a stimulus. In a follow-up study (Sotiropoulos, Seitz, Seriès, Current Biology 2010), we linked statistical learning and perceptual learning by showing that changing expectations about the speed of stimuli alters the perceived motion direction of those stimuli. These findings are important in that they validate previous conjectures that naive subjects expect to see slow speeds, and thus perceive the direction that corresponds to the slowest speed under conditions of uncertainty. Subjects exposed to high-speed stimuli shift their expectations towards higher speeds and thus perceive faster speeds more often. More generally, these findings suggest that expectations thought to result from a lifetime of sensory inputs remain plastic, and that the brain is constantly able to revise even its most basic assumptions about the environment. In a further link between statistical learning and perceptual learning, we find that even fast statistical learning can result in sensitivity improvements for statistically reliable stimuli (Barakat, Seitz and Shams, Cognition 2013).
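
The flavor of these Bayesian accounts can be captured in a few lines: with Gaussian assumptions, the percept is a reliability-weighted average of the sensory measurement and the learned prior, so a prior centered on frequently exposed directions (or on slow speeds) pulls percepts toward it, and more strongly as sensory uncertainty grows. A minimal sketch (all numbers are illustrative, not fitted to data):

```python
def perceived(measurement, sigma_like, prior_mean, sigma_prior):
    """Posterior mean for a Gaussian prior times a Gaussian likelihood:
    a reliability-weighted average of sensory evidence and expectation."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w * measurement + (1 - w) * prior_mean

# A prior learned over frequently presented motion directions (degrees).
prior_mean, sigma_prior = 0.0, 10.0

# A new direction at 20 degrees, seen under low vs. high sensory noise.
print(perceived(20.0, 5.0, prior_mean, sigma_prior))   # 16.0: mild attraction
print(perceived(20.0, 20.0, prior_mean, sigma_prior))  # 4.0: strong attraction
```

The same weighting is why a slow-speed prior exerts its strongest pull under conditions of uncertainty, as in the speed results described above.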

Multisensory Learning

Another key aspect of our research has been how information acquired from different sensory modalities interacts to produce learning. In collaboration with Ladan Shams (UCLA), we have investigated how multisensory information contributes to perceptual learning. These foundational studies showed that auditory features facilitated learning of visual features (Seitz, Kim, Shams, Current Biology 2006) and that the benefits of multisensory training were specific to training with congruent auditory-visual stimuli (Kim, Seitz, Shams, PLoS ONE 2008), suggesting that the facilitation of learning is not due to a putative alerting effect of sound during training. In other studies, we have used statistical learning methodologies to investigate how associations between stimuli develop within and across sensory modalities, and found that unisensory auditory or visual associations develop in parallel and independently of multisensory associations (Seitz et al., Perception 2007). In one of the first review pieces in this burgeoning research area, we consolidated research from studies of perceptual learning, memory, conditioning, and education to put forth a set of competing models of how this facilitation of visual learning takes place (Shams and Seitz, TICS 2008) and how multisensory recalibration and adaptation relate to facilitation (Shams, Wozny, Kim, Seitz, Frontiers in Perceptual Science, 2011).

Interactions Between Different Learning Processes

A key focus of our research regards the fact that learning is not a simple, singular process: even in the simplest task, learning takes place at multiple levels of complexity. In some cases different aspects of learning appear to be independent (Seitz, Kim, et al., Perception, 2007), in others they facilitate each other (Seitz, Kim, and Shams, Current Biology, 2008), and in still others they interfere (Seitz, et al., PNAS, 2005). However, while there has been significant progress in understanding the different components of learning, the interactions between them are not well understood.

In a first study showing interference in perceptual learning (Seitz et al., PNAS 2005), we showed that learning of a visual hyperacuity stimulus can disrupt the consolidation of previously learned visual stimuli (see also Hung and Seitz, PLoS One 2011). This interference effect was specific to the spatial locations and orientations of the stimuli and depended on the interval between training the two stimuli. These results showed that perceptual learning has a similar time-course of consolidation as motor learning and synaptic plasticity, and suggest that motor learning and perceptual learning share basic mechanisms of consolidation. To better understand how different components of learning interact, we collaborated with Peggy Seriès (University of Edinburgh) to develop a computational model of perceptual learning (Sotiropoulos, Seitz, & Seriès, Vision Research, 2011). We found that a simple readout model can account for a diversity of findings, such as the disruption of learning of one task by practice on a similar task, as well as the transfer of learning across tasks and stimulus configurations. The key finding is that disruption typically occurs when trained stimulus features are very similar and learning requires opposing changes in connection weights. These simulations help explain existing results in the literature and provide important insights and predictions regarding the reliability of different hyperacuity tasks and stimuli.
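
The key intuition of that readout account can be illustrated in a few lines (our paraphrase of the mechanism; the tuning widths, learning rate, and trial counts below are made-up values, not the published model's parameters). Two tasks read out from the same population of orientation-tuned units, and when the trained stimuli are very similar, the second task demands weight changes that oppose and overwrite those of the first:

```python
import numpy as np

# A population of orientation-tuned units with Gaussian tuning curves.
prefs = np.linspace(0.0, 180.0, 37)  # preferred orientations, in degrees

def response(theta, width=15.0):
    return np.exp(-0.5 * ((prefs - theta) / width) ** 2)

def train(w, theta, target, eta=0.05, n_trials=200):
    """Delta-rule training of a shared readout toward a target decision."""
    for _ in range(n_trials):
        r = response(theta)
        w = w + eta * (target - w @ r) * r
    return w

w = np.zeros_like(prefs)               # readout weights shared by both tasks
w = train(w, theta=45.0, target=+1.0)  # task A: respond +1 near 45 degrees
print(w @ response(45.0))              # ~ +1.0: task A learned

w = train(w, theta=48.0, target=-1.0)  # task B: similar stimulus, opposite demand
print(w @ response(45.0))              # pulled toward -1: task A is disrupted
```

With widely separated stimuli (say, 45 and 135 degrees) the two readouts barely overlap and both tasks coexist, which is the transfer-versus-disruption boundary this kind of model predicts.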

In another research line, we asked how learning of spatial contexts (Contextual Learning) and of basic task features (Feature Learning) develop together in the same task. We found that Contextual Learning and Feature Learning of target and background elements are acquired simultaneously, but that these learning effects are behaviorally independent (Le Dantec, Melton and Seitz, Journal of Vision, 2012) and involve dissociable brain processes (Le Dantec and Seitz, in submission). Impressively, we found a very high degree of spatial specificity of learning, which challenges contemporary studies indicating that training at multiple spatial locations leads to generalization of learning (Le Dantec and Seitz, Frontiers in Psychology, 2012).

These studies are the first in a broader research plan to understand how different learning processes interact in complex tasks.

Brain Imaging

We are currently using fMRI and EEG to understand the physiological processes underlying learning and memory. The lab has a dedicated 128-channel Biosemi EEG system, and we have collaborations with UCLA and UCSD where we conduct fMRI research. Existing projects involve understanding the neuronal mechanisms of contextual and perceptual learning, statistical learning, multisensory learning, and more. These tools allow us to better understand how learning occurs in the brain and how to better optimize brain-training approaches.

Computational Modeling

Building models is essential to gain deeper insight into the brain mechanisms involved in learning. Models have the advantage that they can combine elements of molecular biology, anatomy, physiology, imaging, and psychophysics and make quantitative predictions regarding brain activity and behavior. A limitation of extant models of learning is that they incorporate just one or two mechanisms (e.g., readout vs. representation changes, stimulus enhancement vs. noise reduction, supervised vs. unsupervised, multisensory vs. unisensory). Further, most models use scalar inputs (e.g., angle values) assumed to come from cells tuned for each value of the feature under study, and do not accept raw stimulus images. We are working to move beyond these limitations by building hierarchical Bayesian models that can incorporate many potential mechanisms of learning and that take as inputs the same stimuli used in training. Our goal is to elucidate differences in learning mechanisms across training types, subject groups, and individuals. The purpose is both to further our understanding of basic brain mechanisms of learning and to learn how to optimize learning in our interventions.
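
As one illustration of what taking raw images as input means in practice (a sketch only; the filter bank, image size, and frequencies below are illustrative assumptions, not our actual model), the first stage can be a bank of oriented Gabor filters applied directly to the stimulus image, yielding a population response that later, learnable stages read out:

```python
import numpy as np

def gabor(size=32, theta=0.0, freq=0.15, sigma=6.0):
    """An oriented Gabor filter: a V1-like front end that lets a model
    accept raw images rather than pre-extracted scalar feature values."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# Stage 1: a bank of oriented filters replaces hand-coded scalar inputs.
thetas = np.deg2rad(np.arange(0.0, 180.0, 22.5))
bank = np.stack([gabor(theta=t) for t in thetas])

def encode(image):
    """Image -> vector of filter activations. Each later stage (readouts,
    priors over features) can carry its own learnable parameters."""
    return np.tensordot(bank, image, axes=([1, 2], [0, 1]))

# Example: encode a vertical-ish grating as a stand-in for a training image.
stimulus = gabor(theta=np.deg2rad(90.0))
print(encode(stimulus).round(1))  # peak response at the 90-degree filter
```

In a hierarchical Bayesian treatment, each stage of such a pipeline carries its own priors and learnable parameters, which is what allows multiple candidate learning mechanisms to be expressed, and compared, within a single model.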