PhD, University of California, Berkeley/San Francisco, 1992
Chair, Brain & Cognitive Sciences
Professor, Brain & Cognitive Sciences, Biomedical Engineering, Neurobiology & Anatomy,
Center for Navigation & Communication Sciences, and Center for Visual Science
My laboratory studies cortical circuits that mediate visual perception and visually guided behavior. This work involves a
creative fusion of the disciplines of neurophysiology, psychology, and computation. Monkeys are trained to perform demanding
discrimination tasks, and we record from single or multiple neurons in visual cortex during performance of these tasks. This
allows us to directly compare the ability of neurons to discriminate between different sensory stimuli with the ability of the
behaving animal to make the same discrimination. This approach also allows us to examine the relationship between neuronal
activity and perceptual decisions (independent of the physical stimulus). In addition, the techniques of electrical
microstimulation and/or reversible inactivation are used to establish causal links between physiology and behavior. Computational
modeling plays an important role in interpreting results and guiding future experimentation.
Our research currently has two main foci:
Neural mechanisms of depth perception. The image formed on each retina is a
two-dimensional projection of the three-dimensional (3D) world. Objects at different depths project onto slightly disparate
points on the two retinas, and the brain is able to extract these binocular disparities from the retinal images and construct
a vivid sensation of depth. My lab studies the mechanisms by which binocular disparity information is encoded, processed, and
read out by the brain in order to perceive depth and compute 3D surface structure. We are beginning to elucidate the brain
areas that contribute to stereoscopic depth perception under specific task conditions, although much remains to be learned
about this interesting cognitive process. We have also recently discovered a population of neurons that combines visual motion
with eye movement signals to code depth from motion parallax, and future work will focus on how depth cues from disparity and
motion parallax are integrated by neurons.
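As a rough geometric illustration of why binocular disparity carries depth information, the sketch below triangulates depth under a simplified parallel-camera stereo model. The function name, focal length, and baseline values are all hypothetical and chosen only for this example; they are not drawn from the lab's work.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth as Z = f * b / d under a simplified parallel-camera
    stereo geometry (illustrative only; real binocular geometry is more
    complex, with converging eyes and vertical disparities)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: 1000 px focal length, 6.5 cm interocular separation.
near = depth_from_disparity(50.0, 1000.0, 0.065)  # larger disparity -> nearer
far = depth_from_disparity(5.0, 1000.0, 0.065)    # smaller disparity -> farther
```

The key relationship the example captures is the inverse one: nearby objects project to widely separated points on the two retinas (large disparity), while distant objects project to nearly corresponding points (small disparity).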
Sensory integration for self-motion perception. To accurately perceive our own
motion through space, we integrate information from the visual and vestibular systems. Integrating information across different
sensory systems is a fundamental issue in systems neuroscience. Because visual and vestibular signals originate in different
spatial frames of reference and have different temporal dynamics, a nontrivial set of computations must occur for
these cues to be combined perceptually. Using a 3D virtual reality system to provide monkeys with naturalistic combinations of
visual stimuli and inertial motion, we are studying how cortical neurons integrate visual and vestibular signals to compute
one's direction of heading through 3D space. We are testing whether the activity of multi-modal neurons in the cortex can
account for the changes in perceptual performance that occur when visual and vestibular signals are combined synergistically or
placed in conflict. The ultimate goal is to develop a detailed neurobiological account of Bayesian optimal cue integration.
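The standard Bayesian (maximum-likelihood) model of cue integration referenced above can be sketched for two independent Gaussian cues: each cue is weighted by its inverse variance, and the combined estimate is always at least as reliable as the better single cue. The numbers and variable names below are illustrative, not data from these experiments.

```python
def fuse_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Inverse-variance-weighted combination of two independent Gaussian
    cues -- the standard 'optimal integration' model. Returns the combined
    mean and variance; the combined variance never exceeds either input."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = (var_vis * var_vest) / (var_vis + var_vest)
    return mu, var

# Illustrative heading estimates (deg): visual cue noisier than vestibular,
# so the fused estimate is pulled toward the vestibular cue.
mu, var = fuse_cues(mu_vis=4.0, var_vis=4.0, mu_vest=0.0, var_vest=1.0)
```

Under this model, cue conflicts are informative experimentally: the weight each cue receives in behavior (and, by hypothesis, in multimodal neurons) should track its relative reliability.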
A complete list of publications is available via PubMed.
Chen A, DeAngelis GC, Angelaki DE (2011): Representation of vestibular and visual cues to self-motion in ventral intraparietal (VIP) cortex. Journal of Neuroscience, 31: 12036-12052.
Chen A, DeAngelis GC, Angelaki DE (2011): Convergence of vestibular and visual self-motion signals in an area of the posterior sylvian fissure. Journal of Neuroscience, 31: 11617-11627.
Chen A, DeAngelis GC, Angelaki DE (2011): A comparison of vestibular spatiotemporal tuning in macaque cortical areas PIVC, VIP and MSTd. Journal of Neuroscience, 31: 3082-3094.
Chen A, DeAngelis GC, Angelaki DE (2010): Macaque parieto-insular vestibular cortex: responses to self-motion and optic flow. Journal of Neuroscience, 30: 3022-3042.
Angelaki DE, Gu Y, DeAngelis GC (2009): Multisensory integration: psychophysics, neurophysiology, and computation. Current Opinion in Neurobiology, 19: 452-458.
Chowdhury SA, DeAngelis GC (2008): Fine discrimination training alters the causal contribution of macaque area MT to depth perception. Neuron, 60: 367-377.
Gu Y, DeAngelis GC, Angelaki DE (2007): A functional link between area MSTd and heading perception based on vestibular signals. Nature Neuroscience, 10: 1038-1047.
Fetsch CR, Wang S, Gu Y, DeAngelis GC, Angelaki DE (2007): Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area. Journal of Neuroscience, 27: 700-712.
Uka T, DeAngelis GC (2006): Linking neural representation to function in stereoscopic depth perception: roles of the middle temporal area in coarse vs. fine disparity discrimination. Journal of Neuroscience, 26: 6791-6802.