Research

Overview of the Lab

The Computational Cognition and Perception Lab uses experimental and computational methodologies to study human learning and decision making in cognitive, perceptual, and motor domains. How is new information—whether it's a friend's new phone number or a newly recognized visual feature that distinguishes the appearances of identical twins—acquired, and how does it become established in memory? How is perceptual learning similar to or different from cognitive learning and motor learning? How do people reason about environments with complex temporal dynamics? How do they choose actions in these environments so as to achieve a goal? To what extent are people's behaviors consistent with the optimal behaviors of computational models based on Bayesian statistics? Our research lab addresses these and other questions. We are currently seeking new graduate students and postdoctoral fellows to join our lab. If you're interested, please contact .

Research Data Sets

See & Grasp Data Set

The See & Grasp data set contains both visual and haptic features for a set of objects known as Fribbles. It is described in Yildirim & Jacobs (2013), and can be accessed from here.

Selected Research Projects

Perceptual Learning Based on Sensory Redundancy

Perceptual environments are highly redundant. People obtain information from many sensory modalities, including vision, audition, and touch. Individual modalities also contain multiple information sources; visual environments, for instance, give rise to many visual cues, including motion, texture, and shading. We've been interested in how people take advantage of sensory redundancy for the purposes of perceptual learning. For example, a person might (unconsciously) notice that visual cues A and B provide consistent information about the shape of an object whereas cue C indicates a different shape. If so, the person might conclude that cues A and B are reliable information sources, but that cue C is less reliable. The person can then adapt his or her sensory integration rule so as to place greater weight on the information based on cues A and B, and less weight on the information based on cue C. A person who adapts his or her perceptual system in this way would be behaving in a manner consistent with an optimal model based on Bayesian statistics, known as an Ideal Observer.
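To make the idea concrete, here is a minimal sketch of the reliability-weighted cue combination an Ideal Observer performs, assuming each cue provides an independent Gaussian estimate of the same scene property; the cue values and variances below are purely illustrative, not data from our experiments.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Bayesian (inverse-variance weighted) combination of independent Gaussian cues.

    estimates: each cue's estimate of a scene property (e.g., object shape or depth)
    variances: each cue's noise variance; a low variance means a reliable cue
    Returns the combined estimate and its variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)  # reliability = 1 / variance
    weights = reliabilities / reliabilities.sum()              # weights sum to 1
    return float(weights @ estimates), float(1.0 / reliabilities.sum())

# Illustrative values: cues A and B agree and are treated as reliable; cue C
# disagrees and has been judged noisy, so it receives little weight.
estimate, variance = combine_cues(estimates=[10.2, 9.8, 14.0], variances=[1.0, 1.0, 4.0])
```

On this view, perceptual learning amounts to updating the variance (reliability) assigned to each cue from experience, which in turn changes the weights in the integration rule.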
Effective Perceptual Learning Training Procedures
It has often been noted that motor training in adults leads to large and robust learning effects, whereas visual training often leads to relatively small learning effects. To date, it is not known whether the limited effects of visual training in adults are due to inherent properties of our visual systems or, perhaps, to our immature knowledge of how to optimally train people to improve their perceptual abilities. If we learn more about how to train people's visual systems, visual therapy may one day be as effective as physical therapy is today. Our lab is studying new ways of training people to perform perceptual tasks. One hypothesis we are examining is that people learn best in environments that are as natural as possible: for example, people should be able not only to see objects, but also to touch and move them. Sophisticated virtual reality equipment allows us to create experimental environments in which people can see, touch, and hear "virtual" objects. The University of Rochester has fantastic (and highly unusual) virtual reality equipment. The figure on the top left shows a subject using the "visual-haptic" virtual reality environment, in which a subject can both see and touch objects. The figure on the bottom left shows a subject using the "large field of view" virtual reality environment, in which subjects are presented with visual stimuli spanning a full 180 degrees.

Perceptual Expertise

Nearly all adults are visual experts in the sense that we are able to infer the structures of three-dimensional scenes with an astonishing degree of accuracy based on two-dimensional images of those scenes projected on our retinas. Moreover, within limited sub-domains such as visual face recognition, nearly all of us are able to discriminate and identify objects whose retinal images are highly similar. As remarkable as our visual abilities are, perhaps more impressive are the abilities of special populations of individuals trained to make specific visual judgments that people without such training are unable to make. For example, radiologists are trained to visually distinguish breast tumors from other tissue by viewing mammograms, geologists to recognize different types of rock samples, astronomers to interpret stellar spectrograms, and biologists to identify different cell structures. What is it that experts in a visual domain know that novices don't? To address this question, we've been training people to discriminate visual objects with very similar appearances. For example, the image below shows a sequence of objects that gradually morph from one shape to another. People initially find it difficult to distinguish similar shapes, but rapidly learn to do so with modest amounts of training.
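As a rough illustration of how such a morph continuum can be built (the actual stimuli are rendered three-dimensional objects; this sketch simply assumes each shape is summarized by a vector of shape parameters), intermediate objects can be generated by linear interpolation between the two endpoint shapes:

```python
import numpy as np

def morph_sequence(shape_a, shape_b, n_steps=7):
    """Linearly interpolate between two shape-parameter vectors.

    Returns n_steps shapes running from shape_a to shape_b; neighboring shapes
    differ only slightly, which is what makes the discrimination hard at first.
    """
    shape_a = np.asarray(shape_a, dtype=float)
    shape_b = np.asarray(shape_b, dtype=float)
    return [(1.0 - t) * shape_a + t * shape_b for t in np.linspace(0.0, 1.0, n_steps)]

# Hypothetical three-parameter shapes (e.g., elongation, curvature, taper).
sequence = morph_sequence([1.0, 0.2, 0.5], [1.6, 0.9, 0.1])
```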
Dynamic Decision Making

An example of the type of task we have used in our experiments is illustrated on the left. An object can move along the horizontal dimension, and the subject applies forces to it with the computer mouse to move it left or right. The subject's goal is to apply a sequence of forces so that the object reaches a target region as quickly as possible and stays in that region for as long as possible. The task is difficult because small forces might move the object only a small amount toward the target, whereas large forces might cause it to overshoot the target.
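A minimal simulation of this kind of force-control task (illustrative dynamics only; the target region, mass, and time step are assumptions rather than the experiment's parameters) makes the difficulty explicit: forces change the object's velocity rather than its position, so momentum can carry the object past the target.

```python
def simulate_trial(forces, target=(1.8, 2.2), mass=1.0, dt=0.1):
    """Apply a sequence of horizontal forces to an object and track its position.

    Each force sets the object's acceleration, so the object carries momentum and
    overshoots the target region if pushed too hard. Returns the trajectory and
    the fraction of time steps spent inside the target region.
    """
    position, velocity = 0.0, 0.0
    trajectory, steps_on_target = [], 0
    for force in forces:
        velocity += (force / mass) * dt
        position += velocity * dt
        trajectory.append(position)
        if target[0] <= position <= target[1]:
            steps_on_target += 1
    return trajectory, steps_on_target / len(forces)

# Push toward the target, brake, then coast; braking too late (or pushing harder)
# makes the object overshoot, which is what makes the task difficult.
trajectory, score = simulate_trial([2.0] * 10 + [-2.0] * 10 + [0.0] * 30)
```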
Computational Models Known as Ideal Observers and Ideal Actors

One way in which we create Ideal Observers and Ideal Actors is through the use of Bayesian networks. An example of a Bayesian network is illustrated on the right. Bayesian networks are useful for defining the relationships among a set of random variables, such as the relationships among scene variables (describing what is in an environment) and sensory variables (describing what is in a retinal image, for example).
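A toy example of such a network (hypothetical variables and probabilities, for illustration only): a single scene variable generates two sensory variables that are conditionally independent given the scene, and an Ideal Observer inverts the network with Bayes' rule to infer the scene from the sensory evidence.

```python
# Toy Bayesian network: one scene variable ("convex" vs. "concave" surface) and
# two sensory variables (a shading cue and a texture cue), each conditionally
# independent of the other given the scene. All probabilities are made up.
prior = {"convex": 0.5, "concave": 0.5}
p_shading = {"convex": 0.8, "concave": 0.3}   # P(shading cue reports "convex" | scene)
p_texture = {"convex": 0.7, "concave": 0.4}   # P(texture cue reports "convex" | scene)

def scene_posterior(shading_says_convex, texture_says_convex):
    """Infer P(scene | sensory variables) by applying Bayes' rule to the network."""
    unnormalized = {}
    for scene, p in prior.items():
        like_shading = p_shading[scene] if shading_says_convex else 1 - p_shading[scene]
        like_texture = p_texture[scene] if texture_says_convex else 1 - p_texture[scene]
        unnormalized[scene] = p * like_shading * like_texture
    total = sum(unnormalized.values())
    return {scene: value / total for scene, value in unnormalized.items()}

# Two cues pointing the same way push the posterior strongly toward "convex".
print(scene_posterior(shading_says_convex=True, texture_says_convex=True))
```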
Cognitive Models and Machine Learning

An example of a computational model that we and our colleagues developed is illustrated on the left. It's known as a hierarchical mixtures-of-experts model. We were interested in the optimal organization of a cognitive architecture: in particular, we wanted to know whether it's preferable to have a highly modular computational system in which different modules perform different tasks, or a single monolithic system that performs all tasks. The model consists of multiple learning devices, such as multiple artificial neural networks. The model's learning algorithm uses a competitive learning scheme to adaptively partition its training data so that different devices learn different subsets of data items. When different devices learn from different data sets, they acquire different functions. Hence, the model acquires functional specializations during the course of learning.
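A minimal (non-hierarchical) sketch of the mixture-of-experts idea, with made-up toy data and learning parameters: a softmax gating network and a set of linear experts are trained together, and the responsibility-weighted updates implement the competitive scheme by which different experts come to specialize on different subsets of the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, lr, sigma2 = 2, 0.05, 0.25
W_experts = rng.normal(size=(n_experts, 2))   # per-expert weights (slope, bias)
W_gate = rng.normal(size=(n_experts, 2))      # gating-network weights (slope, bias)

# Toy task: a different linear function on each half of the input range, so one
# linear device cannot fit all the data but two specialized experts can.
X = rng.uniform(-1.0, 1.0, size=500)
y = np.where(X < 0.0, 2.0 * X + 1.0, -X)

for epoch in range(20):
    for x_raw, target in zip(X, y):
        x = np.array([x_raw, 1.0])
        logits = W_gate @ x
        gate = np.exp(logits - logits.max())
        gate /= gate.sum()                                    # mixing proportions
        preds = W_experts @ x                                  # each expert's prediction
        likelihood = np.exp(-0.5 * (target - preds) ** 2 / sigma2)
        resp = gate * likelihood
        resp /= resp.sum()                                     # responsibilities
        # Competitive updates: each expert moves toward the target in proportion
        # to its responsibility, and the gate moves toward the responsibilities.
        W_experts += lr * (resp * (target - preds))[:, None] * x
        W_gate += lr * (resp - gate)[:, None] * x
```

With these updates, the experts typically split the input range between them and the gating network routes each input to the expert that fits it best; this is the sense in which functional specializations emerge during learning.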