A FEW RESEARCH PROJECTS:

Computational models of attention
w/ Steve Piantadosi, Dick Aslin


Human infants, like immature members of any species, must be highly selective in sampling information from their environment to learn efficiently. Failure to be selective wastes precious computational resources on material that is already known (too simple) or unknowable (too complex). In experiments with 7- and 8-month-olds, we measured the duration of infants' visual attention to sequences of visual and auditory events whose complexity was quantified by an ideal learner model. Infants’ probability of looking away was greatest when the information content (negative log probability) of an event was either too high or too low according to the model. These results suggest a broad principle of infant attention: infants actively seek to maintain intermediate rates of information absorption, and avoid wasting cognitive resources on overly predictable or overly complex events.
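
A toy illustration of the key quantity in this work: the sketch below is not the published ideal learner model, but it shows how the information content (negative log probability) of each event in a stream can be scored by a learner that updates its expectations as it observes. The Dirichlet smoothing parameter and the example event sequence are illustrative assumptions only.

```python
# Minimal sketch (illustrative, not the published model): a learner that tracks
# how often each event type has occurred and scores each new event by its
# information content, i.e. negative log predictive probability.
import math
from collections import Counter

def surprisal_sequence(events, n_event_types=3, alpha=1.0):
    """Return the information content (in bits) of each event, in order."""
    counts = Counter()
    surprisals = []
    for e in events:
        # Predictive probability of event e under a Dirichlet-multinomial learner.
        p = (counts[e] + alpha) / (sum(counts.values()) + alpha * n_event_types)
        surprisals.append(-math.log2(p))
        counts[e] += 1  # update expectations after observing the event
    return surprisals

# Example: a highly repetitive stream yields low surprisal (too simple),
# whereas a rare event late in the stream yields high surprisal (too complex).
print(surprisal_sequence([0, 0, 0, 0, 0, 2]))
```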

Papers: [Kidd, Piantadosi, & Aslin 2012, PLoS ONE] [Kidd, Piantadosi, & Aslin 2010, CogSci]

Video: [ABC News] [CBS News]

Podcast: [Scientific American 60-Sec. Science]

Print media: [The New York Times]

Toddlers’ use of disfluent speech
w/ Katherine White, Dick Aslin


Adults tend to produce speech disfluencies (e.g., “uh” and “um”) before words that are infrequent or new to the discourse. Disfluencies could therefore provide young children with a cue to the identity of a speaker’s upcoming referent. We use eye-tracking and corpus studies to investigate whether one- and two-year-olds can use the information carried by speech disfluencies to infer what a speaker is about to refer to. Results suggest that a speech disfluency biases two-year-old children to expect a novel, previously unmentioned referent.

Papers: [Kidd, White, & Aslin 2011, Dev. Sci.] [Kidd, White, & Aslin 2009, CogSci]

Chapter: [Kidd, White, & Aslin 2011 in Experience, Variation, & Generalization]

Podcasts: [BBC’s The Naked Scientists] [Scientific American 60-Sec. Mind]

Print media: [MSNBC] [New Scientist] [Slate]

Decision-making in young children
w/ Holly Palmeri, Dick Aslin


Human decision-making is constrained by time, by the information available to the decider, and by the decider's finite cognitive information-processing resources. We examine how these constraints, which are quite different for young children than for adults, shape how young children make decisions. The research employs a variety of methods (behavioral tasks, eye-tracking, computational modeling) to provide converging evidence for the use and robustness of rapid statistical-learning mechanisms in rational decision-making.

Paper: [Kidd, Palmeri, & Aslin 2013, Cognition] *NEW*

Video: [CBS San Francisco]

Podcast: [Scientific American 60-Sec. Mind]

Print media: [Washington Post] [Mother Jones]

Visual attention in word learning
w/ Dick Aslin and an amazing team of video coders


Classic experimental work has identified many information sources infants may use to determine the likelihood of possible word-object pairings: for example, eye gaze, pointing, and other gestural cues. Building upon this work, and using a new method for directly measuring infants' attention (visual fixations), we employ head-mounted cameras and eye-trackers worn by both the infant and the parent during play to quantitatively evaluate how infants and parents allocate their attention during social interactions. We then build a computational model, using Bayesian methods, to assess the degree to which infants’ attention to known social cues is moderated by the information content of those cues.
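
As a rough illustration of the Bayesian model-comparison logic described above (a sketch under stated assumptions, not the lab's actual model), the snippet below compares two toy accounts of cue-directed looking: one in which the probability of fixating a social cue depends on that cue's information content, and a null account in which it does not. The logistic link, the uniform priors over the grid, and the toy data are all illustrative assumptions.

```python
# Minimal sketch: Bayesian comparison of two accounts of cue-directed looking.
# Under H1 the probability of fixating a social cue depends on the cue's
# information content; under H0 it does not. All values here are illustrative.
import numpy as np

# Toy data: information content of each cue (bits) and whether it was fixated.
info = np.array([0.5, 0.8, 1.2, 2.0, 2.5, 3.0, 3.5, 4.0])
fixated = np.array([0, 0, 0, 1, 0, 1, 1, 1])

def log_lik(a, b):
    """Bernoulli log-likelihood of the fixation data under a logistic model."""
    p = 1.0 / (1.0 + np.exp(-(a + b * info)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return np.sum(fixated * np.log(p) + (1 - fixated) * np.log(1 - p))

# Marginal likelihood by brute-force grid integration over uniform priors.
a_grid = np.linspace(-5, 5, 101)
b_grid = np.linspace(-5, 5, 101)

def log_marginal(include_slope):
    lls = []
    for a in a_grid:
        for b in (b_grid if include_slope else [0.0]):
            lls.append(log_lik(a, b))
    lls = np.array(lls)
    # log-mean-exp: average likelihood over the (uniform) prior grid
    return np.max(lls) + np.log(np.mean(np.exp(lls - np.max(lls))))

bayes_factor = np.exp(log_marginal(True) - log_marginal(False))
print(f"Bayes factor (informativeness model vs. null): {bayes_factor:.2f}")
```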

Information-seeking in non-humans
w/ Tommy Blanchard, Ben Hayden, Dick Aslin, and Dan Weiss


Non-human learners, much like human infants, must be selective in deciding which stimuli in their environment to attend to. We explore information-seeking behavior in non-human learners through two collaborations: the first examines visual preferences during free viewing, while the second investigates the relative value of more and less surprising information.


Photos: J. Adam Fenster