Ongoing Research

"Computational Methods for the Study of American Sign Language Nonmanuals Using Very Large Databases", National Institutes of Health R01-DC-014498.

Our hypothesis is that nonmanual facial articulations perform significant semantic and syntactic functions by means of a more extensive set of facial expressions than those seen in other communicative systems (e.g., speech and emotion). We will design computer algorithms that automatically (i.e., without any human intervention) detect the face, its facial features, and the movements of the facial muscles. This will allow us to test our hypothesis on very large databases of ASL nonmanuals.
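
As a rough illustration of the kind of fully automatic processing this involves (a minimal sketch using OpenCV's stock Haar-cascade face detector, not the algorithms developed in this project), the snippet below locates faces in each frame of a video; the file name asl_clip.mp4 and the detector parameters are placeholders.

```python
# Minimal sketch of automatic face detection over video frames.
# Illustration only; this is NOT the project's algorithm. "asl_clip.mp4" is a placeholder path.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("asl_clip.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one (x, y, w, h) bounding box per detected face
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        print(f"frame {frame_idx}: face at x={x}, y={y}, w={w}, h={h}")
    frame_idx += 1
cap.release()
```

In a full pipeline, each detected face region would then be passed on to facial-feature localization and facial-muscle movement analysis before any linguistic annotation.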

"Neural Mechanisms Underlying the Visual Recognition of Intent," Human Frontier Science Program.

While many individual aspects of visual perception, mirror neurons, and theory of mind have been investigated before, the exact neural and computational mechanisms for the visual analysis of social behavior and intent have not been systematically studied. The research in this program includes computational, behavioral and imaging studies.

"The Development of Categorization", National Institutes of Health R01-HD-078545-02. (Sloutsky, PI)

The goal of these studies is to advance our understanding of the development and mechanisms of categorization, a fundamental component of human intelligence. Our research is based on the hypothesis that multiple mechanisms subserve category learning: (a) an early-developing mechanism based on distributed attention and learning of within-category statistics, and (b) a later-developing mechanism based on selective attention to category-relevant information.
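
As a toy contrast between these two hypothesized mechanisms (our illustration, not the project's actual models), the sketch below classifies a new item either by its overall similarity to category prototypes computed over all features (distributed attention) or by a single category-relevant feature (selective attention); all feature values are invented.

```python
# Toy contrast between two hypothetical categorization mechanisms.
# Feature values and the "relevant" dimension are invented for illustration.
import numpy as np

# Exemplars: rows are items, columns are features; only feature 0 is category-relevant here.
cat_a = np.array([[0.9, 0.2, 0.5], [0.8, 0.7, 0.1], [1.0, 0.4, 0.6]])
cat_b = np.array([[0.1, 0.3, 0.5], [0.2, 0.6, 0.2], [0.0, 0.5, 0.7]])
new_item = np.array([0.85, 0.5, 0.4])

# (a) Distributed attention: compare to prototypes (within-category means) over ALL features.
proto_a, proto_b = cat_a.mean(axis=0), cat_b.mean(axis=0)
dist_choice = "A" if np.linalg.norm(new_item - proto_a) < np.linalg.norm(new_item - proto_b) else "B"

# (b) Selective attention: consider only the category-relevant feature (feature 0).
relevant = 0
sel_choice = "A" if abs(new_item[relevant] - proto_a[relevant]) < abs(new_item[relevant] - proto_b[relevant]) else "B"

print("distributed attention ->", dist_choice)
print("selective attention   ->", sel_choice)
```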

"A Study of the Computational Space of Facial Expressions of Emotion", National Institutes of Health R01-EY-020834.

Past research has been very successful in defining how facial expressions of emotion are produced, including which muscle movements create the most commonly seen expressions. Much less is known about how these expressions are recognized by the visual system. The overarching goal of this proposal is to define the form and dimensions of the computational space used in this visual recognition. In particular, this research program applies a computational model to specify how facial expressions of emotion are recognized in the visual system, and it will characterize the limits and variability of this ability across different groups.
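
As one hypothetical way to ask how many dimensions such a space might need (not the method specified in this project), the sketch below runs a principal component analysis over a matrix of image-derived feature vectors and counts the components required to explain most of the variance; the feature matrix here is random placeholder data.

```python
# Sketch: estimating the dimensionality of a hypothetical "expression space" with PCA.
# Real inputs would be features extracted from face images; here they are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_features = 200, 50
features = rng.normal(size=(n_images, n_features))  # placeholder feature matrix

# Center the data and compute principal components via SVD.
centered = features - features.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# Number of dimensions needed to explain 90% of the variance.
n_dims = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1
print(f"{n_dims} dimensions explain 90% of the variance")
```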

We are currently recruiting participants for this research.

Current Experiments

Emotion Processing in PTSD and ASD: Now Recruiting!

This project aims to determine how individuals diagnosed with psychiatric disorders recognize facial expressions of emotion compared to the typical population. In this research, we measure participants' abilities to categorize different facial expressions of emotion and record their brain responses to these faces. This will help us better understand the brain regions affected by each disorder and help us create tools that could lead to earlier diagnosis. When this research is finished, we hope to have a new diagnostic tool that can be easily incorporated into a doctor's or psychiatrist's assessment.

Participant Requirements
  • 18 yrs or older
  • Normal or corrected to normal visual acuity
  • Diagnosed with post-traumatic stress disorder OR autism spectrum disorder (including Asperger's)
  • No traumatic brain injury

Contact Qianli Feng at feng.559@osu.edu for information on participation.

Neural Representations of Facial Expressions of Emotion

In this project we examine how different parts of the brain represent emotional expressions. Participants will be asked to lie in the MRI scanner and respond to pictures of faces viewed on a computer display.

Participant Requirements
  • 18 yrs or older
  • Normal or corrected to normal visual acuity
  • No metal in the body
  • No claustrophobia
  • No pregnancy

Contact Qianli Feng at feng.559@osu.edu for information on participation.