Welcome to CBCSL!
Our lab focuses on the theoretical aspects of cognitive science, with a particular emphasis on vision, learning, and linguistics. We combine computational modeling with functional magnetic resonance imaging (fMRI) and behavioral data to build a broader understanding of how the human brain functions. A second emphasis of the lab is building computational systems that can interact with people in a human-like manner, which involves research on human-computer interaction and computer vision.
The importance of community
Collaboration with the Columbus community is an essential component of our research. We feel it is very important to involve the community and to give back to it. Our research gives the Columbus community a unique opportunity to contribute to science, while also providing new information that may lead to scientific advances within the community. For example, we are interested in studying how individuals from the Deaf community perceive visual information, particularly facial expressions of emotion and language. In return, our research will help us understand how interactions with the world may differ between the Deaf and hearing communities. We are also interested in studying differences in the perception of facial expressions of emotion in individuals diagnosed with post-traumatic stress disorder (PTSD) or Autism Spectrum Disorders (ASDs), relative to neurotypical individuals. This research could provide new tools to aid in the diagnosis of PTSD and ASD.
For more information on our projects, see our Projects page or check out the Featured Project below.
Interested in participating? Check out our Currently Recruiting experiments.
Facial Expressions of Emotion and Language:
We are currently researching computational models for American Sign Language (ASL). ASL is a multichannel language that uses many sources of information, such as finger and hand movements, body position, and facial expressions, among others. Our current project develops a technology that automatically detects the movement of the head and the main features of the face (eyebrows, eyes, nose, mouth, and jaw). Our goal is to find the emotional and grammatical components that differentiate between types of sentences, e.g., assertions from conditionals. The outcomes of our research will improve methods for teaching sign languages to deaf and hearing people, and will advance the development of ASL translation machines. Community involvement and feedback are critical to this development; without them we cannot improve our tools.
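As a purely illustrative sketch (not our actual system), the idea of using facial markers to separate sentence types can be shown with a toy rule: in ASL, conditionals are often marked by raised eyebrows and a head tilt sustained over the conditional clause. The feature names, thresholds, and `FrameFeatures` structure below are hypothetical stand-ins for the measurements a face tracker would produce.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class FrameFeatures:
    """Hypothetical per-frame measurements from a face tracker.

    eyebrow_raise: eyebrow height above the signer's neutral pose (0 = neutral)
    head_tilt: head pitch in degrees relative to neutral
    """
    eyebrow_raise: float
    head_tilt: float


def classify_sentence(frames, brow_thresh=0.3, tilt_thresh=5.0):
    """Toy classifier: label a clip 'conditional' when both markers
    (raised eyebrows, head tilt) are sustained across the clip on
    average; otherwise fall back to 'assertion'. Thresholds are
    illustrative, not empirically fitted."""
    avg_brow = mean(f.eyebrow_raise for f in frames)
    avg_tilt = mean(abs(f.head_tilt) for f in frames)
    if avg_brow > brow_thresh and avg_tilt > tilt_thresh:
        return "conditional"
    return "assertion"


# A short clip with a sustained eyebrow raise and head tilt
clip = [FrameFeatures(0.5, 8.0), FrameFeatures(0.6, 7.5), FrameFeatures(0.4, 9.0)]
print(classify_sentence(clip))  # conditional
```

A real system would replace the threshold rule with a model learned from annotated video, but the sketch captures the core idea: grammatical information in ASL is carried on the face as well as the hands.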
Interested in helping?
We are looking for native ASL signers (deaf or hearing) to take part in our test sessions. Participants are typically asked to watch a series of videos and classify them into different categories. Each session takes about 1 hour, and participants will be reimbursed for their time.
To participate please contact Fabian at email@example.com.