Researchers from Purdue University developed a model that can decode what the human brain is seeing by using deep learning to interpret fMRI scans from people watching videos, representing a sort of mind-reading technology.
“That type of network (convolutional neural network) has made an enormous impact in the field of computer vision in recent years,” said Zhongming Liu, an assistant professor in Purdue University’s Weldon School of Biomedical Engineering and School of Electrical and Computer Engineering. “Our technique uses the neural network to understand what you are seeing.”
According to the researchers’ paper, the new findings represent the first time such an approach has been used to see how the brain processes movies of natural scenes, a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings.
Using GTX 1080 GPUs and the cuDNN-accelerated Caffe deep learning framework, the researchers trained their convolutional neural network model on more than 11 hours of fMRI data from each of three women subjects watching 972 video clips, including clips showing people or animals in action and nature scenes. Once trained, the model decoded fMRI data from the subjects to reconstruct the videos, including clips it had never been trained on.
“For example, a water animal, the moon, a turtle, a person, a bird in flight,” said doctoral student Haiguang Wen. “I think what is a unique aspect of this work is that we are doing the decoding nearly in real time, as the subjects are watching the video. We scan the brain every two seconds, and the model rebuilds the visual experience as it occurs.”
The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called cross-subject encoding and decoding. This finding is important because it demonstrates the potential for broad applications of such models to study brain function, even for people with visual deficits.