Scientists at the University of California, Berkeley, have reconstructed the internal “movie” that plays in a person’s head. To re-create dynamic visual experiences, they used functional magnetic resonance imaging (fMRI) to measure the brain activity of volunteers (the other members of the research team) as they watched short movie clips (left panel in the video below). A computational model crunched the fMRI data to reproduce the images, as shown in the right panel.
The team, led by Shinji Nishimoto and Jack Gallant, says the technology is decades away from being able to read others’ thoughts and intentions. Long before that, however, it could become a powerful tool for communicating with people who cannot verbalize, such as stroke victims and coma patients. The visual image reconstruction study appears in the September 22 Current Biology.
The left clip is a segment of the movie that the subject viewed while in the magnet. The right clip shows the reconstruction of this movie from brain activity measured using fMRI. The reconstruction was obtained using only each subject’s brain activity and a library of 18 million seconds of random YouTube video. (In brief, the algorithm processes each of the 18 million clips through the brain model and identifies the clips that would have produced brain activity most similar to the measured brain activity. The clips used to fit the model, those used to test the model, and those used to reconstruct the stimulus were entirely separate.) Brain activity was sampled once per second, and each one-second segment of the viewed movie was reconstructed separately.
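The matching step described above can be sketched in a few lines. This is a toy illustration only, not the study’s actual pipeline: the array sizes, the random stand-in data, and the use of a simple Pearson correlation as the similarity measure are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 5,000 candidate clips standing in for the
# 18 million YouTube seconds, and 200 visual-cortex voxels.
n_clips, n_voxels = 5000, 200

# predicted[i] = the brain activity the fitted encoding model would
# predict for candidate clip i (random numbers here, as a stand-in
# for real model output).
predicted = rng.standard_normal((n_clips, n_voxels))

# measured = the actual fMRI activity recorded for one one-second
# segment of the viewed movie (also a random stand-in).
measured = rng.standard_normal(n_voxels)

def correlation_scores(predicted, measured):
    """Pearson correlation between each clip's predicted activity
    and the measured activity."""
    p = predicted - predicted.mean(axis=1, keepdims=True)
    m = measured - measured.mean()
    num = p @ m
    den = np.linalg.norm(p, axis=1) * np.linalg.norm(m)
    return num / den

scores = correlation_scores(predicted, measured)

# Keep the clips whose predicted activity best matches the measured
# activity; a reconstruction can then be formed from this top set.
top = np.argsort(scores)[::-1][:100]
```

Repeating this search once per one-second segment of brain activity yields the frame-by-frame reconstruction shown in the right panel.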