When Dr. Marvin Chun tells people he studies psychology, he is invariably asked, “So do you know what I’m thinking?” Where this question once left Chun embarrassed, he can now confidently answer, “Yes, as long as I can put you into an fMRI scanner!” In a way, Chun really has come close to mind reading.

The way we’re able to do this is to take fMRI technology, traditionally used to scan the brain for abnormalities, and use it to map which specific areas of the brain are active. We can then compare this brain image to a massive database of brain activity to infer what a person is thinking.
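The core matching idea can be sketched in a few lines. This is a toy illustration only: real decoding pipelines are far more sophisticated, and the “database” here is just a handful of made-up activation vectors, one per concept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database of labelled activation patterns (one flattened
# voxel vector per concept). All numbers are synthetic.
database = {
    "cat":  rng.normal(0.0, 1.0, size=100),
    "shoe": rng.normal(2.0, 1.0, size=100),
    "face": rng.normal(-2.0, 1.0, size=100),
}

def decode(scan):
    """Return the concept whose stored pattern best matches the scan,
    using correlation as the similarity measure."""
    return max(database, key=lambda label: np.corrcoef(scan, database[label])[0, 1])

# A new, noisy scan that resembles the stored "shoe" pattern.
new_scan = database["shoe"] + rng.normal(0.0, 0.3, size=100)
print(decode(new_scan))  # -> "shoe"
```

The bigger and more varied the database of labelled patterns, the finer the distinctions this kind of matching can draw, which is one reason the field keeps improving.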

Researchers at Yale showed people faces, then had a computer reconstruct what faces it thought people were looking at, with some pretty awesome results. At around 2:30 in the clip above, some videos reconstructed in a similar fashion are shown. It’s truly incredible.

Human brains are inherently very good at recognizing faces and spaces; we even have specialized areas of the brain devoted to those tasks. Early research focused on these face- or space-specialized areas, but there aren’t many more like them. As Dr. Chun says, “there are no shoe- or cat-specific brain areas – how do we study them?”

The answer turns out to be: get a whole bunch of nerds from various disciplines (computer vision, machine learning, etc.) involved to tease out the subtle patterns of activity that “cat or shoe stimuli” evoke. These methods started appearing around 2005 and have really gained speed in the last decade.
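A minimal sketch of what those machine-learning methods do, under made-up assumptions: no single voxel distinguishes “cat” from “shoe”, but a classifier trained on the whole activation pattern can. The data and the nearest-centroid classifier here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 50

# A subtle, distributed signal: each voxel shifts only slightly per category,
# far below the per-trial noise level.
signal = rng.normal(0.0, 0.2, size=n_voxels)

def simulate_trials(label, n):
    sign = 1.0 if label == "cat" else -1.0
    return sign * signal + rng.normal(0.0, 1.0, size=(n, n_voxels))

train_cat = simulate_trials("cat", 200)
train_shoe = simulate_trials("shoe", 200)

# Average the training patterns per category...
centroids = {"cat": train_cat.mean(axis=0), "shoe": train_shoe.mean(axis=0)}

def classify(trial):
    # ...then assign a new trial to the nearest centroid.
    return min(centroids, key=lambda c: np.linalg.norm(trial - centroids[c]))

# Held-out trials are classified well above chance despite the weak signal.
test_trials = simulate_trials("cat", 100)
accuracy = np.mean([classify(t) == "cat" for t in test_trials])
print(f"held-out accuracy: {accuracy:.0%}")
```

The point of the sketch is that pooling many weakly informative voxels yields a reliable read-out even when no “cat area” exists.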

If you’re curious about the history of brain-imaging research, here is a rough timeline, as per Chun’s TEDx talk below:

  • 2000: first study to demonstrate mind reading, at 80% accuracy.
  • 2010: decoding Yes/No responses from people in a vegetative state, with 95% accuracy.
  • 2011: showed people a bunch of video clips, then used their fMRI activity to guess which clips they were seeing.

This is literally reading out the mind.

  • 2013: using fMRI to map pain onto objective metrics, with over 93% accuracy.
  • 2013: dream reconstruction using categories (i.e., the “type” of thing the brain is seeing).
  • 2014: limiting the analysis to faces to push beyond categorical guesses, researchers were able to generate good guesses of the actual faces individuals were looking at. This leans on computer vision techniques that mathematically summarize a whole array of faces as “eigenfaces”. Accuracy is around 65–70%.
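The “eigenfaces” idea mentioned in the last entry is a principal-component summary of a face set: each face becomes a short list of coefficients instead of thousands of pixels, which gives the decoder a compact target to predict. A toy sketch, using random vectors in place of real face images:

```python
import numpy as np

rng = np.random.default_rng(2)

# 20 "faces", each flattened to 64 pixels. Real work uses actual images;
# random data is used here purely to show the mechanics.
faces = rng.normal(size=(20, 64))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The eigenfaces are the principal components of the centered face set.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:5]                      # keep the top 5 components

# Any face is now summarized by 5 numbers instead of 64 pixels...
coeffs = centered[0] @ eigenfaces.T

# ...and can be approximately reconstructed from those 5 numbers alone.
reconstruction = mean_face + coeffs @ eigenfaces
error = np.linalg.norm(faces[0] - reconstruction) / np.linalg.norm(faces[0])
print(f"relative reconstruction error with 5 eigenfaces: {error:.2f}")
```

Predicting a handful of eigenface coefficients from brain activity, then reconstructing from them, is what makes the face guesses in the 2014 work possible.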

It seems obvious that these mind reading capabilities are only going to improve. The resolution of the scans is going to improve, allowing us to get more and more specific about which areas are activating and reconstruct sharper images. The database of all possible brain activity is going to grow, allowing for better matches.

At some point in the not-too-distant future, we will be able to reliably tell what people are thinking – actual mind reading. And then what? Is it going to make society better? Are we going to be able to prevent bad behaviours by catching the intent, à la Minority Report? How do we preserve freedom of thought, as Prof. Farahany calls it?

What do you think?

Join Me!

Get weekly updates on brain tech, fitness, work, and other fun! All summaries of learnings, zero spam.