Brain scans show why our mind’s eye sees the world so differently from everyday vision


Researchers have discovered parallels between human brains and artificial neural networks that help explain why what we see in our mind's eye differs from the information our eyes take in when we look at something in reality.

With the help of an fMRI scanner and an artificial neural network, an AI system designed to mimic the brain, the new study draws parallels between the way the human brain works and the way such a computer system works.

In addition to explaining why a dog pictured in your head doesn't exactly match the image of a real dog, the findings could have important implications for investigating mental health problems and for developing artificial neural networks.

“We know that mental imagery is in some ways very similar to vision, but it can't be exactly identical,” says neuroscientist Thomas Naselaris of the Medical University of South Carolina. “We specifically wanted to know in what ways it differed.”

The team used a generative neural network (one that, given enough training data, can create images as well as identify them) and studied how it behaved as it both analyzed sample images and produced images of its own.

This analysis was compared to activity in human brains as measured by an fMRI scanner. At different stages, volunteers were asked to look at images on a screen and to conjure up mental images of their own.

Neural activity in the artificial network and the human brain coincided, at least to some extent. The researchers were able to observe similarities in the way both artificial and human neural networks passed signals between lower, more diffuse levels of cognition and higher, more precise levels.
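One common way to quantify this kind of correspondence between two very different systems is representational similarity analysis: build, for each system, a matrix of how dissimilar its activity patterns are across a set of stimuli, then correlate the two matrices. This is a minimal sketch of that idea, not necessarily the exact method used in the study; all data here are synthetic and purely illustrative.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the correlation
    between activity patterns for every pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (diagonal excluded)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Hypothetical data: 8 stimuli, 50-dimensional activity patterns.
# "brain" and "network" are noisy views of the same underlying stimuli.
rng = np.random.default_rng(0)
stimuli = rng.normal(size=(8, 50))
brain = stimuli + 0.3 * rng.normal(size=(8, 50))
network = stimuli + 0.3 * rng.normal(size=(8, 50))

s = rdm_similarity(rdm(brain), rdm(network))
print(s)  # high when the two systems organize the stimuli similarly
```

A correlation near 1 would mean the two systems treat the same pairs of stimuli as similar or dissimilar; shuffling one set of patterns would drive it toward 0.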

In the human brain, looking at something involves precise signaling from the retina of the eye to the visual cortex of the brain. When we're merely imagining something, that signaling becomes blurrier and less precise.

“When you're imagining, brain activity is less precise,” says Naselaris. “It's less tuned to the details, which means that the kind of fuzziness and blurring you experience in your mental images has some basis in brain activity.”

An example of a training image for a neural network. (Zachi Evenor/Wikimedia Commons/CC BY 4.0)

A computer-generated comparison image. (Guenther Noack/Wikimedia Commons/CC BY 4.0)

Neural activity in parts of the brain beyond the visual cortex also seems to coincide for imagined and viewed images, a link that could help scientists better understand how our brains can suffer and recover from trauma.

In the case of post-traumatic stress disorder (PTSD), for example, those affected are often disturbed by intrusive memories and images in their minds. If scientists can understand why these imagined images are so vivid, they may be able to find ways to counteract them.

The researchers acknowledge that there are limitations and alternative explanations for their results. For example, volunteers may not have been recalling specific images so much as general subjects. It is practically impossible to determine exactly what the visual representation of an image looks like in the brain, leaving room for interpretation.

Still, the study offers plenty of interesting data on how images are represented inside our heads in terms of neural activity, and on how artificial neural networks might be trained to better emulate the same trick.

“The extent to which the brain differs from what the machine is doing gives you some important clues about how brains and machines differ,” says Naselaris. “Ideally, they can point in a direction that could help make machine learning more brain-like.”

The research has been published in Current Biology.