The human brain can take in a scene and make sense of it in virtually no time at all. “We are a very successful species at that,” says Bruce Hansen, associate professor of psychology and neuroscience. “You can understand a situation in less time than it takes to blink your eyes.” For all of their computational power, computers have lagged far behind humans in their ability to take in and categorize visual information.

That may be changing, however, with the construction of a new class of computer processing network, designed to mimic the way the human brain processes images. Called a convolutional neural network (CNN), the system consists of a series of artificial neurons connected to each other in layers, with each layer passing information to the next to create an increasingly sophisticated analysis of a visual scene. “They have kind of taken computer vision by storm,” says Hansen. “For constrained categorization tasks, they perform at the level humans do.”

Despite the promise of CNNs, however, scientists have only a vague understanding of whether they accurately portray how our own brains process vision, or whether they follow a totally different computerized path. In a series of studies, Hansen has been looking under the hood of these networks to analyze just how they work — and how they might help us better understand the workings of our own human brains. In one study, published in July 2018 in PLOS Computational Biology, Hansen and Michelle Greene of Bates College found that CNNs in fact process images in stages that correspond uncannily to those in the human brain, not only in their order, but also in their timing.


For their study, the researchers assembled some 2,000 photographs in various categories — some general and some very specific — including woodlands, roads, bamboo forest, and youth hostel. They hooked up 14 human subjects to an EEG (electroencephalogram) device to monitor how signals progressed through their brains as they sorted the images into categories. They then fed the same images to a CNN. They found that the timing of signals progressing through the different layers of the computerized brain corresponded to that of signals in the human brain. “As you go deeper into the layers of the computer network, those layers map out onto increasingly later signals from the brain,” Hansen says.

The computer networks, Hansen says, progress through a logical series of steps in order to accurately categorize images. In the first layers, the network determines the lines and edges in a scene, without regard to where one object begins and another ends; next, it gathers information on texture; then it begins identifying actual objects before eventually deciding on the broader category.
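The first of those steps — detecting lines and edges with no regard for object boundaries — is the core operation of a CNN's earliest layers: convolving the image with small filters. The sketch below illustrates the idea on a toy image and a standard Sobel-style filter; the image, filter, and helper function are illustrative assumptions, not the networks used in the study.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (no padding): the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is a weighted sum of a small image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy scene: dark on the left, bright on the right (a vertical edge).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A Sobel-style filter that responds strongly to vertical edges.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-2.0, 0.0, 2.0],
                          [-1.0, 0.0, 1.0]])

response = conv2d(image, vertical_edge)
print(response)  # large values only where the edge sits, zero elsewhere
```

A real CNN learns hundreds of such filters per layer rather than using hand-designed ones, and later layers combine their responses into textures, objects, and categories.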


Associate Professor of Psychology and Neuroscience Bruce Hansen

For another set of experiments, Hansen has reverse-engineered the process, starting from the human brain: feeding the results from human EEGs into computational networks based on machine-learning algorithms to see if they can arrive at the correct categories for images they never actually see. After training the networks with sample results from the brain scans, Hansen has found so far that the networks can accurately predict the categories behind other scans they are fed. “You can ask questions about what mistakes the networks are making, and whether they correlate with the same mistakes the brain is making,” Hansen says.
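The decoding idea described above can be sketched in miniature: learn what the brain's response to each scene category typically looks like, then ask the model to classify responses it never saw during training. Everything here is a stand-in — the feature vectors are synthetic, and the nearest-centroid classifier is a deliberately simple substitute for the study's actual machine-learning models.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_responses(center, n):
    """Synthetic stand-ins for EEG feature vectors, clustered by category."""
    return center + 0.1 * rng.standard_normal((n, center.size))

# Hypothetical category "signatures" (real EEG features are far richer).
centers = {"woodlands": np.array([1.0, 0.0, 0.0]),
           "roads":     np.array([0.0, 1.0, 0.0])}

# Train: estimate one mean response per category (a nearest-centroid model).
trained = {cat: make_responses(c, 20).mean(axis=0)
           for cat, c in centers.items()}

def predict(response):
    """Assign a response to the category whose mean response it is closest to."""
    return min(trained, key=lambda cat: np.linalg.norm(response - trained[cat]))

# Test on held-out responses the model never saw during training.
held_out = make_responses(centers["roads"], 5)
print([predict(r) for r in held_out])  # → ['roads', 'roads', 'roads', 'roads', 'roads']
```

The analysis Hansen describes goes a step further: beyond accuracy, one can compare the classifier's confusion pattern — which categories it mixes up — against the confusions the brain itself makes.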

The computer networks aren’t functioning exactly like their grey-matter counterparts, however. While human brains seem to go through a similar pattern of determining lines, textures, and objects in order to decode a visual scene, they also seem to jump quickly to examining what can be done functionally with the objects in a scene. “At the same time we are building a structural representation, we are also realizing it is navigable — that we can do something with it,” Hansen says.

Hansen is continuing to explore how computer networks can replicate the complexity of the human brain. While his research remains at the level of basic science, it could eventually lead to the creation of computer algorithms that accurately model how humans process visual information. That could, in turn, be used to test hypotheses about human perception — including neurodegenerative disorders that might affect visual processing and memory — before testing with animals and human subjects. “If we can understand how we are mapping visual input from low-level to high-level organization,” Hansen says, “it has the potential to inform us about how we map things in memory, including how that process deteriorates.”