A new study from MIT cognitive neuroscientists has shown that, given the right setting, the human brain can record an amazing amount of information.
In the study, the results of which could have implications for artificial intelligence and for understanding memory disorders, people viewed thousands of objects over five hours. Remarkably, afterward they were able to remember each object in great detail. […]
The new results suggest that visual capacity is several orders of magnitude higher than the older study implied. “If you encode a lot of detail for each object, you need a lot more space,” Alvarez said.
Earlier studies had shown that people could remember a lot, but it was assumed that they did so by remembering abstract descriptions without much detail. In this study, people not only remembered thousands of images (a success rate of around 90% after seeing each image for 3 seconds), but also many details about them (a kitchen cabinet with the door ajar, a glass of water 2/3 full, etc.), and could pick out the one they had seen even when it was shown alongside a slightly altered version.
According to the researchers, two things helped people perform better: telling them to actively try to remember details, and showing them familiar objects (a remote control rather than abstract art).
The former probably just confirms our intuition that we remember better when we make a conscious effort, and the latter probably means that we don’t create a completely new memory when we can reuse the invariant parts of already-existing concepts. In other words, memory seems to be modular: it’s easier to store a pointer to an existing module for “chair”, plus extra information for “what type of chair”, “what color”, “seen from what angle”, etc., than to create a whole new memory from scratch (as for an abstract painting).
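To make that pointer analogy concrete, here is a minimal sketch in code. Everything here (the class names, the “delta” dictionary) is my own illustration of the idea, not anything from the study: a new memory is modeled as a reference to a shared concept plus only the details that distinguish this instance, while an unfamiliar object has to carry everything itself.

```python
# Illustrative analogy only: a "modular" memory as a pointer to an
# existing concept plus a small dictionary of distinguishing details.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Concept:
    """A familiar, already-stored concept (e.g. 'chair')."""
    name: str


@dataclass
class Memory:
    """A new memory: a reference to a known concept plus cheap deltas."""
    concept: Concept                              # pointer to the shared module
    details: dict = field(default_factory=dict)   # only what's new this time


# Two chair memories reuse the same shared concept and store only deltas:
CHAIR = Concept("chair")
m1 = Memory(CHAIR, {"type": "rocking", "color": "red", "angle": "front"})
m2 = Memory(CHAIR, {"type": "office", "color": "black"})

# An abstract painting has no reusable module, so every feature must be
# stored from scratch -- a much larger per-memory cost:
m3 = Memory(Concept("abstract painting"),
            {"stroke_1": "red arc", "stroke_2": "blue blot",
             "stroke_3": "yellow line", "stroke_4": "grey wash",
             "stroke_5": "black dot"})

# Both chair memories point at one concept object; each pays only for
# its handful of distinguishing details.
assert m1.concept is m2.concept
assert len(m1.details) < len(m3.details)
```

The point of the sketch is just the cost asymmetry: when a shared module exists, a new memory is a cheap reference plus a few attributes; without one, every feature has to be encoded fresh.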
This is also why it’s much easier to remember what someone said in a language you understand than in one you don’t. In the first case, you just create modules pointing to already-existing modules for words and concepts, and the task is further simplified because we have evolved brain hardware that makes language processing easier (the equivalent of a DSP chip in electronics?). For a foreign language, you’d have to create many more modules to remember, phonetically, all the sounds you heard in the right order, a task for which we don’t seem to have dedicated brain hardware.
I’m just speculating based on my limited knowledge of cognitive science. I’m sure a lot more is known about the above, and I’m looking forward to reading about it in the neuroscience books in my “to read” pile.
These results establish a new bound on the size of human memory, and give credence to artificial intelligence approaches that depend primarily on a large memory capacity.
This certainly has big implications for those trying to create AI by modeling the human brain. Probably less so for those attempting to design AI from scratch, since they have a much larger design space to explore.
Source: MIT News