Cognitive science research indicates that humans use a form of three-dimensional modeling to perceive and act in their immediate environment. Now researchers at MIT are using this approach to improve artificial intelligence's ability to interact with the real world. Professor Josh Tenenbaum, of MIT's Department of Brain and Cognitive Sciences, argues that AI that learns to visualize in 3D will have an immense impact on the development of robots. He explains:
“This is definitely something we’re going to need if we’re going to have robots that interact with the physical world. They have to be able to deal with the fact that the physical world is three-dimensional, and it has stuff in it.”
AI that can learn like a child
Artificial intelligence is not at all smart. As a matter of fact, a three-month-old baby is smarter than any AI system, which is something that researchers at MIT have acknowledged. The team has started exploring the fundamentals of human intelligence, with the aim of first "reverse-engineering human intelligence" and, in the longer term, building a machine that learns like a baby, and eventually like a child. To accomplish this, MIT has announced the Intelligence Quest (or MIT IQ), an initiative that brings together 200 experts from different fields to focus on this area of research and find solutions.