Making machines learn about the world like a child – AI

This article was first published on our sister site, The Internet Of All Things.

A team of MIT, IBM, and DeepMind researchers has created a computer program called a neuro-symbolic concept learner (NS-CL) that learns about the world much as a child might, though in a simpler manner: by looking around and talking.

The team was led by Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds, and Machines.

According to The MIT Technology Review, the system is made up of several parts. One neural network is trained on a series of scenes, each containing a small number of objects. Another neural network is trained on a series of text-based question-answer pairs about those scenes, such as: "Q: What's the color of the sphere?" "A: Red."
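To make that concrete, here is a minimal, hypothetical sketch of the kind of training data described above: a small scene of objects with visual attributes, paired with natural-language question-answer pairs. The field names are purely illustrative and are not the researchers' actual data format.

```python
# Hypothetical training data in the spirit of the description above:
# a scene is a small set of objects with visual attributes, and each
# scene is paired with natural-language question-answer pairs.

scene = [
    {"shape": "sphere", "color": "red",  "size": "large", "position": (0.2, 0.5)},
    {"shape": "cube",   "color": "blue", "size": "small", "position": (0.7, 0.3)},
]

qa_pairs = [
    {"question": "What's the color of the sphere?", "answer": "red"},
    {"question": "How many objects are small?",     "answer": "1"},
]
```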

This network learns to map the natural language questions to a simple program that can be run on a scene to produce an answer, said the post.
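As a rough illustration of that idea (and not the NS-CL implementation itself), a question such as "What's the color of the sphere?" could correspond to a tiny program of filter and query steps that is executed over the scene's object list. The operation names below are made up for the sketch.

```python
# Toy executor: a question is parsed into a small program (a list of steps),
# and the program is run over the scene's objects to produce an answer.

scene = [
    {"shape": "sphere", "color": "red",  "size": "large"},
    {"shape": "cube",   "color": "blue", "size": "small"},
]

def run_program(program, objects):
    """Execute (operation, argument) steps over a list of scene objects."""
    for op, arg in program:
        if op == "filter_shape":
            objects = [o for o in objects if o["shape"] == arg]
        elif op == "filter_color":
            objects = [o for o in objects if o["color"] == arg]
        elif op == "query_color":
            return objects[0]["color"] if objects else None
        elif op == "count":
            return len(objects)
    return objects

# "What's the color of the sphere?" might be parsed into:
program = [("filter_shape", "sphere"), ("query_color", None)]
print(run_program(program, scene))  # -> red
```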

The NS-CL system is also programmed to understand symbolic concepts in text such as "objects," "object attributes," and "spatial relationships." That knowledge helps NS-CL answer new questions about a different scene, a feat that is far more challenging using a connectionist approach alone. The system thus recognises concepts in new questions and can relate them visually to the scene before it.
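A hypothetical sketch of what that generalisation looks like: once a symbolic notion of a spatial relationship such as "left of" exists, it can be applied to a completely new scene. The coordinates and helper functions below are invented for illustration only.

```python
# Hypothetical example of a symbolic spatial relationship ("left of")
# being reused on a brand-new scene.

new_scene = [
    {"shape": "cylinder", "color": "green",  "position": (0.1, 0.4)},
    {"shape": "cube",     "color": "yellow", "position": (0.8, 0.6)},
]

def left_of(a, b):
    """True if object a sits to the left of object b (smaller x coordinate)."""
    return a["position"][0] < b["position"][0]

def answer_left_of(objects, shape_a, shape_b):
    """Answer 'Is the <shape_a> to the left of the <shape_b>?' for any scene."""
    a = next(o for o in objects if o["shape"] == shape_a)
    b = next(o for o in objects if o["shape"] == shape_b)
    return "yes" if left_of(a, b) else "no"

print(answer_left_of(new_scene, "cylinder", "cube"))  # -> yes
```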

This hybrid system addresses key limitations of both earlier approaches by combining them. First, it overcomes the scalability problems of symbolic AI, which has historically struggled to encode the complexity of human knowledge efficiently. Second, it tackles one of the most common problems with neural networks: their need for huge amounts of training data.
