How Do We Teach Machines to Reason?

As we all know, machine learning and deep learning are both grounded in statistics built into the system, and that reliance on statistical patterns, rather than reasoning, remains the biggest difference between human beings and machines.

Recently, however, researchers and labs have been running experiments to teach machines how to reason. Isn't that fascinating?


Here is one example: researchers at the MIT-IBM Watson AI Lab trained a hybrid AI model to answer questions like "Does the red object left of the green cube have the same shape as the purple matte thing?" by feeding it examples of object colors and shapes, followed by complex scenarios involving multi-object comparisons. The model could transfer this knowledge to new scenarios as well as or better than state-of-the-art models, using a fraction of the training data.
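To make the setup concrete, here is a minimal Python sketch of the underlying idea, not the lab's actual code: assume a perception module has already reduced the scene to a list of objects with attributes (here, "matte" is modeled as a rubber material, following the CLEVR convention), and the question is executed as a short chain of symbolic operations over that list.

```python
# Hypothetical object-based scene, as a perception module might output it.
scene = [
    {"id": 0, "color": "red",    "shape": "sphere", "material": "rubber", "x": 1},
    {"id": 1, "color": "green",  "shape": "cube",   "material": "rubber", "x": 4},
    {"id": 2, "color": "purple", "shape": "sphere", "material": "rubber", "x": 6},
]

def filter_attr(objects, attr, value):
    """Keep only the objects whose attribute matches the value."""
    return [o for o in objects if o[attr] == value]

def left_of(objects, anchor):
    """Keep only the objects positioned left of the anchor object."""
    return [o for o in objects if o["x"] < anchor["x"]]

# "Does the red object left of the green cube have the same shape
#  as the purple matte thing?"
green_cube = filter_attr(filter_attr(scene, "color", "green"), "shape", "cube")[0]
red_left   = filter_attr(left_of(scene, green_cube), "color", "red")[0]
purple     = filter_attr(filter_attr(scene, "color", "purple"), "material", "rubber")[0]

print(red_left["shape"] == purple["shape"])  # True: both are spheres
```

Each operation narrows the set of candidate objects, so the final comparison only ever touches the objects the question actually refers to.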

This is just the beginning. Researchers are combining statistical and symbolic artificial intelligence techniques to speed up learning and improve transparency.

Deep learning systems interpret the world by picking out statistical patterns in data. This form of machine learning is now everywhere, automatically tagging friends on Facebook, narrating Alexa's latest weather forecast, and delivering fun facts via Google search. But statistical learning has its limits. It requires tons of data, has trouble explaining its decisions, and is terrible at applying past knowledge to new situations: it can't comprehend an elephant that's pink instead of gray.

To give computers the ability to reason like human beings, AI researchers are returning to abstract, or symbolic, programming. Popular in the 1950s and 1960s, symbolic AI wires in the rules and logic that allow machines to make comparisons and interpret how objects and entities relate. Symbolic AI uses less data, records the chain of steps it takes to reach a decision, and, when combined with the brute processing power of statistical neural networks, can even beat humans in a complicated image comprehension test.

As the exercise above shows, the study is a strong argument for moving back toward abstract-program approaches. The trick is to add more symbolic structure and to feed the neural networks a representation of the world divided into objects and their properties, rather than raw images. This work gives us insight into what machines need to understand before language learning is possible.

Key to the team's approach is a perception module that translates the image into an object-based representation, making the programs easier to execute. Also unique is its use of curriculum learning: selectively training the model on concepts and scenes that grow progressively more difficult. Feeding the machine data in a logical order rather than haphazardly helps the model learn faster while improving accuracy.
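The exact schedule the team used isn't described here, but a hedged sketch of curriculum ordering in Python might look like this: score each training example with a hypothetical difficulty measure and present the easy ones first.

```python
def difficulty(example):
    """Hypothetical difficulty score: more objects in the scene and
    longer question programs make an example harder."""
    return len(example["scene"]) + len(example["program"])

def curriculum(examples):
    """Return examples sorted from easiest to hardest."""
    return sorted(examples, key=difficulty)

examples = [
    {"scene": ["cube", "sphere", "cylinder"], "program": ["filter", "relate", "filter", "compare"]},
    {"scene": ["cube"],                       "program": ["filter", "query_color"]},
    {"scene": ["cube", "sphere"],             "program": ["filter", "compare"]},
]

for ex in curriculum(examples):
    # train_step(model, ex)  # placeholder for an actual training update
    print(difficulty(ex), ex["program"])
```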

Although statistical, deep learning models are now embedded in daily life, much of their decision-making process is hidden from view. This lack of transparency makes it difficult to anticipate where a system is susceptible to manipulation, error, or bias. Adding a symbolic layer can open the black box, which explains the growing interest in hybrid AI systems.
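To see why a symbolic layer helps, consider this small Python sketch with hypothetical operations: because each reasoning step is an explicit program operation, it can be logged, and the answer arrives with an inspectable trace rather than a wall of opaque weights.

```python
trace = []

def step(name, objects):
    """Record the operation name and which objects survive it."""
    trace.append(f"{name} -> {[o['color'] for o in objects]}")
    return objects

scene = [{"color": "red", "shape": "cube"}, {"color": "blue", "shape": "sphere"}]

result = step("filter shape=cube",
              [o for o in scene if o["shape"] == "cube"])
result = step("filter color=red",
              [o for o in result if o["color"] == "red"])

print("answer:", len(result) > 0)   # True
print("\n".join(trace))             # the chain of steps behind the answer
```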


The MIT-IBM team is now working to improve the model's performance on real-world photos and to extend it to video understanding and robotic manipulation.


Source: MIT News.