Creating artificial intelligence that ‘thinks’ like humans

Human-like AI requires more than just copying human behavior: to be fully trusted, it must also be able to analyze information and “think” like humans. New research from the University of Glasgow’s School of Psychology and Neuroscience, published in the journal Patterns, uses 3D modeling to examine how Deep Neural Networks, a subset of machine learning, process information, and shows how closely their processing resembles that of humans.

The researchers anticipate that this new work will pave the way for more trustworthy AI systems that analyze data as humans do and whose errors are understandable and predictable.

One of the open problems in AI research is understanding how machines “think” and whether that processing matches how humans absorb information, which matters for trusting their accuracy. Deep Neural Networks are often presented as the current best model of human decision-making behavior, capable of matching or even outperforming humans on particular tasks. Yet even seemingly basic visual discrimination tests can reveal clear inconsistencies and mistakes in AI models when compared with humans.

While Deep Neural Network technology is already employed in applications such as facial recognition, scientists still do not fully understand how these networks process information and, as a result, when mistakes may arise.

In this new study, the researchers addressed this issue by modeling the visual stimuli given to the Deep Neural Network and transforming them in various ways, which let them test whether humans and the AI model recognized the faces by processing comparable information.
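
To make the perturbation logic concrete, here is a minimal sketch, assuming a hypothetical generative face model and network: render_face, identity_score, and every parameter below are illustrative stand-ins, not the study’s actual pipeline. The idea is to vary the stimulus along individual dimensions and record which changes move the model’s identity judgment.

```python
# Sketch: probe which stimulus dimensions a model's identity judgment
# depends on, by perturbing the parameters of a generative face model.
# All names here are hypothetical stand-ins for illustration.

import numpy as np

rng = np.random.default_rng(0)

def render_face(params):
    """Hypothetical stand-in for a 3D generative face model: maps shape
    parameters to an image-like vector. Here it is just a placeholder."""
    return np.tanh(params)

def identity_score(image, weights):
    """Hypothetical stand-in for a trained network's similarity score for
    one identity (a single linear readout, for illustration only)."""
    return float(image @ weights)

n_params = 50
base = rng.normal(size=n_params)      # parameters of a reference face
weights = rng.normal(size=n_params)   # stand-in "network"

# Perturb each parameter in turn and measure how the identity score moves.
sensitivity = np.zeros(n_params)
base_score = identity_score(render_face(base), weights)
for i in range(n_params):
    perturbed = base.copy()
    perturbed[i] += 0.5               # small change along one dimension
    sensitivity[i] = abs(identity_score(render_face(perturbed), weights)
                         - base_score)

# Highly sensitive parameters are the "information" this model relies on;
# the study's logic is to compare such a profile with one measured for
# human observers on the same stimuli.
print("Most influential parameters:", np.argsort(sensitivity)[-5:][::-1])
```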

Professor Philippe Schyns, senior author of the study and Director of the Institute of Neuroscience and Technology at the University of Glasgow, said: “When creating AI models that behave ‘like’ humans, such as recognizing a person’s face whenever they see it, as a human would, we must ensure that the AI model uses the same information from the face that another human would use to recognize it. If the AI doesn’t do this, we may get the impression that it operates just like humans, only to discover that it makes mistakes in new or untested situations.”

The researchers used a set of modifiable 3D faces and asked human participants to rate how similar these faces were to four familiar identities. They then used this data to test whether Deep Neural Networks produced the same ratings for the same reasons, checking not only that humans and AI made the same judgments, but that those judgments were based on the same information. Importantly, the researchers’ technique lets them visualize, as 3D faces, the information that drives human and network behavior. For example, a network that correctly classified 2,000 identities was driven by a heavily caricatured face, demonstrating that it identified the faces by processing very different facial information than humans do.
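
The comparison step can be sketched in the same hypothetical setup. The toy example below, using simulated data and made-up weight profiles, illustrates the study’s central distinction: two observers can produce similar judgments (behavioral agreement) while relying on different stimulus information (information agreement).

```python
# Sketch: compare human and network similarity ratings for the same faces,
# and separately compare the information (weight profiles) behind them.
# The data and rating functions are simulated for illustration only.

import numpy as np

rng = np.random.default_rng(1)
n_faces, n_params = 200, 50

faces = rng.normal(size=(n_faces, n_params))   # parameters of test faces

# Simulated rating functions: humans and the network each weight the face
# parameters differently when judging similarity to a target identity.
human_weights = rng.normal(size=n_params)
net_weights = human_weights + rng.normal(scale=1.0, size=n_params)

human_ratings = faces @ human_weights
net_ratings = faces @ net_weights

# Agreement on judgments: correlation of the two rating profiles.
behaviour_r = np.corrcoef(human_ratings, net_ratings)[0, 1]

# Agreement on the information used: correlation of the weight profiles.
information_r = np.corrcoef(human_weights, net_weights)[0, 1]

print(f"behavioral agreement r = {behaviour_r:.2f}")
print(f"information agreement r = {information_r:.2f}")
```

Runs of this sketch typically show moderate behavioral agreement despite the divergent weights, which is exactly why the study checks the underlying information and not just the final judgments.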

The researchers believe that this work will pave the way for more reliable AI technology that behaves more like humans and makes fewer unpredictable errors.
