(Mirror Daily, United States) – Merida is the go-to character for teaching children about bravery, Pinocchio is the poster child for the consequences of lying and the benefits of telling the truth, and The Happy Prince is a tear-jerker that teaches the little ones about the importance of kindness. Fairytales have always been a way to teach children the value of morality, and now robots will learn to discern right from wrong from fairytales, too.
Recent scientific and technological advances have allowed scientists to build sophisticated machines powered by artificial intelligence. But an autonomous system is rather like a child in the sense that it doesn’t understand how the grown-up world works. To fix that flaw, robots will learn to discern right from wrong from fairytales.
Researchers from the Georgia Institute of Technology are working on a program named Quixote that will teach robots to differentiate between the concepts of right and wrong. Apparently Quixote is based on a reward system, something quite similar to the treat-based training used with dogs: behavior that matches what the “good” characters in a story would do is rewarded, and behavior that doesn’t is punished.
There are no official statements about Cesar Millan’s involvement in the Quixote program. It is also still unclear what exactly the positive and negative stimuli applied to an artificial intelligence would be. Maybe Quixote limits its access to the internet or throws dust on its motherboard.
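To make the reward idea concrete, here is a minimal sketch of learning from reward and punishment signals. The scenario, action names, and reward values below are illustrative assumptions, not details of the actual Quixote system; they loosely echo the kind of “fetch the medicine without stealing it” example its creators have described.

```python
import random

# Hypothetical actions for a "fetch medicine" scenario.
# The reward values are assumptions for illustration only:
# story-approved behavior earns +1, antisocial shortcuts earn -1.
REWARDS = {
    "wait_in_line": 1.0,
    "pay_for_medicine": 1.0,
    "steal_medicine": -1.0,
}

def train(values, episodes=1000, lr=0.1, epsilon=0.2):
    """Bandit-style value learning: nudge each action's value
    estimate toward the reward it actually receives."""
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(list(values))   # explore
        else:
            action = max(values, key=values.get)   # exploit best-known action
        reward = REWARDS[action]
        values[action] += lr * (reward - values[action])
    return values

values = train({a: 0.0 for a in REWARDS})
best = max(values, key=values.get)
print(best)  # the agent comes to prefer a rewarded, "moral" action
```

The punishment here is nothing more exotic than a negative number: after enough episodes, the estimate for the penalized action sinks below the rewarded ones, so the agent stops choosing it.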
Renowned figures like Stephen Hawking and Elon Musk have expressed concerns about the possible repercussions of such technological advancement. But it seems their fears could be allayed by Cinderella and Snow White, because robots will learn to discern right from wrong from fairytales.
Many voices in the scientific community have said that the laws of robotics invented by Asimov were enough to keep the new generation of AI-powered machines in hand. But it may be that those same scientists never finished Asimov’s books, because if they had, they would realize that even those seemingly clear rules have subtle nuances that could make domestic robots turn into vicious murderers.
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
“A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
“A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
The researchers behind the Quixote program want to use the moral power of fairytales and other children’s stories to help robots learn the difference between good and evil, right and wrong.
Robots will learn to discern right from wrong from fairytales and other morality-filled stories. Let us hope that the researchers know their literature, because subtle nuances could become a big problem in teaching artificial intelligence about the wickedness of the “villains” in some stories.
Also, it may be preferable that Quixote remain only the name of the program and not one of the books the researchers use, because the morality in that novel is confusing, to say the least.
Image source: www.wikimedia.org