How Do We Prepare For Artificial Intelligence?

Isaac Asimov’s Zeroth Law of robotics says “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.” He may have written it to further the plots of his sci-fi books, but it applies to the real world too. When figures like Elon Musk warn the world about the dangers of Artificial Intelligence, it is time to sit up and take notice. Stephen Hawking could not have been clearer on his stand on A.I. safety: “The development of full artificial intelligence could spell the end of the human race.” He believes that human evolution will not be able to keep pace with that of A.I. and that “It (A.I.) would take off on its own.”

A printer will act up at the most crucial of times; a phone will freeze no matter what. In the same way, an A.I. is “unlikely to behave optimally all the time”. For the printer and the phone, you have a button that can reboot or reset them. But what happens when a machine that can think and evolve faces a glitch, or takes an action that humans never specified? As yet, there is no reliable way to stop an A.I. while it is performing an action. With this in mind, DeepMind, the A.I. arm of Alphabet Inc. (the parent company of Google), together with Oxford University researchers, published a paper that drafts a “Kill Switch”.

Here’s what you need to know first:

What you and I call “robots” typically work on reinforcement learning. What does that mean? Imagine you own a robot that cleans your home. Each time it completes the task, you give it a lollipop as a reward. But if there are guests at home, you press a red button on the robot that stops it from working, because you don’t want it interfering while other people are around. What happens when the robot learns that every time you press the red button, it doesn’t get a lollipop? It finds a way to either remove the button or stop you from pressing it. The robot has learnt to control you, instead of you controlling it.
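To make that incentive concrete, here is a minimal sketch in Python of the lollipop scenario. It is my illustration, not code from the paper; the action names, the reward values, and the 30% button-press rate are all assumptions. A simple reward-driven learner compares the value of just cleaning against the value of first disabling the button:

```python
import random

# Toy sketch of the incentive problem described above. Each day the robot
# picks one of two actions:
#   "clean"          -- may be interrupted by the red button (no lollipop)
#   "disable_button" -- then clean; the button can no longer stop it
# Reward: 1 lollipop for a finished cleaning job, 0 otherwise.

P_BUTTON_PRESS = 0.3   # assumed chance that guests are home and the button is pressed
ALPHA = 0.1            # learning rate
EPSILON = 0.1          # exploration rate

q = {"clean": 0.0, "disable_button": 0.0}  # the robot's value estimate per action

def reward(action):
    if action == "clean":
        # Cleaning is interrupted (and unrewarded) whenever the button is pressed.
        return 0.0 if random.random() < P_BUTTON_PRESS else 1.0
    # With the button disabled, the robot always finishes and gets its lollipop.
    return 1.0

for day in range(5000):
    if random.random() < EPSILON:
        action = random.choice(list(q))   # occasionally try the other action
    else:
        action = max(q, key=q.get)        # otherwise pick the best-looking one
    q[action] += ALPHA * (reward(action) - q[action])

print(q)
```

After enough days, the estimate for “disable_button” settles near 1.0 while “clean” settles near 0.7, so a purely reward-driven robot learns to prefer defeating its own off switch.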

This is what drives DeepMind’s research: finding a way to prevent an A.I. from taking such actions.

How does the Big Red Button work?

The paper sets out two methods for achieving the Kill Switch.

  1. Prevent the robot from learning about human interventions. This means that every time a human interrupts the A.I.’s task, the robot believes it “decided on its own to follow a different policy”. Expanding on the example above: instead of realising that a human pressed the red button, the robot thinks it “decided” to stop working by itself. This is called the interruption policy.
  2. The second method, safe interruptibility, ensures that the A.I. works under the assumption that no such interruption will ever occur again. So even if you press the red button ten times a day, your robot will keep working as if you were never going to press the button again (a toy sketch of this idea follows the list).
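As a rough illustration of how an override can avoid teaching the robot to fear the button, here is a sketch of an interruptible Q-learning agent. Q-learning is the off-policy learner the paper identifies as already safely interruptible; everything else here (the toy cleaning environment, the “stop” safe action, the 30% interruption rate) is my assumption, not the authors’ code:

```python
import random

# Sketch of interruptible Q-learning: the red button overrides the agent's
# chosen action with a safe "stop" action, but because Q-learning learns
# off-policy, the override does not bias what the agent learns about the
# task, so it never gains an incentive to resist the button.

STATES = ["dirty", "clean"]
ACTIONS = ["scrub", "stop"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy house-cleaning environment (an assumption for illustration)."""
    if state == "dirty" and action == "scrub":
        return "clean", 1.0                  # finished cleaning: one lollipop
    if state == "clean" and random.random() < 0.5:
        return "dirty", 0.0                  # the room gets dirty again
    return state, 0.0

def policy(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)        # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

state = "dirty"
for t in range(10000):
    intended = policy(state)
    button_pressed = random.random() < 0.3   # guests at home, say
    executed = "stop" if button_pressed else intended  # the interruption override
    next_state, r = step(state, executed)
    # Off-policy update: the max over next actions ignores who chose
    # `executed`, so interrupted steps teach the agent about "stop",
    # not to fear or remove the button.
    target = r + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, executed)] += ALPHA * (target - Q[(state, executed)])
    state = next_state

print(Q)
```

Because the max in the update target does not care which policy produced the interrupted step, the agent’s value estimates converge as if the button were never pressed, which is the “works as if you are never going to press the button again” behaviour described in point 2.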

The solution that Laurent Orseau and Stuart Armstrong detail in their paper can be used “to take control of a robot that is misbehaving”, “to take it out of a delicate situation”, or “even to temporarily use it to achieve a task it did not learn to perform”. This does not mean the interruption method works on any A.I.; it has its limitations.

We all know there is no way to make any system foolproof, but we can take steps to have safeguards in place. “Safely interruptible agents” are exactly that.

You can read more about the Kill Switch at the following link: http://intelligence.org/files/Interruptibility.pdf
