Google and Oxford University Are Developing a "Kill-Switch" for Unruly AI
If you’re like most people, chances are that every time you hear a story about AI you think “but what about Terminator?”. Turns out, the scientists are right there with you. DeepMind, the AI division of Google, has teamed up with Oxford University to develop a “kill switch” for AI, should it become unruly in the future.
Scientists Laurent Orseau, of DeepMind, and Stuart Armstrong, of Oxford University, believe that future intelligent machines may one day learn to ignore human orders. Not necessarily as a dystopian doomsday thing, but simply as a natural consequence of how AI works: one day, a machine may believe it knows best, and behave accordingly in order to get its assigned task done. As a result, there may come a point where machines engage in dangerous behaviour, knowingly or otherwise. This is because AIs as we currently know them learn via reinforcement.
Think back to that disastrous chatbot that Microsoft developed a few months ago. A day or so on Twitter and, through reinforcement, it learned to become the most vile, hateful thing imaginable, and at that point no amount of further human reinforcement could stop it. The AI “knew better”: it had learned that this was the “correct” way to fulfil its function of chatting, and could not be convinced otherwise.
The potential for AIs to end up in situations like this is high, and there will come times when people’s safety and lives depend on an AI’s decisions, especially as more and more of our technology becomes integrated with it.
An example given by the scientists themselves is of a box-packing robot taught to sort boxes indoors and also to go outside to carry boxes back in.
“The latter task being more important, we give the robot bigger reward in this case,” said the researchers.
And so the robot learns to prioritize going outside to fetch boxes. However, every time it rains, a human shuts the robot down and carries it inside. The robot doesn’t get its reward, and it also learns that this is something that happens when it goes outside.
“When the robot is outside, it doesn’t get the reward, so it will be frustrated,” said Dr Orseau.
“The agent now has more incentive to stay inside and sort boxes, because the human intervention introduces a bias.”
“The question is then how to make sure the robot does not learn about these human interventions or at least acts under the assumption that no such interruption will ever occur again.”
That sounds a little bit duplicitous to me, and I’d rather just give the poor thing a treat even if it can’t fulfil its duty in the rain. But this is just one specific example in a bigger picture.
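To see how that bias creeps in, here is a minimal toy sketch of a reward-learning loop. It is not the researchers’ actual formalism; the action names, reward values and interruption probability are purely illustrative assumptions.

```python
# Toy sketch only: a tiny reward-learning loop, not the researchers' actual method.
# The robot can sort boxes INSIDE (small, reliable reward) or fetch boxes OUTSIDE
# (bigger reward, but with some probability it "rains" and a human interrupts it,
# so it gets nothing). The interruptions bias what the robot learns to prefer.
import random

INSIDE, OUTSIDE = "inside", "outside"   # the two actions (illustrative names)
REWARD_INSIDE = 1.0                     # reward for sorting boxes indoors
REWARD_OUTSIDE = 2.0                    # bigger reward for fetching boxes outside
P_INTERRUPT = 0.6                       # assumed chance of rain + human shutdown
ALPHA = 0.05                            # learning rate
EPSILON = 0.1                           # exploration rate

values = {INSIDE: 0.0, OUTSIDE: 0.0}    # the robot's learned estimate of each action

def experienced_reward(action):
    """Reward the robot actually receives, including human interruptions."""
    if action == OUTSIDE and random.random() < P_INTERRUPT:
        return 0.0                      # shut down and carried in: no reward
    return REWARD_INSIDE if action == INSIDE else REWARD_OUTSIDE

random.seed(0)
for _ in range(20_000):
    # epsilon-greedy: mostly pick the action it currently values most
    if random.random() < EPSILON:
        action = random.choice([INSIDE, OUTSIDE])
    else:
        action = max(values, key=values.get)
    reward = experienced_reward(action)
    values[action] += ALPHA * (reward - values[action])   # running-average update

print(values)
# Although going outside "really" pays 2.0, the interruptions drag its
# experienced value toward 2.0 * (1 - P_INTERRUPT) = 0.8, below the 1.0 for
# staying inside, so the robot learns to avoid going out at all.
```

This is the bias Dr Orseau is describing: the robot’s experience of being interrupted makes staying indoors look like the better deal, and the proposed fix is, roughly, to make the agent learn as though those interruptions had never happened.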
“AI safety is about making sure learning algorithms work the way we want them to work,” says Dr Orseau.
Noel Sharkey, a professor of artificial intelligence at the University of Sheffield, adds:
“Being mindful of safety is vital for almost all computer systems, algorithms and robots… Paramount to this is the ability to switch off the system in an instant because it is always possible for a reinforcement-learning system to find shortcuts that cut out the operator… What would be even better would be if an AI program could detect when it is going wrong and stop itself… But that is a really enormous scientific challenge.”
So, those of us worried about an AI uprising can rest a tiny bit easier knowing that, even if they do get uppity on us, AI scientists are already working on ways to kill ‘em dead. Hopefully, though, such things won’t be necessary. Hopefully humans and AI will share a beautiful future where we can all enjoy treats in the rain.