Microsoft research chief: AI systems of today aren’t a threat to the human race

For the AI doomsayers, Microsoft's chief of research has a blunt answer: AI is still too stupid to wipe humans out. Chris Bishop, Microsoft's director of research at Cambridge, dismissed the fear that humans are on the verge of developing an artificial intelligence whose abilities far outdo our own. Highlighting the many limitations of today's AI systems, he added that they will continue to lag human performance for decades to come.

“This is a good moment for a little reality check,” he told a public discussion hosted by The Royal Society in London last week.

“Yes, deep learning has achieved human-level performance in object recognition, but what does that mean?” Bishop asked. “It means the machine makes about the same number of errors as the human.”

“The reason the machine is as good as the human at this is because it can distinguish between 157 varieties of mushroom, whereas it makes all kinds of stupid mistakes that humans wouldn’t make.”

Bishop stressed that even hyped examples of machine intelligence, such as Google DeepMind’s Go-playing system, need to be understood in the context of the time and effort that went into building them.

“[Take] the Go example, where the machine has just about crept ahead of the best human. The machine saw at least 10,000 times as many Go games as the human saw. Human capabilities still far outstrip machines in many areas,” he said, echoing researchers who highlight the trouble robots have with tasks such as picking up items and walking.

He also cited another common misconception: that because machine learning systems can perform some of the individual tasks people can, they must be on the brink of matching more general human abilities.

Maja Pantic, professor of affective and behavioural computing at Imperial College London, said this myth was debunked when early attempts to build generalized systems capable of solving any possible problem proved impossible.

“What people were thinking at the time was to build generic systems that would solve any possible problem. Then they realised this was completely impossible,” she told the debate.

Even though Bishop dismissed fears of a homicidal AI overpowering humanity, he acknowledged more mundane dangers and risks inherent to the technology. For instance, he noted that the opaque nature of deep neural networks raises the possibility that an AI’s decisions could be subject to unknown biases, originating in the huge amounts of data such systems are trained on.

Bishop may be right, but many experts, including researchers at Oxford, have warned that intelligent AI could pose a threat to humans in the long run.