Find out what Deep Learning research is going on at Google Brain in the quest for artificial intelligence.
Research into the science of artificial intelligence (AI) has been around since the 1950s, but it first captured the attention of the public in 1997, when IBM’s supercomputer Deep Blue outfoxed the then world chess champion Garry Kasparov in a match.
Since then, scientists have been trying to push the frontiers of AI but often with limited success, said Mr Quoc Le, a research scientist at Google Brain — and one of the EmTech 35 Innovators under 35 in 2014.
Google Brain is well known as the project group that undertakes research in Deep Learning.
Speaking to delegates at the recent EmTech Asia conference in Singapore, he said one of the difficulties of AI was developing the often complex array of features and classifiers that would allow computers to recognise common inanimate objects such as a cup.
The problem becomes even more profound when identifying living beings, be it a dog, a cat or a person, because the number of identifying features the computer needs increases dramatically.
“My colleagues spent years trying to invent such features. I began to think that maybe it’s better to train a computer end to end to invent these features for itself and that was how I got involved in researching neural networks and Deep Learning, (which is) a system which involves using networks of simulated neurons that can process vast amounts of data.”
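The idea of training a network end to end, so that it invents its own features rather than relying on hand-crafted ones, can be illustrated with a toy example. The sketch below (far simpler than anything at Google Brain, and purely illustrative) trains a tiny two-layer neural network in NumPy on the XOR problem, which no single hand-picked linear feature can solve; the hidden layer learns the needed features itself.

```python
# Minimal sketch of end-to-end learning: a two-layer network whose
# hidden layer learns its own features from raw inputs (XOR problem).
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden: the learned "features"
W2 = rng.normal(size=(8, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = np.tanh(X @ W1)        # hidden features, learned, not hand-made
    p = sigmoid(h @ W2)        # predicted probability of class 1
    grad_p = p - y             # gradient of cross-entropy loss
    W2 -= 0.1 * h.T @ grad_p   # backpropagate to output weights
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)
    W1 -= 0.1 * X.T @ grad_h   # backpropagate to feature weights

print(np.round(p.ravel(), 2))
```

No feature was specified by hand: the only supervision is the input-output pairs, and gradient descent shapes the hidden layer into whatever features the task requires.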
The team went on to prove that without human guidance, the system could learn on its own how to detect cats, people, and over 3,000 other objects with greater levels of accuracy, just by using 10 million images from YouTube videos.
This large-scale Deep Learning system, known as Google Brain, is now used by the company in its image search and speech-recognition software.
Mr Quoc said that with Deep Learning, Google was able to reduce the error rate in understanding images from 20 per cent in 2012 to 5 per cent today. Over the same time period, errors in speech recognition were also reduced from 23 per cent to 8 per cent.
Using similar algorithms, Google improved language understanding for its translation services as well. “The ability to accurately translate language is something people have been trying to do for almost 20 years but using Deep Learning, we were able to do this in one year, matching and exceeding the performance of earlier systems.”
He said one of the distinct advantages of this system was the simplicity of use for its customers. “This gives us the ability to have a conversation with a computer,” he said.
Mr Quoc went on to demonstrate this by asking the machine a series of random questions:
Quoc Le: “What is the purpose of life?”
Computer: “To serve the greater good.”
Quoc Le: “What is the purpose of living?”
Computer: “To live forever.”
Quoc Le: “What is the purpose of emotions?”
Computer: “I do not know.”
Some people, said Mr Quoc, may see this as evidence of a truly intelligent machine. In reality, the computer was simply recognising certain words and mimicking a conversation from its stored data, without any human-like understanding of the words themselves.
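That kind of mimicry can be caricatured in a few lines of code. The toy sketch below (a deliberate simplification, nothing like Google’s neural conversational model) answers a question by finding the stored question that shares the most words with it and parroting back the paired reply, with no grasp of meaning at all.

```python
# Toy "conversation" by word overlap against stored question-answer
# pairs: recognising words and mimicking replies, with no understanding.
stored = {
    "what is the purpose of life": "To serve the greater good.",
    "what is the purpose of living": "To live forever.",
    "what is the purpose of emotions": "I do not know.",
}

def reply(question: str) -> str:
    words = set(question.lower().rstrip("?").split())
    # Pick the stored question sharing the most words with the input.
    best = max(stored, key=lambda q: len(words & set(q.split())))
    return stored[best]

print(reply("What is the purpose of life?"))
```

A neural model blends its training dialogues statistically rather than looking them up verbatim, but the underlying point stands: fluent-sounding replies do not require understanding.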
In other words, your greatest Hollywood blockbuster nightmare about AI running the world is still happening only on the cinema screen.
At least, for now!
Said Mr Quoc: “So, we are still pretty far from true AI, but these are important steps in developing systems that are able to communicate better with us.”