Google uses neural networks to decipher numbers in images
Google researchers have developed technology that nearly perfectly deciphers the distorted combinations of numbers and letters commonly used on the internet to test whether or not someone is human.
Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud and Vinay Shet set out to develop a more accurate method of identifying numbers in images taken for Google Street View. Their model identified, with better than 90 per cent accuracy, tens of millions of numbers contained in Street View images taken in a dozen countries.
Using the model in conjunction with Google's infrastructure, it takes less than an hour to transcribe all the views of street numbers in France.
The technology was also pitted against the hardest category of the distorted-character puzzles known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). The CAPTCHA is based on the Turing test, in which a human judge tries to determine, through verbal exchanges, whether a respondent is a human or a computer.
Tested against reCAPTCHA, Google's own variation, Mr Goodfellow and his team achieved near-perfect results.
"Today, distorted text in reCAPTCHA serves increasingly as a medium to capture user engagements rather than a reverse Turing test in and of itself," the researchers wrote. "These results do, however, indicate that the utility of distorted text as a reverse Turing test by itself is significantly diminished."
Google's technology uses a system called DistBelief, which relies on convolutional neural networks. Inspired by biology, these neuron structures mimic the arrangement of cells within the part of the brain that interprets images: each artificial neuron looks at only a small patch of the image, and the same pattern detector is applied across the whole picture.
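The article does not include code, but the basic operation at the heart of a convolutional network can be sketched in a few lines of Python. The example below slides a small kernel over an image and records how strongly each patch matches it; the vertical-edge kernel is an illustrative assumption for this sketch, not one of Google's learned filters.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most
    neural-network libraries): slide the kernel over the image and
    take a weighted sum of pixels at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A toy 4x4 "image": dark on the left, bright on the right,
# like the edge of a digit's stroke.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A hand-picked vertical-edge detector: responds where intensity
# increases from left to right. In a real network such kernels
# are learned from data rather than written by hand.
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

feature_map = conv2d(image, kernel)
print(feature_map)  # strongest response in the middle column, at the edge
```

A full network stacks many such filtered "feature maps", passing each through a nonlinearity and pooling step, so that later layers respond to progressively more abstract shapes such as whole digits.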