There are plenty of ways to figure out where a photograph was taken. From architecture to wildlife to clothing styles, we've been conditioned to recognize certain clues that might point to the location of a scene. Now Google has developed a new A.I. system that can outperform humans in this area using only the visual information in a picture.
According to MIT Technology Review, the new neural network, dubbed PlaNet, was fed 2.3 million images from Flickr to test its capabilities. By looking at the pixels in each image, the system was able to determine the picture's country of origin 28.4 percent of the time and the continent with a success rate of 48 percent.
Instead of using GPS data, the software bases its guesses on an immense database of geotagged images collected from the Internet. It's even able to discern the location of images with no obvious clues, like those taken of objects indoors, by comparing them to other pictures in the same album.
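To give a rough sense of how a photo can be placed on the map from a database of geotagged images (this is only an illustrative sketch, not Google's actual PlaNet code; the `embed`, `cell_of`, and `guess_location` functions are hypothetical stand-ins), one simple approach is to compare a query photo against known, geotagged photos and let the closest matches vote on a coarse map region:

```python
# Illustrative sketch: guess a photo's location by comparing it to a
# database of geotagged images and letting the nearest matches vote on
# a coarse grid cell. Not Google's PlaNet implementation.
import math
from collections import Counter

def embed(image_pixels):
    # Placeholder descriptor; a real system would use a neural network here.
    return [sum(image_pixels) / len(image_pixels)]

def cell_of(lat, lon, degrees=10):
    # Bucket coordinates into coarse grid cells (e.g., 10-degree squares).
    return (math.floor(lat / degrees), math.floor(lon / degrees))

def guess_location(query_pixels, geotagged_db, k=5):
    # geotagged_db: list of (embedding, lat, lon) for known photos.
    q = embed(query_pixels)
    neighbors = sorted(
        geotagged_db,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(q, item[0])),
    )[:k]
    votes = Counter(cell_of(lat, lon) for _, lat, lon in neighbors)
    return votes.most_common(1)[0][0]  # the most-voted grid cell
```

PlaNet's real system works at a far larger scale and learns its own representation of images, but the basic intuition is the same: a photo is located by how closely it resembles millions of photos whose locations are already known.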

The team behind the project says this database gives PlaNet a leg up over its human competitors, because it has seen and gathered data from more places on Earth than one person could ever possibly visit. This idea was supported when the program went head-to-head against 10 well-traveled individuals to see who could recognize the most locations. The A.I. beat the human team in 28 of the 50 rounds. If you want to see how your location recognition abilities stack up against PlaNet, you can play the Geoguessr game online.
This new software is an exciting development in Google's artificial intelligence technology, which is already able to generate automatic email responses and produce some insane-looking artwork. And what's even more impressive about PlaNet is that it works without taking up too much storage. The program requires just 337 megabytes to run, which is small enough to fit on a smartphone.
[h/t MIT Technology Review]