John Biggs of TechCrunch recently discussed an intriguing program at Carnegie Mellon that uses a complex algorithm and Google Street View to identify cities based on their unique traits. The project description states, "Given a large repository of geotagged imagery, we seek to automatically find visual elements, e.g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle."
"In addition," the description continues, "we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner."
Biggs comments, "The system currently works in multiple cities using large samples of images from cities around the world. Using these, the system can identify where a random photo was taken with some degree of accuracy. Interestingly, the system can also be used on everyday objects, including 'discovering stylistic elements in other weakly supervised settings, e.g. What makes an Apple product?' You can download the study PDF here."
Image: Courtesy Flickr/HerryLawford