Steve Silverman helped build cameras for two NASA rovers that went to Mars. In the less exotic landscape of a Google parking lot, he looks up fondly at his latest creation, bolted onto the roof of a Hyundai hatchback. The gawky assemblage almost doubles the car’s height: four white legs holding up a vertical black stalk sporting eight cameras. “We thought about covering it up, but we’re kind of nerds,” Silverman says. “We’re proud of it.”
Silverman and his team build the hardware that captures imagery for Google Street View, the project that since 2007 has put panoramas of more than 10 million miles of roads, buildings, and the occasional act of public urination online for all to see. The new camera design, the first major upgrade in eight years, started regularly patrolling the streets last month. The data that’s just starting to come back will strengthen Google’s digital grip on the world.
As you might expect if you think back to the camera in your 2009 cell phone, Street View imagery is about to get a lot clearer. Look forward to sliding through the world from your couch in higher resolution and punchier colors. But Google’s new hardware wasn’t designed with just human eyes in mind. The car-top rig includes two cameras that capture HD still images looking out to either side of the vehicle. They’re there to feed clearer, closer shots of buildings and street signs into Google’s image recognition algorithms.
Those algorithms can pore over millions of signs and storefronts without getting tired. By hoovering up vast amounts of information visible on the world’s streets—signs, business names, perhaps even opening hours posted in the window of your corner deli—Google hopes to improve its already formidable digital mapping database. The company, built on the back of algorithms that indexed the web, is using the same strategy on the real world.