Published on December 17th, 2019 | by Emergent Enterprise
Google Maps 101: How Imagery Powers Our Map
How long does it take to map the world? Around 12 years or so. In a recent blog post at Google, Senior Product Manager Thomas Escobar shared how the Google Maps team is nearly reaching that goal. And then there is the satellite imagery of Google Earth. Together, these are vast amounts of content available to companies to integrate into employee solutions such as onboarding apps, safety training and more. A new hire can be sent to any location of a global company and take a VR tour without leaving the home office. An energy company engineer can survey a remote oil field without ever setting foot on the actual location. Really, the possibilities are endless. Tap into Google Maps and Google Earth with your AR, VR and AI tools and watch the globe come to your doorstep.
Earlier this year, we gave you a look at how Google Maps maps the world. Today, we'll dive deeper into a main ingredient of the map making process, imagery, and how it powers one of our most popular features.
More than just pictures
When you think of imagery and Google Maps, you probably think of the Street View cars and trekkers that collect billions of images from all around the world. Today, we’ve captured more than 10 million miles of Street View imagery–a distance that could circle the globe more than 400 times!
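The "more than 400 times" figure checks out with simple arithmetic. A quick sketch (using the approximate equatorial circumference of Earth, about 24,901 miles):

```python
# Back-of-the-envelope check of the "circle the globe 400 times" claim.
STREET_VIEW_MILES = 10_000_000        # miles of Street View imagery captured
EARTH_CIRCUMFERENCE_MILES = 24_901    # approximate equatorial circumference

laps = STREET_VIEW_MILES / EARTH_CIRCUMFERENCE_MILES
print(f"roughly {laps:.0f} trips around the equator")
```

Ten million miles works out to just over 400 equatorial laps, consistent with the claim.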
Or your thoughts may jump to Google Earth, our platform that lets you browse more than 36 million square miles of high definition satellite images from various providers, covering the places where more than 98% of the world's population lives, to see the world from above. While these stunning photos show us parts of the world we may never get a chance to visit, they also help Google Maps accurately model a world that is changing each day.
How we collect imagery: cars, trekkers, flocks of sheep and laser beams
Gathering imagery is no small task. It can take anywhere from days to weeks, and requires a fleet of Street View cars, each equipped with nine cameras that capture high-definition imagery from every vantage point possible. These cameras are athermal, meaning that they're designed to handle extreme temperatures without changing focus so they can function in a range of environments, from Death Valley during the peak of the summer to the snowy mountains of Nepal in the winter. Each Street View car includes its own photo processing center and lidar sensors that use laser beams to accurately measure distance.
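The distance measurement behind lidar rests on a simple time-of-flight principle: a laser pulse travels to a surface and back, and half the round-trip path length is the range. A minimal sketch of that calculation (illustrative only; the actual Street View lidar hardware and processing are far more involved):

```python
# Time-of-flight ranging: the core idea behind lidar distance measurement.
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def lidar_distance_m(round_trip_seconds: float) -> float:
    """Distance to a target given a laser pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    the total path the light covers.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A pulse returning after ~66.7 nanoseconds hit something about 10 m away.
print(f"{lidar_distance_m(66.7e-9):.2f} m")
```

This is why lidar timing electronics must be so precise: at the speed of light, a single nanosecond of round-trip time corresponds to about 15 cm of range.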
There’s also the Street View trekker, a backpack that collects imagery from places where driving isn’t possible. These trekkers are carried by boats, sheep, camels, and even scout troops to gather high quality photos from multiple angles, often in some of the hardest-to-map places around the world. In 2019 alone, Street View images from the Google Maps community have helped us assign addresses to nearly seven million buildings in previously under-mapped places like Armenia, Bermuda, Lebanon, Myanmar, Tonga, Zanzibar and Zimbabwe.
How we process imagery: a vintage technique made new
Once we’ve collected photos, we use a technique called photogrammetry to align and stitch together a single set of images. These images show us critically important details about an area–things like roads, lane markings, buildings and rivers, along with the precise distance between each of these objects. All of this information is gathered without ever needing to set foot in the location itself.
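The core of that stitching step is finding how overlapping photos line up with one another. A toy illustration of the idea, brute-forcing the offset between two overlapping 1-D "scanlines" of synthetic data (the real pipeline matches feature points across billions of full images; the data and helper below are hypothetical):

```python
# Toy alignment: recover the shift at which two overlapping image strips
# agree best, the basic operation underlying photogrammetric stitching.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(200)   # a synthetic strip of terrain
left = scene[:120]        # first photo sees columns 0..119
right = scene[30:150]     # second photo overlaps, shifted by 30 columns

def best_shift(a: np.ndarray, b: np.ndarray, max_shift: int = 60) -> int:
    """Return the shift of b relative to a that minimizes mismatch
    (mean squared error) over the overlapping region."""
    best, best_err = 0, float("inf")
    for s in range(max_shift + 1):
        overlap = min(len(a) - s, len(b))
        err = np.mean((a[s:s + overlap] - b[:overlap]) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

print(best_shift(left, right))  # recovers the 30-column offset
```

Once the relative positions of many overlapping images are known, the same geometry that aligns them also yields the distances between the objects they contain, which is how measurements can be made without visiting the location.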
Photogrammetry is not new. While it originated in the early 1900s, Google’s approach is unique in that it utilizes billions of images, similar to putting a giant jigsaw puzzle together that spans the entire globe. By refining our photogrammetry technique over the last 10 years, we’re now able to align imagery from multiple sources–Street View, aerial, and satellite imagery, along with authoritative datasets–with accuracy down to the meter.