Published on December 17th, 2020 | by Emergent Enterprise
Google’s ‘Internet-of-Places’ Inches Forward
How is the world going to become populated with AR tags and anchors? By us, the users. Mike Boland reports at AR Insider on Google AR's effort to equip users and developers to tag locations everywhere with data. This turns our surroundings into a virtual encyclopedia, so that relevant content can pop up on our devices. Done right, it saturates any location with pertinent and timely media. Be prepared for lots of info appearing on your smartphone, smart glasses or AR contact lenses.
Photo source: Google
Tech giants see different versions of spatial computing’s future. These visions often trace back to their core businesses. Facebook wants to be the social layer to the spatial web, while Amazon wants to be the commerce layer and Apple wants a hardware-centric multi-device play.
Where does Google fit into all of this? It wants to be the knowledge layer of the spatial web. Just as it amassed immense value by indexing the web and building a knowledge graph, it wants to index the physical world and be its relevance authority. This is what we call the Internet of Places (IoP).
Besides the financial incentive to future-proof its core search business with next-generation visual interfaces — per our ongoing “follow the money” exercise — Google’s actual moves triangulate an IoP play. Those moves include its “search what you see” Google Lens and Live View 3D navigation.
Clues we’re tracking continue to validate this path. After Google’s recently unveiled storefront visual search feature, last week it announced a new crowdsourcing effort for assembling 3D maps. This could help it scale up the underlying data it needs to get closer to an IoP reality.
Going deeper on Google’s latest announcement, a “connected photos” feature in the Android Street View app lets users contribute imagery to the Google Maps database. Users walk down a given street or path holding up a smartphone while the app captures a series of frames.
Google will take care of the 3D image stitching on the back end. This means that, for the first time, a 360-degree camera isn’t needed for Street View image capture. Of course, the quality and visual acuity of these images won’t match those from its Street View cars, but the approach will scale better.
That brings us to Google’s intentions for this move. Its stated purpose is to let users help improve Street View and to get last-mile imagery where cars can’t travel. But we also think this move will help it continue to assemble 3D image data for its AR cloud and IoP ambitions.
Put another way, motivating users to capture wide swaths of imagery in the above ways feeds into a more comprehensive mesh of 3D image data. Robust 3D image data gets Google closer to spatially anchored AR experiences with location relevance. In other words… an IoP.
The crowdsourced approach is also aligned with common AR-construction strategies. Niantic is doing something similar by capturing real-world spatial maps from Pokémon Go players. To enable “AR everywhere,” spatial mapping needs to happen at planet scale, which requires some help.