Published on March 9th, 2020 | by Emergent Enterprise
Envision Brings AI to Google Glass to Help Visually Impaired Users See
As emergent technologies become more widely used, it’s inevitable that they will be combined into “mashups” that yield highly innovative solutions. Jeremy Horwitz reports at VentureBeat on one such mashup of artificial intelligence and augmented reality that helps low-vision and blind persons “see” the world around them. The combination makes sense: OCR can read signs and other text, computer vision can recognize objects, and both can report to the wearer audibly. This pairing has many potential use cases, since AR users often need help understanding what they are looking at. A technician, for instance, may need help identifying a machine part, and a combination of AI and AR can step in and make that clear. Watch for more technology mashups that will change business and the world.
Photo credit: Envision
Google Glass might not have made the best impression when it first came out years ago, but the concept of a glasses-sized computer with a small screen, camera, and speaker had promise, particularly for specific applications. Today, Envision is debuting Envision Glasses, an AI-powered augmentation of Google Glass that can help visually impaired users “see” their environments.
Envision Glasses are a complete solution, combining Google Glass Enterprise Edition 2 with OCR and computer vision software to identify what’s in the wearer’s environment, then speak it out loud using Glass’ built-in speaker. Instead of holding up a smartphone and using its camera and software to read signs or identify people — the experience in Envision’s Android and iOS apps — the company has made the same AI technologies accessible from lightweight glasses frames, dramatically improving the real-world recognition experience for blind and low-vision users.
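The pipeline described above (camera frame in, OCR and object recognition in the middle, spoken description out) can be sketched in outline. This is purely illustrative: Envision’s actual software is not public, and every function below is a hypothetical stand-in for the real OCR engine, vision model, and text-to-speech stage.

```python
# Illustrative sketch only -- all names here are hypothetical stand-ins
# for the pipeline the article describes: camera frame -> OCR and
# object recognition -> spoken audio on the glasses' speaker.

def run_ocr(frame):
    """Stand-in for an OCR engine reading text found in the camera frame."""
    return frame.get("text", [])

def recognize_objects(frame):
    """Stand-in for a computer-vision model labeling objects in view."""
    return frame.get("objects", [])

def speak(message):
    """Stand-in for text-to-speech routed to the built-in speaker."""
    print(message)
    return message

def describe_frame(frame):
    """Combine OCR and object recognition into one audible description."""
    parts = []
    text = run_ocr(frame)
    if text:
        parts.append("Text: " + "; ".join(text))
    objects = recognize_objects(frame)
    if objects:
        parts.append("Objects: " + ", ".join(objects))
    return speak(". ".join(parts) if parts else "Nothing recognized.")

# Example: a frame containing a sign and a coffee cup
describe_frame({"text": ["Exit"], "objects": ["coffee cup"]})
```

The point of the sketch is the hands-free loop: because the camera, recognition software, and speaker live in one wearable device, the wearer never has to raise a phone to get a description of the scene.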