Google recently announced the specifications of the inaugural version of Glass, the company's augmented reality glasses, which beam information directly into your field of view.
The tech blogosphere is focused on the specifications, like the 5-megapixel camera, 720p video recording, and 12 gigabytes of storage.
However, the ultimate success of Google Glass may be dictated by external developers who provide functionality we can’t even imagine right now.
Ten years ago, it seemed obvious that one day, smartphones would be good for activities such as photography, music/video playback, and gaming.
Why? Because the functionality was there, just in lo-fi form.
It wasn’t until the iPhone was released in 2007 that we saw how good mobile Internet could be.
But look at what's happening today: you can use an iPad as a music synthesizer, or as a reading-education tool for a 3-year-old.
Taking it a step further, 10 years ago, how many people imagined that retailers such as Sephora and Urban Outfitters would be using iPhones (which themselves didn’t exist) as cash registers?
And this, the ability of software developers to think of interesting new ways for us to use our stuff, is why I’m excited about Google Glass.
What we see now — taking pictures, receiving emails, online video chats — is only the beginning.
What if Glass can tell you how far you hit a golf ball? Or automatically record a video of you putting down your car keys? Or warn you that a shady character’s hiding in a doorway 25 feet in front of you?
The possibilities really are endless.