In my last post I asked, rhetorically, whether Google Glass was a view of the new interface to come. I didn’t answer the question, nor even really think it thru, as I wanted to think on the question overnight. So is it?
I think Google Glass does a great job of changing how you interact with your mobile device. It does so in the same way that Siri and other voice tools change how you look up information or dial the phone. So to that end, it is nothing new.
Where it is different, however, is in how you consume the data. Google has a page for developers on how to work with the interface: here. The idea is simple: show what needs to be shown, and get the rest of the interface out of the way. If you’ve been using Google Now and its new card interface, then you’ve already experienced how Google Glass will present data to you. And if you are a developer who has started working on mobile apps or Chrome apps and are already using cards, then you already understand the basic framework for defining your interface on Google Glass.
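To make the card idea concrete, here’s a minimal sketch of what pushing a single card to someone’s Glass might look like, assuming a Mirror-style REST endpoint and an OAuth access token obtained elsewhere; the endpoint URL, token, and card text are illustrative placeholders rather than anything taken from the developer page above.

```python
# Minimal sketch: push one small card to a wearer's timeline.
# Assumptions: a Mirror-style REST endpoint and an OAuth 2.0 access token
# you've already obtained elsewhere -- both are illustrative placeholders.
import json
import urllib.request

ACCESS_TOKEN = "ya29.placeholder-token"  # hypothetical; not a real token

# The whole interface is just this: one small piece of information, nothing else.
card = {"text": "Your flight boards at gate B12 in 40 minutes"}

request = urllib.request.Request(
    "https://www.googleapis.com/mirror/v1/timeline",  # assumed card-insert endpoint
    data=json.dumps(card).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(json.load(response))  # the created card, echoed back by the service
```

Notice that the entire “interface” here is one short string; everything else about when and how it surfaces is left to the device, which is exactly the show-what-needs-to-be-shown idea.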
In my other podcast, Games At Work dot Biz, we talked about how Augmented Reality apps and the world of Neuromancer are coming together to change the way we interact with devices. An example that came to mind was the ability for two people to work together on a task with the use of AR and Google Glass. Imagine a doctor or a mechanic performing an unfamiliar task. They can learn by doing: by wearing Google Glass, they see a projection of an expert performing the same task with their hands out in front of them. This POV (point of view) experience would allow the learner to mimic the expert’s movements, and allow the expert to see the action in a virtual 3D world using something like the Oculus Rift technology.
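None of this exists yet, so purely as a hypothetical sketch of the data flow I’m imagining: the whole exchange boils down to two streams, POV frames going from the learner’s Glass to the expert, and the expert’s hand motion coming back as an overlay. Every name below is made up for illustration and maps to no real Glass or Oculus API.

```python
# Hypothetical sketch of the two-way flow described above: POV frames go from
# the learner's Glass to the expert, and hand guidance comes back as an overlay.
from dataclasses import dataclass

@dataclass
class POVFrame:
    """One frame of what the learner sees, streamed Glass -> expert."""
    timestamp_ms: int
    jpeg_bytes: bytes

@dataclass
class OverlayGuidance:
    """The expert's demonstrated hand positions, drawn into the learner's view."""
    timestamp_ms: int
    hand_points: list[tuple[float, float, float]]  # x, y, z in the learner's frame

class ExpertStation:
    """Stand-in for the expert's VR side (e.g. an Oculus Rift viewer)."""

    def guide(self, frame: POVFrame) -> OverlayGuidance:
        # In the imagined system the expert would see `frame` in 3D and move
        # their hands; here we just echo back a fixed, dummy hand position.
        return OverlayGuidance(frame.timestamp_ms, [(0.5, 0.5, 0.3)])

# Usage: the learner's device streams frames and draws whatever comes back.
expert = ExpertStation()
frame = POVFrame(timestamp_ms=0, jpeg_bytes=b"")
overlay = expert.guide(frame)
print(overlay.hand_points)  # where to draw the expert's hands on the Glass display
```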
Bringing these two technologies together, Google Glass and Oculus Rift, could change the way we learn and practice new hands-on activities. It also could, dare I say it, “Change the World.”