It’s been a few months with Google Glass, and while it is interesting, I am still not overly impressed with the tech. The idea is fantastic, but the implementation of the applications so far is less than impressive. Given that it is a beta, I know things can only get better; however, I am starting to feel that Google is using Glass as a test platform for its Android Wear solutions. The Google Now cards that show up in Glass are starting to be THE way to program for Android Wear.
Is it the idea that is intriguing, or the actual tech? I keep wondering if my lukewarm reaction is because of the requirement for tethering when I am out and about. However, even around the house, or at my favorite coffee shop, where I have full Wi-Fi access, I am not finding the existing apps all that useful.
What do you think? Are you going to get Glass as an Explorer? Do you think it will EVER come out of beta? Or is this all just a test bed?
In my last post I asked, rhetorically, whether Google Glass was a view of the new interface to come. I didn’t answer the question, nor even really think it through, as I wanted to sleep on it. So is it?
I think Google Glass does a great job of changing how you interact with your mobile device. It does so in the same way that Siri and other voice tools change how you look up information or dial the phone. So to that end, it is nothing new.
However, where it differs is in the way you consume the data. Google has a page for developers on how to work with the interface: here. The idea is simple: show what needs to be shown, and get the rest of the interface out of the way. If you’ve been using Google Now and its new Card interface, then you’ve already experienced how Google Glass will present data to you. If you are a developer who has started working on mobile apps or Chrome apps and is already using Cards, then you understand the basic framework for defining your interface in Google Glass.
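To make the card model concrete, here is a minimal sketch of what a Glass card looks like under the hood. Glass's Mirror API represents each card as a small JSON "timeline item" that you POST to Google's servers; the endpoint is the real one from Google's documentation, but the card text and the commented-out OAuth token are placeholders for illustration.

```python
import json

# Real Mirror API endpoint for inserting timeline items (cards) on Glass.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_card(text, speakable=True):
    """Build the JSON payload for a simple text card.

    The Mirror API renders this with the same Card conventions
    you see in Google Now: just the content, no surrounding chrome.
    """
    card = {"text": text}
    if speakable:
        # Lets the wearer say "read aloud" to hear the card spoken.
        card["speakableText"] = text
    return card

card = build_timeline_card("Hello from my first Glassware!")
print(json.dumps(card, indent=2))

# Actually delivering the card is one authenticated HTTP POST, e.g.:
#   requests.post(MIRROR_TIMELINE_URL, json=card,
#                 headers={"Authorization": "Bearer " + oauth_token})
```

Notice how little there is to define: the "interface" is essentially just the content of the card, which is exactly the show-what-needs-to-be-shown philosophy described above.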
In my other podcast, Games At Work dot Biz, we talked about how Augmented Reality apps and the world of Neuromancer are coming together to change the way we interact with devices. An example that came to mind was the ability for two people to work together on a task using AR and Google Glass. Imagine a doctor or a mechanic performing an unfamiliar task. They could learn by doing: wearing Google Glass, they would see a projection of an expert performing the same task, hands out in front of them. This POV (Point of View) experience would allow them to mimic the expert’s movements, and allow the expert to watch the action in a virtual 3D world using something like the Oculus Rift.
Bringing these two technologies together, Google Glass and Oculus Rift, could change the way we learn and practice new hands-on activities. It could even, dare I say it, “Change the World”.