I’ve spent the last few weeks lowering my expectations for Google Glass. When I put on Google’s smart glasses a year ago— Sergey Brin, Google’s co-founder, let the press try on his pair at the company’s developer conference— I found it exhilarating. But many of my tech journalist colleagues have panned the device recently, calling it disorienting, buggy, and hobbled by terrible battery life. Google, too, has worked to lower the bar. The company describes Glass as an early beta product— it has thus far sent out units to hundreds of people who ponied up $1,500 for an early device— and it says that today’s model needs lots of work before it becomes a mass-market gadget.
So as I put on a pair this week, I was expecting to experience the digital equivalent of a machine hacked together with duct tape and construction paper. It wasn’t that. True, the unit I got my hands on at Google’s developer conference in San Francisco did have some obvious flaws, among them poor battery life and limited functionality. It also didn’t feel very comfortable on top of my prescription eyeglasses.
Yet I was surprised by how quickly I fell into using Glass, and how, within a few minutes of putting it on, this new thing began to feel like an intuitive way to experience the digital world. After my eye got used to the screen poised at the top right corner of my peripheral vision, and after my fingers got used to the way you control the device by sliding back and forth along the side of the frame, Glass stopped feeling like someone’s bizarre, wishful prediction for the future of eyewear.
Instead, the more I used Google’s goggles, the more familiar they began to feel. This was a gadget I’d used before. It’s a gadget you’ve used before. That device is called a smartphone. And when Glass or something like it is finally released as a mainstream product, you’ll use it for the same reason you use headphones— because it’s a natural extension of your phone. It’s like headphones for your eyes. In a good way.
As I’ve written before, my thoughts about Glass are heavily informed by Thad Starner, a computer science professor at the Georgia Institute of Technology who is also a technical lead on the Glass project. Starner has been wearing various kinds of digital goggles since the early 1990s— he built his own devices— and is thus one of the world’s leading authorities on what it’s like to live as a cyborg. He argues, counterintuitively, that the chief advantage of digital goggles is that they allow you to interact with technology in a way that does not interfere with your real-world life. In other words, they make smartphones less distracting than they are today. They achieve this, Starner says, because a tiny, voice-activated screen above your eye is much faster to access— and much less socially awkward— than a big screen you fish out of your pocket and hold far away from your eyes, forming a barrier between you and the rest of the world.
Once I put them on, I saw exactly what Starner meant. To turn on Glass, you tap the frame of your specs, or you tilt your head up. When you do so, you see a big, digital clock just off to the side of your central field of vision, and a prompt to say Okay Glass when you’re ready to ask it something. Even this main screen is useful: I don’t wear a wristwatch— I’ve never found them comfortable— and, when I’m not at my PC, I usually check the time on my phone. Glass offers me a quicker, less socially awkward way to access a clock.
I know what you’re thinking: a normal person would just wear a wristwatch. Yes, but even if you do wear a watch, there’s a good chance you look at your phone for dozens of other tiny bits of information during the day— texts, email, directions, photos, and especially Google searches. Starner calls these “microinteractions”— moments when you consult your phone or computer for ephemeral, important information that you need immediately. Glass is made for these moments. Once you say Okay Glass, you’re presented with a menu of possible commands, including performing a Google search, asking for directions, and taking a picture. You can also access Google Now, the company’s predictive personal assistant, by swiping your finger along the frame. This shows you contextual information that you’d usually find on your phone: the weather, sports scores, directions to your hotel.
It took me a minute or so to figure out how to access all this information. Google has built a vocabulary of taps and swipes into the device, and you’ve got to learn the gestures— that swiping forward and back is the equivalent of scrolling, that swiping down is a universal “back” button, and that tapping once is the equivalent of clicking. But once I got the hang of the controls, and once I’d positioned the device correctly in front of my glasses, I understood exactly how to use it— it’s just like my phone, but faster. (Google is working on a way to have Glass attach to prescription glasses, by the way.)
For instance, I could see using Glass when I’m cooking. Today, when I’m ready for the next step in a recipe or need to look up, say, the internal temperature of a medium-rare steak, I have to break away from what I’m doing and look at a book or my iPad. With Glass, any information I need is right there, always. “Okay Glass,” I asked it in one of my first tests, “how many cups in a quart?” In half a second, it spat back the answer on screen and by voice: “There are four cups in a quart.” (Glass uses a “bone conduction” speaker located right around your ear; this means you can hear it while still keeping your ears free to hear the outside world, and it isn’t audible to anyone around you.)
When I met him, Starner told me that this is how he uses Glass— to look up questions that come up in social situations. At dinner with his wife recently, the conversation turned to cats, and Starner wondered how far they can fall without getting hurt. He asked Glass. Unlike my quarts-to-cups question, Starner’s question— like most Google queries— didn’t bring up a direct answer. Instead, it showed him a snippet of the first link in the search results. You can see additional links by scrolling, but Google’s search is so good that you can often get enough information without doing so. (Cats usually get injured in falls shorter than seven stories; from greater heights they have time to right their bodies and land on their feet.)
These examples might strike you as intrusive and disruptive— exactly the sort of thing you feared would come of a digital device attached to your face. It’s true, too, that when someone is accessing information from Glass, his eyes shift up toward the corners of his sockets. Depending on the situation— say, you’re checking sports scores while your friend is confiding in you about his marital troubles— this could be perceived as rude.
On the other hand, shifting your eyes is way, way less distracting than checking your smartphone. Indeed, after using it, I’d argue that pretty much any time you look something up on Glass rather than a phone, you’re choosing a less intrusive way of accessing the digital world. If you want to rid the world of digital interruptions, you’d start by eradicating phones. And if you’ve been hoping that your friends and family would get their heads out of their phones already, you ought to be celebrating Glass.
Still, I don’t want to overpraise the device. Because it’s so new, Glass’s capabilities are still quite limited, and it’s nowhere close to serving as a replacement for a phone in most situations. This week Google announced a program for developers to add new services to Glass; among the companies pitching in are Twitter, Facebook, Tumblr, and Evernote, and I’m hoping many more firms follow their lead.
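(A side note for the technically inclined: the developer program Google announced is built around what the company calls the Mirror API, a web interface that lets a service push “cards” into the timeline you see on Glass. Here’s a rough sketch of what that looks like in Python. It’s my own illustration, not Google’s sample code; it assumes you’ve already completed Google’s OAuth sign-in for the glass.timeline scope, and the token and the sports score below are made-up placeholders.)

    # Rough sketch: pushing a "timeline card" to Glass through the Mirror API.
    # Illustrative only. A real app would first complete Google's OAuth 2.0
    # flow for the glass.timeline scope; this token is a fake placeholder.
    import requests

    ACCESS_TOKEN = "ya29.placeholder-token"

    card = {
        "text": "Giants 3, Dodgers 1, bottom of the 7th",  # what the wearer sees
        "notification": {"level": "DEFAULT"},  # chime the bone-conduction speaker
    }

    response = requests.post(
        "https://www.googleapis.com/mirror/v1/timeline",
        json=card,
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    response.raise_for_status()
    print("Card delivered; id =", response.json().get("id"))

That single POST is presumably how Twitter, Facebook, and the rest plan to deliver their updates: the card simply appears in the wearer’s timeline, right alongside search results and photos.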
Even before that happens, though, Glass is off to a great start. Once I had it on my face, I was addicted to the power it gave me, and I couldn’t stop ordering it around: “Show me pictures of dinosaurs.” “Take a picture.” “What time is my flight?” “How long will it take me to get home?” At some point I had to take Glass off. I really, really didn’t want to.
Rico says WHAT