Tuesday, October 29, 2013

Design Principles for Smart Glasses (Revisited)

I've been designing for smart glasses for over a year now, including Google Glass and similar products that will be on the market in 2014. I posted my design principles a year ago, but I've evolved them as we've learned more about the platform. Here they are:

Less is More
This commonly seen design principle is never truer than with smart glasses. When you are dealing with a person’s line of sight, you need to take extreme care with every design decision. It’s easy to add features and information, but much harder to take them away because it requires a true understanding of your users and their priorities.
In the design of Skylight, we took extensive measures to drive our user interface down to the absolute minimum while still preserving its value. Given the industrial environments we are designing for, we know it is more important to create a minimalist, usable interface than a flashy, Iron Man-like one. This type of thinking led to our Northstar interface, which nests content and features into tiny, unobtrusive virtual points of light. The result is an interface that is almost invisible at times, revealing itself only as the user commands it to.
Don’t Make People Change
Smart glasses are an entirely new type of device. There are no design standards, customer expectations, or market leaders to align with. We have no choice but to break new ground, and a learning curve is unavoidable. This doesn’t mean, however, that we have to make people change how they behave. We know that if we want to create a usable product, we need to begin with the way people already act and design from there.
Very early in our process, we examined common human behaviors, such as how people use mobile devices, write in notebooks, look around their environment, and interact with each other. These everyday activities influenced our design decisions on a daily basis. The result is a product experience unlike anything that’s come before it, yet still familiar. For example, our Northstar head-tracking user interface is based on the concept of visual scanning, the pattern of visually examining an environment for relevant information before focusing in. Actions are then taken by gazing at virtual objects that toggle on and off like light switches. If you know how to look around a room, you know how to interact with Northstar. No hand gestures to learn, no voice commands to remember.
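To make that interaction model concrete, here is a minimal sketch of dwell-based gaze toggling. This is not Skylight code; the class name, dwell threshold, and per-frame update loop are illustrative assumptions about how a look-to-toggle target could be implemented.

```python
import time

DWELL_SECONDS = 0.8  # assumed dwell time before a gazed-at target toggles


class GazeTarget:
    """A virtual 'point of light' that toggles after an uninterrupted dwell."""

    def __init__(self, name):
        self.name = name
        self.active = False
        self._dwell_start = None

    def update(self, is_under_reticle, now=None):
        """Call once per frame with whether the head-tracked reticle is on this target."""
        now = time.monotonic() if now is None else now
        if not is_under_reticle:
            self._dwell_start = None          # gaze moved away: reset the timer
        elif self._dwell_start is None:
            self._dwell_start = now           # gaze just arrived on the target
        elif now - self._dwell_start >= DWELL_SECONDS:
            self.active = not self.active     # toggle like a light switch
            self._dwell_start = None          # a fresh dwell is needed to toggle back
        return self.active
```

The detail that matters is that the dwell timer resets the moment the reticle leaves the target, so a passing glance never flips anything by accident.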
Design for Daily Use
We’ve seen some truly amazing technology demonstrations through the years at conferences and trade shows. If you could step into our R&D lab, you would be floored by the technical feats our team is capable of. But we don’t design for demos. We design products for real people to use every day in the workplace. We need to prioritize human factors, like usability, fatigue, and error prevention, over more eye-catching features like spinning 3D models with swiping gestures. It was this set of priorities that led to our use of head tracking for interaction. Physical gestures would become far too tiring during a standard work shift, and voice input is simply not viable in many industrial environments. Head tracking, by contrast, requires so little effort, both physically and mentally, that it emerged as an obvious fit for enterprise scenarios.
Distinguish Glasses from the Real World
One of the advantages of a stereoscopic display is that we can project user interfaces as if they were overlaid on the real world. There is still a need, however, for information to be projected as if it were fixed to the user’s glasses, like a personal heads-up display. In our system, the real-world interfaces are designed for interaction using head movements, while the personal heads-up display is more “read only,” used for awareness and notifications. The challenge from a design standpoint is offering the user a clear distinction so they immediately and intuitively understand what is a real-world element and what is fixed to their display.
We addressed this challenge with slight variations in the design language between the 3D real world and the 2D heads-up display. In our system, real-world interface elements have rounded corners and options are primarily circular, offering the intuitive design affordance of a button. This rounded look also blends better into the organic real world. Elements on the 2D heads-up display, by contrast, consist primarily of straight lines and hard angles. This creates a flatter, more prominent, and more “techno-centric” look that is a better fit for a heads-up display. While the edges and shapes vary, the two styles share the same color palette, typography, and other visual cues. The result is a cohesive user interface that spans both planes and remains learnable and usable.
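The distinction between the two reference frames can be sketched in a few lines. This is an illustration of the general idea, world-locked versus head-locked placement, not our rendering code; the function names and the assumption of a 3x3 rotation matrix for head pose are mine.

```python
import numpy as np

def world_locked_position(element_world_xyz, head_rotation, head_position):
    """Re-project a world-fixed element into the head (display) frame each frame."""
    offset = np.asarray(element_world_xyz) - np.asarray(head_position)
    return head_rotation.T @ offset   # inverse-rotate the offset into the head frame

def head_locked_position(element_display_xy):
    """A HUD element ignores head pose entirely; it is pinned to the display."""
    return np.asarray(element_display_xy)
```

As the head pose changes from frame to frame, the world-locked element drifts across the display while the head-locked element stays put, which is exactly the difference the two visual styles are meant to signal.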
Make a Commitment to Glasses
People will use smart glasses for a reason. Maybe they’re “desk-less workers” who want their information in the context of their workplace instead of tied to a fixed location or device. Or maybe their hands are occupied carrying packages, using equipment, or treating patients. For these people, we needed to design a completely hands-free user experience. After all, if you need to swipe and tap with your fingers to interact with your glasses, you might as well use a smartphone or tablet.
We made a commitment right from the beginning to “go all in” on glasses. That meant that every behavior, from calling a coworker to taking a picture, needed to be supported without the need for physical interaction. Even our Settings and Registration interfaces are hands-free. We believe this was the right choice for our customer base – people who may be wearing gloves, who work in sanitary (or unsanitary!) environments, or who simply want the most efficient experience possible.
Provide Convenient Access to Secondary Features
We were presented with a design challenge at a very early stage in the development process. Our Northstar and HUD interfaces worked perfectly together, but the system was still incomplete. What if someone wanted to make a video call, take a photo, or modify user settings? We needed a way to support secondary behaviors quickly and conveniently without detracting from the primary experience.
We first came up with a concept we called the “personal space” – named for the area below the field of view, close to the body, usually reserved for items of personal attention (mobile devices, notebooks, magazines). We figured this would be a natural place for configuration and for accessing additional content. As we experimented with the technology, our thinking evolved, and we learned that the most convenient placement for our menu system was actually above the field of view instead of below it. So we created a truly unique menu interface that is nested just above the user’s normal line of sight. By simply glancing upwards and gazing upon the menu icon, the user is able to open and access a range of hands-free smart glasses features, such as Messages, Contacts, and even a Camera.
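Here is a rough sketch of how a “glance up” trigger like that could work, assuming head pitch is available in degrees. The thresholds and the hysteresis gap are invented for illustration; they are not the values we ship.

```python
REVEAL_PITCH_DEG = 12.0   # assumed pitch above the normal line of sight that reveals the menu
HIDE_PITCH_DEG = 6.0      # assumed pitch to drop back below before the menu hides again


class GlanceUpMenu:
    """Reveals the menu icon when the user glances above their normal line of sight."""

    def __init__(self):
        self.visible = False

    def update(self, head_pitch_deg):
        """head_pitch_deg > 0 means the user is looking upward; call once per frame."""
        if not self.visible and head_pitch_deg > REVEAL_PITCH_DEG:
            self.visible = True
        elif self.visible and head_pitch_deg < HIDE_PITCH_DEG:
            self.visible = False
        return self.visible
```

The gap between the two thresholds keeps the menu from flickering when the user’s head hovers near the trigger angle.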
Design for Varied Environments
Transparent displays present a significant design challenge because you have no way of predicting what is going to be behind them. Some of our users will be in sun-filled work sites while others will be in dark warehouses. Still others may be moving between the two. We needed to design an interface that could handle these varying environments without any action required on the part of the user. We addressed this need by designing a high-contrast interface that always presents bright white against dark blue at strategically varying transparencies. As an example, the reticle at the center of the display (i.e., the cursor for head tracking) is a bright white ring that shows up under any normal conditions. When looking into a bright environment, however, the white ring becomes less apparent and the reticle’s dim blue inner ring visually emerges. Getting these small design details right makes for an interface that can stand up to any condition.
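That layered-contrast strategy can be expressed as a simple reticle spec. The colors, radii, and opacities below are placeholder values, not our actual palette: an opaque white outer ring that reads against dark backgrounds, and a semi-transparent dark-blue inner ring that stays legible when a bright background washes the white out.

```python
# Placeholder two-ring reticle spec (RGBA color, radius and stroke width in pixels).
RETICLE_OUTER = {"rgba": (255, 255, 255, 255), "radius_px": 24, "width_px": 3}  # bright white ring
RETICLE_INNER = {"rgba": (20, 50, 110, 160),   "radius_px": 16, "width_px": 3}  # dim translucent blue ring


def reticle_layers():
    """Return the rings back-to-front; a renderer would stroke each one over the scene."""
    return [RETICLE_INNER, RETICLE_OUTER]
```

Because the two rings sit at opposite ends of the brightness scale, at least one of them remains visible no matter what the world behind the display looks like.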
Keep it Simple
Since this is such an early-stage technology, everyone who uses our product is going to be a novice to some degree. We have been careful not to overwhelm our users with overly complex rules, difficult-to-learn interactions, or unnecessary features. The bottom line when designing this type of system: keep it simple.
We have established basic design patterns within our system that we use repeatedly to create an easily learnable and predictable user experience. We have minimized the number of menu types, buttons, and other interface elements so that people only need to learn a couple of things in order to understand the whole system. We have also limited the amount of text, relying instead on familiar graphical icons and visualizations for fast recognition. Finally, we have adhered to a simple color palette, just a handful of colors that consistently convey specific meanings. The end effect is a perception of simplicity despite the unfamiliarity of this new product paradigm.
Delight
Work is difficult enough for most people. They may face long shifts, hazardous environments, or endless stress. Utility and usability are our top priorities when designing for these people, but we still believe in the importance of creating an enjoyable product experience as well. While designing Skylight, we took opportunities to design interfaces so intuitive that interacting with them is genuinely satisfying. For example, our list views can be scrolled like a wheel simply by looking toward the top or bottom of the list. You have never experienced an interface like this before, so the result is honestly fun. We have also designed subtle animations into the interface, such as info panels that slide in and out or points of light that animate open like clovers. The intended effect is an interface that feels lifelike and interesting to interact with.
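For the curious, here is a minimal sketch of that wheel-style gaze scrolling, assuming the reticle’s vertical position within the list is known. The dead zone and speed values are illustrative, not what Skylight actually uses.

```python
DEAD_ZONE = 0.15   # assumed fraction of the list height around center with no scrolling
MAX_SPEED = 6.0    # assumed maximum scroll speed, in items per second, at the list edges


def scroll_velocity(reticle_y, list_top, list_bottom):
    """Map the reticle's vertical offset from the list center to a signed scroll speed."""
    center = (list_top + list_bottom) / 2.0
    half_height = (list_bottom - list_top) / 2.0
    offset = (reticle_y - center) / half_height        # -1.0 at the top .. +1.0 at the bottom
    if abs(offset) < DEAD_ZONE:
        return 0.0                                     # resting gaze near the center: no movement
    sign = 1.0 if offset > 0 else -1.0
    magnitude = (abs(offset) - DEAD_ZONE) / (1.0 - DEAD_ZONE)
    return sign * magnitude * MAX_SPEED                # items per second, signed by direction
```

Looking slightly past the center does nothing, while looking farther toward either end spins the wheel faster in that direction.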