Even Without AR, Size Remains the Achilles Heel of Smart Glasses
While integrating AR into smart glasses requires many size-consuming components, even normal wearable features take up precious PCB real estate. Now, Facebook says it has re-engineered components to fit in a lightweight frame.
The idea of using smart glasses for augmented reality is not new. The first prototype was developed in 1997 by a research team from Columbia University.
The Columbia University AR system was designed for urban exploration. The user had to wear a backpack and a head-worn display while holding a hand-held display and its stylus. From this prototype, smart glasses gradually evolved into the compact versions we see today.
With the Columbia University AR system, the user needed to wear a backpack, a head-worn display, a hand-held display, and its stylus. Image used courtesy of S. Feiner
While AR smart glasses have been on the market for years, their adoption has been slowed by several consumer- and design-level hang-ups. Some of these major roadblocks include challenging user-device interaction, limited computational power, and short battery life. One of the biggest challenges, however, is achieving a sufficiently small form factor.
Despite these challenges, tech giants like Facebook seem determined to make smart glasses as pervasive as other wearables, like smartwatches. In fact, Facebook recently released "Ray-Ban Stories," which enable wearers to take hands-free, point-of-view photos and videos.
With size being one of the biggest challenges that smart glasses manufacturers face, engineers must find new ways to miniaturize the physical dimensions of the system to enable all-day use.
AR Smart Glasses Call for a Long List of Components
Most developers see the future of smart glasses intertwined with augmented reality interfaces. AR integration, however, means a plethora of components—which seem to stand in the way of small smart glass designs.
For one, in order to superimpose computer-generated information onto the physical objects of the real world, commercial smart glasses (such as Google Glass) place a see-through display in the eyeline of the wearer. At least one camera is required for capturing videos and images. Some smart glasses also include depth cameras or infrared vision, which make the devices relatively bulky and heavy.
With all the demands of modern wearables, smart glasses usually require a long list of hardware components (as pictured in this list of Google Glass tech specs)—a challenge when designers are dealing with small PCB real estate. Image used courtesy of Google
Microphones are also embedded into smart glasses for receiving voice commands and making phone calls. Typical smart glasses are equipped with multiple other sensors, such as accelerometers, gyroscopes, and magnetometers. These sensors allow the device to monitor the wearer's motion—whether stationary, walking, or running. Accelerometers and gyroscopes also stabilize images captured by the camera.
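As a rough illustration, a motion state like stationary, walking, or running could be inferred from how much the accelerometer's acceleration magnitude fluctuates over a window of samples. The sketch below uses made-up samples and illustrative thresholds—not values from any actual smart glasses firmware:

```python
import math

def classify_motion(samples):
    """Classify motion as 'stationary', 'walking', or 'running' from
    the variance of total acceleration magnitude (in units of g)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    # Thresholds are illustrative placeholders, not calibrated values.
    if var < 0.01:
        return "stationary"
    elif var < 0.5:
        return "walking"
    return "running"

# A device at rest reads roughly 1 g of gravity with little variation.
still = [(0.0, 0.0, 1.0)] * 50
print(classify_motion(still))  # stationary
```

A production implementation would also filter out gravity and sensor noise and typically fuse accelerometer and gyroscope data, but the variance-of-magnitude idea captures the basic principle.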
Smart glasses also commonly use GPS to determine the current position of the wearer and support geo-location-based applications, like driving directions.
Example of the hardware components that may appear in smart glasses. Image used courtesy of the Khalifa University of Science
A processor with sufficient computational power is also needed to process the information gathered from the different sensors and produce the required output. Another key part of the system is the battery, which must power the system for a sufficient amount of time.
Consequences of the Form Factor Requirement
Although today’s miniaturized electronic components have made it possible to have very compact smart glasses, there are still some major limitations: only very small displays can be employed, the embedded processors are relatively weak, and the battery life is short.
Moreover, these devices do not have a touchscreen, which users have come to expect of mobile devices and wearables. Instead, smart glasses employ several different interaction methods, such as voice commands, head movement, and hand gesture detection. With these limited interaction methods, however, it can be difficult to use smart glasses for a complicated task. Even simple tasks, like entering a password, might require an inconvenient process.
Google Glass uses a see-through display to show the virtual content. Image used courtesy of Google
For example, one group of researchers proposed a multi-step process for entering a leakage-resilient password: performing a simple gesture on the touchpad, rotating the head slightly, then speaking numbers based on the hidden information shown on the near-eye-display of smart glasses.
Considering these limitations, it's dubious whether AR smart glasses will ever be adopted at the scale of smartphones or smartwatches. However, it seems that specialized smart glasses—including industrial glasses, smart helmets, and sports coaching glasses—have the potential for adoption in certain applications.
Facebook's Ray-Ban Stories is a recent example of specialized smart glasses. The current version of Ray-Ban Stories does not include a display and is not actually designed for AR applications.
These smart glasses are designed to do only a few things:
- Capture photos and video
- Share photos and videos across Facebook's services using a companion app
- Listen to music
- Take phone calls through the near-ear speakers embedded in the arms of the frames
Keeping the number of supported features low has enabled a compact form factor. Adding cameras and the other required electronic components only slightly increased the dimensions of the new glasses compared to the corresponding classic frames.
Ray-Ban Stories are only slightly larger than their corresponding standard frame. Image used courtesy of Lucas Matney and TechCrunch
Even Without AR, Form Factor Is a Challenge
Commenting on the design challenges of their smart glasses' form factor, the Facebook press release reads:
"We had to re-engineer components so that everything—that’s two cameras, a set of micro-speakers, a three-microphone audio array, an optimized Snapdragon processor, a capacitive touchpad, a battery, and more—fit into the smallest possible space and the lightest possible frame."
Ray-Ban Stories have a button on the right arm of the frame that allows users to take a 30-second video or a photo. Photos and videos can also be captured hands-free through voice commands. A hard-wired LED lights up to make other people aware when a wearer is taking a photo or video. The captured photos and videos are post-processed when uploaded to the phone to improve the output of the Ray-Ban Stories' dual 5 MP cameras.
Ray-Ban Stories uses a three-microphone audio array along with beamforming technology and noise-cancellation algorithms to enhance the voice quality for calls and videos. Facebook claims that the calling experience is like what you’d expect from dedicated headphones. This wearable comes with a specially designed case that also acts as a portable charger. A fully charged case gives wearers an additional three consecutive days of glasses use.
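To get a feel for how a multi-microphone array can enhance voice quality, the sketch below implements a basic delay-and-sum beamformer: each channel is time-aligned toward the talker's direction and then averaged, so the voice adds coherently while uncorrelated noise partially cancels. The sample delays, signals, and noise levels here are invented for illustration—Facebook has not published the details of its algorithms:

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Align each mic channel by its integer sample delay, then average.
    Signals arriving from the steered direction add coherently;
    off-axis noise adds incoherently and is attenuated."""
    out = np.zeros(len(mic_signals[0]))
    for sig, d in zip(mic_signals, delays_samples):
        out += np.roll(sig, -d)  # advance the channel by d samples
    return out / len(mic_signals)

# A simulated "voice" sine wave reaching three mics with different
# (hypothetical) delays, plus independent noise on each channel.
rng = np.random.default_rng(0)
t = np.arange(480)
voice = np.sin(2 * np.pi * t / 48.0)
delays = [0, 3, 7]
mics = [np.roll(voice, d) + 0.3 * rng.standard_normal(len(t)) for d in delays]

beamformed = delay_and_sum(mics, delays)

# The averaged output tracks the voice more closely than any single mic.
err_single = np.mean((mics[0] - voice) ** 2)
err_beam = np.mean((beamformed - voice) ** 2)
print(err_beam < err_single)
```

Real beamformers estimate direction-dependent, fractional delays on the fly and combine beamforming with adaptive noise-cancellation filters, but the delay-and-sum structure is the conceptual starting point.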
Featured image used courtesy of Ray-Ban
While small form factor is one of the greatest constraints for smart glasses, there are a number of other design challenges preventing this wearable from being widely adopted. What factors do you see holding these devices back in popularity? Share your thoughts in the comments below.