Leap Motion, a hand tracking company, has just raised $50 million in funding for the continued development of its hand tracking system. Why is there such a focus on gesture control for VR, and where is VR/AR currently being implemented?

Leap Motion's Capital Gain

Despite advances in computing power, one of the major limitations in technology is us and our senses. While we may have computers capable of generating environments in real time, our interaction with those environments is still mostly via keyboard and mouse, and we still largely perceive these generated images on screens with the same dimensional limitations as the first TVs.

While 3D TVs and gesture control do exist, they are still in their infancy. Augmented reality (AR) and virtual reality (VR) are the next step in immersive technology, but they still seem fairly far off at this point. Google tried to enter the augmented reality market with Google Glass but was criticized over privacy concerns (and the less-than-stylish fashion statement). However, such difficulties have not stopped many companies from trying to create products that will spark the VR/AR explosion, which could see many individuals throwing out their keyboards and mice for a VR/AR interface.

One of these companies, Leap Motion, with the help of J.P. Morgan Asset Management, has raised $50 million in funding from multiple investors to pursue its gesture control technology for VR/AR. Such an investment indicates that the industry has taken a genuine interest in changing how we interact with computers. So what does Leap Motion do, and how does it work?

 

Leap Motion is making real progress in the world of gesture control. Image courtesy of Leap Motion.

 

The Leap Motion Platform

The Leap Motion Mobile VR platform combines hardware and software to provide hand tracking, allowing the user to interact with a computer-generated environment with their hands rather than with devices such as keyboards and mice. For comparison, the Oculus Rift may display an environment with perspective, but on its own it lacks this ability to interact.

 

The Leap Motion VR headset. Image courtesy of Leap Motion.

 

The hardware is based around an ARM processor (the Cypress CYUSB3014-BZXI USB controller), two small IR cameras, and several IR emitters. It can be mounted on most headsets to stream image data to a computer. The dual infrared cameras use wide-angle lenses to achieve a nearly 180-degree field of view.

Once the IR images have been captured, the ARM processor (a fast USB controller running at 200 MHz) stores the data in RAM and adjusts it to optimize the resolution. Once this is done, the controller streams the data to a computer, where the Leap Motion Service decodes the images to produce the tracking data (i.e., the positions of the user's hands and fingers).
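Leap Motion's actual decoding algorithm is proprietary, but the basic principle behind recovering 3D positions from two cameras is classic stereo triangulation: a feature (say, a fingertip) appears at slightly different horizontal positions in the two images, and that disparity, together with the focal length and the distance between the cameras, yields depth. The sketch below illustrates the idea only; the function name and all the numbers in the example are illustrative, not Leap Motion specifications.

```python
def stereo_depth_mm(disparity_px: float, focal_px: float, baseline_mm: float) -> float:
    """Depth of a feature matched between two cameras (pinhole camera model).

    disparity_px: horizontal pixel offset of the feature between the two images
    focal_px:     focal length expressed in pixels
    baseline_mm:  distance between the two camera centers
    """
    if disparity_px <= 0:
        raise ValueError("feature must appear at distinct positions in the two images")
    # Similar triangles: depth = focal_length * baseline / disparity
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers: a fingertip shifted 10 px between images,
# a 200 px focal length, and a 40 mm camera baseline:
print(stereo_depth_mm(10.0, 200.0, 40.0))  # → 800.0 (mm from the sensor)
```

Note that depth falls as disparity grows: nearby fingers shift a lot between the two images, distant ones barely at all, which is why a short-range sensor like this can get away with a small camera baseline.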

Interestingly, the Leap Motion software transmits the final VR/AR tracking data over TCP, which allows most internet-enabled devices, including smartphones, to interact with the controller.
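As a rough illustration of what consuming such a network stream might look like, here is a minimal Python sketch in which a stand-in server thread plays the role of the tracking service and a client reads one newline-delimited JSON frame over TCP. The port handling, field names, and message format here are assumptions made for the sketch, not Leap Motion's documented protocol.

```python
import json
import socket
import threading

def serve_one_frame(server: socket.socket) -> None:
    """Stand-in for the tracking service: accept one client, send one frame."""
    conn, _ = server.accept()
    frame = {"id": 1, "hands": [{"type": "right",
                                 "palm_position_mm": [12.5, 180.0, -30.2]}]}
    # Newline-delimited JSON is a simple, language-neutral framing choice.
    conn.sendall((json.dumps(frame) + "\n").encode())
    conn.close()

# Fake "service" listening on an ephemeral local port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_one_frame, args=(server,), daemon=True).start()

# Client side: any TCP-capable device (including a smartphone) could do this,
# which is the appeal of streaming the tracking data over the network.
client = socket.create_connection(("127.0.0.1", port))
raw = client.makefile().readline()
frame = json.loads(raw)
print(frame["hands"][0]["palm_position_mm"])  # → [12.5, 180.0, -30.2]
client.close()
```

The design point is that once tracking data is reduced to plain JSON over a socket, the consumer needs no special driver or SDK, only a network stack.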

 

Other VR/AR Users

Leap Motion is producing products for use with augmented and virtual reality, but who is buying its software? What end applications currently exist for gesture control, and how do they hold up against classic forms of interfacing?

As it turns out, plenty of companies are selling or developing products with gesture control in mind. One example is the DJI Spark, a drone controlled entirely with gestures rather than the controller one would expect to find on an aerial vehicle. The drone, launched from the palm, automatically flies away from the user and can be directed with hand gestures. For example, the user can raise a hand to instruct the drone to fly upward and then make a square with their fingers to have the drone capture a high-tech "selfie" of the user.

However, while this may sound impressive, the gesture control still has room for improvement, as there are some concerns about the drone's responsiveness. If refined, though, such devices could prove critical in situations where a controller is impractical (such as disaster relief and military scenarios).

 

The DJI Spark, a drone controlled via gestures. Screenshot courtesy of DJI Tutorials.

 

Gesture controls are also finding their way into assistive devices, including a glove that can be worn by people who are deaf. A hearing-impaired person may struggle to interact with interfaces such as Siri that rely on spoken commands. The gloves, still at the research stage, track hand movements and translate sign language into commands that can be understood by voice- or text-activated systems.

NASA has also been looking into gesture control and virtual reality for the purpose of training future astronauts. The move stems from the impracticality of handheld controllers during VR training, so NASA aims to replace such controllers with TENZR straps.

 

NASA VR systems would greatly benefit from gesture control. Image courtesy of NASA.

 

The Importance of Gesture Control in VR/AR

Virtual reality and augmented reality are starting to become commercialized (the Oculus Rift, for example), but their uses go far beyond entertainment and gaming. Virtual reality allows users to immerse themselves in simulated environments, a concept that could be applied in many extraordinary circumstances.

For example, students could use such technology to view the world at different points in history, immersing themselves in an environment and living history instead of reading about it. Archaeologists could use VR to view a historical site as it once stood and gain insight into how ancient civilizations would have viewed such monuments.

 

VR and AR would greatly benefit from gesture control. Image courtesy of BagoGames [CC-BY 2.0]

 

For augmented reality to become viable, however, it must be integrated with advanced gesture control. Augmented reality relies on superimposing a generated environment on the user's current surroundings (as Google Glass did), so interaction through gesture control is a no-brainer. Like virtual reality, augmented reality could benefit many industries by letting users change and shape the world around them.

Implementing gesture control allows for true interaction with the virtual environment, which opens up endless possibilities across industries. For example, engineering firms could use a virtual environment to simulate a product while users interact with it as if it were real, revealing potential problems and improvements. The medical field could also benefit greatly, training surgeons in a simulated environment where they could refine their motor skills and gain insight that could normally only be found in real operations.

The applications are nearly endless, and they could profoundly change how we see the world and how we learn.


Where Is Gesture Control Now?

Gesture control is still in its infancy and needs plenty of work before it's ready for true mainstream acceptance. However, this is no reason for potential investors or the industry as a whole to shy away from developing or implementing it. After all, the company that first produces flawless gesture control will have a huge portion of the market in its pocket.

While the mouse has been a great companion for the past few decades, the keyboard has been truly faithful ever since its inception with the typewriter. However, as stated before, computer technology is ever advancing, and yet we continue to use technology that seriously impedes our ability to interact with it. That alone is enough to suggest that we may be on the verge of a transformation that would mark a pivotal moment in computing history.

 
