News

Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms

August 22, 2017 by Heather Hamilton-Post

AI labs race to develop processors that are bigger, faster, stronger.


With major companies rolling out AI chips and smaller startups nipping at their heels, there’s no denying that the future of artificial intelligence is indeed already upon us. While each boasts slightly different features, they’re all striving to provide ease of use, speed, and versatility. Manufacturers are demonstrating more adaptability than ever before, and are rapidly developing new versions to meet a growing demand.

In a marketplace that promises to do nothing but grow, these four are braced for impact.

Qualcomm Neural Processing Engine

The Verge reports that Qualcomm’s processors account for approximately 40% of the mobile market, so their entry into the AI game is no surprise. They’re taking a slightly different approach, though: adapting existing technology that plays to Qualcomm’s strengths. They’ve developed the Neural Processing Engine, an SDK that lets developers optimize their apps to run AI applications on Snapdragon 600 and 800 processors. Ultimately, this integration means greater efficiency.

Image courtesy of Qualcomm.

Facebook has already begun using the SDK to speed up augmented reality filters within its mobile app. Qualcomm’s website says it can also help a device’s camera recognize and detect objects for better shot composition, as well as make on-device post-processing beautification possible. The company promises more capabilities via the virtual voice assistant, and points to broad market applications “from healthcare to security, on myriad mobile and embedded devices,” they write. Qualcomm also touts superior malware protection.

“It allows you to choose your core of choice relative to the power performance profile you want for your user,” said Gary Brotman, Qualcomm’s head of AI and machine learning.

Qualcomm’s SDK works with popular AI frameworks, including TensorFlow, Caffe, and Caffe2.
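Whichever framework is used, the workflow for an on-device engine like this typically begins with a trained model exported to a self-contained format that the vendor’s converter can ingest. The sketch below is illustrative only: the tiny model is a hypothetical stand-in, and the actual conversion into Qualcomm’s runtime format is performed by the SDK’s own tools, which are not shown.

import tensorflow as tf  # TensorFlow 1.x-era API (current in 2017)

# Tiny stand-in network: a single dense layer over a flattened image.
inputs = tf.placeholder(tf.float32, shape=[1, 224 * 224 * 3], name="input")
logits = tf.layers.dense(inputs, units=10, name="classifier")
output = tf.identity(logits, name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Bake the weights into the graph as constants ("freezing"),
    # producing one portable file a mobile converter can consume.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["output"])

with tf.gfile.GFile("model.pb", "wb") as f:
    f.write(frozen.SerializeToString())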

Google Cloud TPU

Google’s AI chip showed up relatively early in the AI game, disrupting what had been a fairly one-sided marketplace. Google has no plans to sell the processor; instead, it is distributing the chip through a new cloud service that lets anyone build and operate software over the internet on hundreds of processors packed into Google data centers, reports Wired.

The chip, called TPU 2.0 or Cloud TPU, is a follow-up to the initial processor that brought Google’s AI services to fruition, though unlike its predecessor it can be used to train neural networks, not just run them. Because the chip is designed for TensorFlow, developers will need to learn a different way of building neural networks, but Google expects that, given the chip’s affordability, users will make the switch. Google has also mentioned that researchers who share their findings with the greater public will receive access for free.

Image courtesy of Google.

Jeff Dean, who leads the Google Brain AI lab, says the chip was needed to train models with greater efficiency. Each device can handle 180 trillion floating-point operations per second, and several connect to form a pod that offers 11,500 teraflops of computing power. That means a training job that previously took a full day on 32 of the best commercially available GPU boards takes only six hours on a portion of a pod.
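As a quick sanity check on those figures, the two numbers together imply roughly how many devices make up a pod; the device count below is inferred from the article’s own numbers, not an official Google spec.

chip_tflops = 180    # teraflops per Cloud TPU device, as reported
pod_tflops = 11500   # teraflops per pod, as reported
print(pod_tflops / chip_tflops)  # ~64 devices per pod (inferred)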

Intel Movidius Neural Compute Stick

Intel offers an AI chip via the Movidius Neural Compute Stick, a USB 3.0 device built around a specialized vision processing unit (VPU). It’s meant to complement the Xeon and Xeon Phi, and it costs only $79.

While it is optimized for vision applications, Intel says that it can handle a variety of DNN applications. They write, “Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.”

Image courtesy of Movidius.

The stick is powered by the same class of VPU found in smart security cameras, AI drones, and industrial equipment. It can run a trained feed-forward convolutional neural network (CNN) built in the Caffe framework, or the user may choose another pre-trained network, Intel reports. The Movidius Neural Compute Stick supports a CNN profiling, prototyping, and tuning workflow; carries power and data over a single USB Type-A port; does not require cloud connectivity; and supports multiple sticks running on the same platform.

From Raspberry Pi to PC, the Movidius Neural Compute Stick can be used with any USB 3.0 platform.
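For a sense of what development against the stick looks like, here is a minimal inference sketch written against the Neural Compute SDK’s Python bindings as published at the time; treat the exact identifiers as assumptions, since they can differ between SDK releases.

import numpy as np
from mvnc import mvncapi as mvnc  # Movidius Neural Compute SDK (v1-style API)

devices = mvnc.EnumerateDevices()      # find attached Compute Sticks
device = mvnc.Device(devices[0])
device.OpenDevice()

with open("graph", "rb") as f:         # network pre-compiled by the SDK's tools
    graph = device.AllocateGraph(f.read())

image = np.zeros((224, 224, 3), np.float16)  # placeholder input; the VPU expects FP16
graph.LoadTensor(image, "user object")       # queue one forward pass on the stick
output, user_obj = graph.GetResult()         # blocking call; returns the DNN output

graph.DeallocateGraph()
device.CloseDevice()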

NVIDIA Tesla V100

NVIDIA was the first to get really serious about AI, but they’re even more serious now. Their new chip, the Tesla V100, is a data center GPU. Reportedly, it made enough of a stir that NVIDIA’s shares jumped 17.8% on the day following the announcement.

Image courtesy of NVIDIA.

The chip stands apart in training, which boils down to enormous numbers of matrix multiplications that conventional designs carry out one scalar multiply-accumulate at a time. The Volta GPU architecture instead multiplies whole rows and columns at once, operating on small matrix tiles in a single step, which speeds up the AI training process.

With 640 Tensor Cores, Volta is five times faster than the previous Pascal architecture, reducing a representative training run from 18 hours to 7.4. It also uses next-generation NVLink high-speed interconnect technology which, according to the website, “enables more advanced model and data parallel approaches for strong scaling to achieve the absolute highest application performance.”
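To make the rows-and-columns idea concrete, here is an illustrative Python comparison (not NVIDIA code): both snippets compute the same 4x4 product, but the first does it one scalar multiply-accumulate at a time, while the second hands the whole tile to a single fused operation, which is the spirit of what a Tensor Core does in hardware on FP16 tiles.

import numpy as np

A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)

# Scalar approach: one multiply-accumulate at a time.
C = np.zeros((4, 4), np.float32)
for i in range(4):
    for j in range(4):
        for k in range(4):
            C[i, j] += float(A[i, k]) * float(B[k, j])

# Tile approach: the entire 4x4 product in one operation.
C_tile = A.astype(np.float32) @ B.astype(np.float32)

assert np.allclose(C, C_tile, atol=1e-2)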


Heard of more AI chips coming down the pipe? Let us know in the comments below!