Yesterday, SiFive, a fabless semiconductor company that designs chips based on RISC-V, announced a new open-source SoC (system-on-chip) development platform based on the RISC-V and NVDLA architectures.
RISC-V is an instruction set architecture (ISA), like x86 or the ARM architecture, that has gained traction in part because it is open source.
What Is NVDLA?
NVDLA (NVIDIA Deep Learning Accelerator), for its part, is an accelerator with a deliberately modular architecture. According to NVIDIA's primer on NVDLA, "NVDLA hardware is comprised of the following components": a convolution core, a single data processor, a planar data processor, a channel data processor, and dedicated memory and data reshape engines. Each component is independently configurable and dedicated to a different task that a given system may or may not require.
Two examples of NVDLA systems. Image courtesy of NVIDIA.
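To make the modularity concrete, here is a hypothetical sketch of the idea; this is not NVIDIA's actual configuration interface, and every name below is invented for illustration. The point is simply that each engine can be included, omitted, or sized independently, so a system only pays for the blocks it needs:

```python
from dataclasses import dataclass, field

# Hypothetical model of an NVDLA-style modular accelerator (illustrative only):
# each engine can be enabled, omitted, or sized on its own.

@dataclass
class EngineConfig:
    enabled: bool = True
    buffer_kib: int = 0  # dedicated SRAM for this engine, if any

@dataclass
class AcceleratorConfig:
    convolution_core: EngineConfig = field(default_factory=lambda: EngineConfig(buffer_kib=512))
    single_data_processor: EngineConfig = field(default_factory=EngineConfig)   # activation functions
    planar_data_processor: EngineConfig = field(default_factory=EngineConfig)   # pooling
    channel_data_processor: EngineConfig = field(default_factory=EngineConfig)  # normalization
    data_reshape_engine: EngineConfig = field(default_factory=EngineConfig)     # transpose/split/merge

    def active_engines(self):
        # Names of the engines this particular configuration includes
        return [name for name, cfg in vars(self).items() if cfg.enabled]

# A minimal inference-only system might drop the reshape engine entirely:
small = AcceleratorConfig(data_reshape_engine=EngineConfig(enabled=False))
print(small.active_engines())
```

Running the sketch lists the four engines the "small" configuration keeps, omitting the reshape engine it disabled.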
NVDLA has been open source for over a year. It has since appeared in various releases, including Arm's Project Trillium and, earlier this month, Marvell's "AI SSD controller proof-of-concept architecture solution"—both projects aim to aid in scaling data management by bringing machine learning into processing.
The IP Problem of Developing Custom SoCs
In an interview with AAC last December, Shafy Eltoukhy, SVP and GM of SiFive's SoC Division, explained why IPs can present such a hurdle for SoC developers: "...you cannot build a chip by itself based on your idea alone. You really need to be able to use third-party IPs with your own IP so that you can differentiate yourself—a large portion of the costs of building an SoC are the third party IPs...By the time you add the costs of all the IPs up, you may end up with a few million dollars just to license IPs from third parties."
SiFive has invested in custom SoC development with its DesignShare program, which aims to help SoC designers select and engage with various IPs without incurring prohibitive costs. By developing and building SoCs with open-source architectures like RISC-V and NVDLA, the company hopes to broaden the program's accessibility and scope.
Just today, SiFive also announced a new addition to the DesignShare program, ASIC Design Services, which will bring CDL (core deep learning) technology to the program.
Coinciding Machine Learning Announcements from NVIDIA
Yesterday, NVIDIA highlighted its new generation of GPUs, the RTX 2000 series. The NVIDIA "Turing chip" has been anticipated in the consumer market for its ability to produce high-quality graphics for applications like gaming. The series is so named because the RTX 2000 GPUs feature what NVIDIA calls "Turing Tensor Cores" (which may sound familiar to those who have read about Google's TPUs, tensor processing units).
Because tensor-based chips allow for more powerful processing (NVIDIA claims its Turing RTX graphics cards are six times more powerful than its previous generation, the Pascal-based GTX series), tensor cores have become something of a buzzword among those looking to bring machine learning and AI capabilities out of the lab and into the consumer market.
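In broad terms, tensor cores accelerate the fused matrix multiply-accumulate operation D = A × B + C, the workhorse of deep learning workloads. A rough NumPy illustration of the arithmetic follows; this is not NVIDIA's API, and the tile size and precisions are typical examples rather than a specification:

```python
import numpy as np

# Tensor cores perform a fused matrix multiply-accumulate, D = A @ B + C,
# typically on small tiles (e.g., 4x4) with reduced-precision inputs (e.g.,
# FP16) and a higher-precision accumulator (e.g., FP32). This sketch shows
# only the arithmetic, not the hardware.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)  # low-precision input
B = rng.standard_normal((4, 4)).astype(np.float16)  # low-precision input
C = np.zeros((4, 4), dtype=np.float32)              # higher-precision accumulator

# One fused tile operation: multiply the FP16 inputs, accumulate in FP32
D = A.astype(np.float32) @ B.astype(np.float32) + C

print(D.shape, D.dtype)
```

Dedicated hardware performs this entire tile operation in one step, which is where the large throughput gains over general-purpose GPU cores come from.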
NVIDIA's accounting of the development of AI, machine learning, and deep learning. Image courtesy of NVIDIA.
Paired with the use of NVDLA in emerging machine learning initiatives, these announcements suggest that NVIDIA aims to be central to the AI revolution as it moves out of the research lab and into hardware.
Do you have experience in SoC development or machine learning? Give us your perspective on this week's news in the comments below.
Featured image courtesy of SiFive.