News

Supporting the AI Boom: Facing the Challenges of Hardware and Deployment

June 11, 2021 by Rushi Patel

AI has been a hot topic recently, especially this week. With all of these efforts coming out, there is a growing need for better hardware, closer collaboration, and easier deployment.

Just this week alone, AI has been buzzing around the news. From Xilinx's new chipset family to Mythic pushing AI farther to the edge, companies are steadily rolling out more and more AI-focused hardware. 

In the past decade, advances in computing hardware have driven a surge in the use of artificial intelligence (AI) across many diverse applications, including home and industrial automation, security, surveillance, healthcare, and autonomous driving.

AI hardware needs to support these intensive computing applications, which are growing at an exponential rate. The hardware architecture will also most likely grow more complex going forward, which could present a challenge for many electrical engineers working on design and testing. As a result, the semiconductor industry is constantly developing solutions such as GPUs, FPGAs, ASICs, and power-management products targeted at AI applications.

Though hardware architecture is a significant challenge when designing AI components, power, as always, is a crucial problem.

 

Challenges in AI Hardware Development

AI hardware typically includes GPUs, FPGAs, ASICs, and supporting power supplies. It must also meet a few essential user requirements: higher load capability (more memory and bandwidth), better speed (higher FLOPS), low power consumption, low latency, and cost-effectiveness, all while remaining sufficiently compact.

If the cost, power consumption, and overall efficiency of this hardware are not good enough, it can undoubtedly defeat its purpose and, in the long term, may stagnate the growth of the technology before it reaches its full potential.

With this in mind, it becomes imperative to consider each of these challenges and requirements to best design hardware that can overcome and surpass these constraints, pushing farther and farther to the edge. 

 

Teaming-up to Tackle AI Edge Computing

Recently, AI chipmaker Hailo announced that it is partnering with Lanner Electronics, a provider of edge computing appliances, to combine its solutions to support the demands of emerging AI applications at the edge. 

Both companies claim to offer low latency, low power, and cost-effective edge AI hardware solutions that could help scale AI and enable mission-critical applications such as video analytics, traffic management, access control, etc.  

With this announcement, the spotlight was mainly on the LEC-2290 and Hailo-8 AI acceleration module. 

 

The LEC-2290 for AI edge computing. Image used courtesy of Lanner

 

The LEC-2290 is built around a Coffee Lake S processor and consumes about 30 W at idle and 122 W at full load.

As for the Hailo-8, the M.2 module is an AI processor that delivers up to 26 TOPS at 2.8 TOPS per watt and uses a proprietary processor architecture that departs from the von Neumann architecture typical of most neural processors.
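
As a rough back-of-the-envelope illustration of what those headline figures imply (a reader's sketch based only on the numbers quoted above, not a vendor specification), dividing peak throughput by efficiency approximates the power the module would draw when running at full throughput:

```python
# Back-of-the-envelope estimate from the figures quoted above (illustrative only,
# not a vendor spec): peak throughput divided by efficiency approximates the
# power the accelerator draws at full throughput.
peak_tops = 26.0                 # peak throughput, tera-operations per second
efficiency_tops_per_watt = 2.8   # stated efficiency, TOPS per watt

implied_power_w = peak_tops / efficiency_tops_per_watt
print(f"Implied power at peak throughput: {implied_power_w:.1f} W")  # roughly 9.3 W
```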

 

The Hailo-8™ M.2 AI acceleration module. Screenshot used courtesy of Hailo

 

Hailo states that the dataflow architecture allows low-power memory access owing to a distributed memory fabric combined with pipeline elements. 

By teaming up and combining their strengths, the two companies make the outlook for improving edge computing with AI hardware look promising.

Another company focusing on compute hardware, but from a different angle, is a startup called Lightelligence.

 

Nanophotonics-based Computing Hardware

Lightelligence, a startup spun out of MIT, is developing nanophotonics-based integrated circuits to address the computing and energy-efficiency bottlenecks of traditional electronic architectures.

Lightelligence notes that, in the optical domain, arithmetic computations are performed with physics instead of with logic-gate transistors that require multiple clock cycles, and more clock cycles mean a longer time to get a result. In addition, the company aims to accelerate certain linear algebra operations to perform quick, power-efficient tasks like those found in artificial neural networks.
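
To make that concrete, here is a minimal sketch (plain NumPy running on a CPU, purely illustrative and not Lightelligence code) of the kind of linear algebra being targeted: a single dense neural-network layer reduces to a matrix-vector multiply plus a nonlinearity, and it is the multiply-accumulate-heavy matrix operation that an optical accelerator would aim to carry out in the photonic domain.

```python
import numpy as np

# Illustrative sketch only: a dense neural-network layer is dominated by a
# matrix-vector product (multiply-accumulate operations). This is the class of
# linear algebra an optical accelerator targets; here it simply runs on the CPU.
rng = np.random.default_rng(seed=0)

W = rng.standard_normal((256, 784))  # layer weights (hypothetical sizes)
b = rng.standard_normal(256)         # layer bias
x = rng.standard_normal(784)         # input activations

z = W @ x + b                        # the matrix-vector multiply: ~256 x 784 MACs
y = np.maximum(z, 0.0)               # ReLU nonlinearity, applied electronically

print(y.shape)                       # (256,)
```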

 

Lightelligence’s optical AI accelerator prototype. Image used courtesy of Lightelligence

 

The company claims that its technology delivers a two-orders-of-magnitude improvement over the state of the art and a three-orders-of-magnitude gain in power efficiency for conventional learning tasks. Although the technology is promising, the demonstration is based on a prototype; it remains to be seen whether it can reach production and how the device performs compared to existing solutions.

Since there is such an industry boom in AI, a question that comes to mind is how difficult it will be to deploy. One company working on a possible solution is NVIDIA.

 

Boosting Confidence in AI Deployment

To address scalability, security, and functionality, and to ensure faster deployment and adoption of AI, NVIDIA and its ecosystem partners have created "NVIDIA-Certified Systems™."

The company claims that this would provide performance-optimized hardware and software solutions to run a broad range of AI workloads. 

The systems include NVIDIA A100, A40, A30, or A10 Tensor Core GPUs along with NVIDIA BlueField-2 DPUs or NVIDIA ConnectX-6 adapters. Adoption by companies such as Advantech, ASUS, Dell Technologies, GIGABYTE, Hewlett Packard Enterprise, Lenovo, and Supermicro significantly broadens the reach of AI applications built on this ecosystem.

 

NVIDIA-Certified Systems for all workloads. Image used courtesy of NVIDIA

 

This ecosystem solution gives AI a path toward standardization and ease of deployment. With so much new technology being developed, it can be difficult for companies to decide how to implement it and build their own systems. At the rate AI is currently moving, more solutions to ease deployment are sure to come in the future.

 

Looking at the Future of AI Hardware

Tech giants such as Alphabet, Amazon, and Facebook have AI at the core of their business, and they actively partner with governments and do business with corporations. 

This strategy has been helping the transformation of healthcare, agriculture, and education while promoting the growth of economies. 

AI is also considered a powerful tool to tackle present and future global-scale challenges, including climate change. Vast amounts of data are generated to train, test, and run through the algorithms that make up AI.

With an exponential increase in data generated through various channels, it is essential to find ways to create energy-efficient hardware and processes that are scalable and manageable to run wide-ranging AI workloads optimally. 

Efforts from the semiconductor industry, startups, and universities seem to be setting an optimistic, aggressive, and favorable outlook for improvements in AI hardware.