The company has so far released few technical details, but it has revealed its vision of DynamIQ technology being used in a range of applications, from consumer devices to cloud computing and, notably, machine learning and artificial intelligence. The next-generation Cortex-A processors built on the DynamIQ architecture are expected to be announced later this year.
Image courtesy of ARM.
The DynamIQ architecture will build on ARM’s big.LITTLE heterogeneous computing configuration, in which powerful CPU cores are paired with more energy-efficient ones. All of the cores have access to the same memory and are “seen” by the operating system as a single processor, allowing tasks to be handed off between cores as needed.
When the system requires more processing power, it can draw on the available “big” cores; when it is handling lighter tasks, the “LITTLE” cores take over. This configuration, first introduced by ARM in 2011, is intended to provide the best of both performance and energy efficiency.
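The scheduling idea described above can be sketched in a few lines. This is a toy illustration only, not ARM’s actual scheduler: the threshold value and the `pick_core` function are hypothetical, chosen just to show the principle of routing heavy work to “big” cores and light work to “LITTLE” cores.

```python
# Toy illustration (not ARM's scheduler): assign a core type to a task
# based on its load, in the spirit of big.LITTLE task migration.

BIG_THRESHOLD = 0.6  # hypothetical load fraction above which a "big" core is used

def pick_core(load: float) -> str:
    """Return the core type a toy scheduler would assign.

    load -- fraction of peak compute the task demands (0.0 to 1.0).
    """
    return "big" if load >= BIG_THRESHOLD else "LITTLE"

print(pick_core(0.9))  # heavy task -> big core
print(pick_core(0.1))  # light task -> LITTLE core
```

In a real system the decision is made by the OS scheduler using runtime load statistics, but the underlying trade-off is the same: performance when needed, efficiency otherwise.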
The innovation in DynamIQ will be its highly customizable and scalable configurations of big.LITTLE processors: up to eight cores per cluster, in any ratio of “big” to “LITTLE” cores. Each cluster can have its own processing and power characteristics, meaning that, on a single device, different clusters can handle different types of tasks to maximize performance. This dynamic balance is what makes the DynamIQ architecture especially interesting for engineers working on machine learning and artificial intelligence applications.
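To get a feel for the flexibility this allows, the possible big/LITTLE splits of a single cluster can be enumerated. The sketch below is purely illustrative (actual configurations offered to licensees are up to ARM and its partners); it simply counts every way to fill a cluster of up to eight cores with some mix of big and LITTLE cores.

```python
# Enumerate the possible big/LITTLE splits of a DynamIQ-style cluster
# of up to eight cores (illustrative only).

def cluster_configs(max_cores: int = 8):
    """Yield (big, little) core-count pairs for every cluster size up to max_cores."""
    for total in range(1, max_cores + 1):
        for big in range(0, total + 1):
            yield big, total - big

configs = list(cluster_configs())
print(len(configs))  # 44 distinct splits for clusters of 1 to 8 cores
print((1, 7) in configs)  # the 1-big + 7-LITTLE layout is among them
```

Even this simple count shows why per-cluster configurability matters: a phone vendor and a server vendor can pick very different ratios from the same architecture.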
It is expected DynamIQ will begin appearing on the market by 2018.
High Configurability and Scalability as the Way Forward
Many companies recognize the essential role of configurable architectures in machine learning and artificial intelligence applications. To keep up with growing computational requirements while minimizing power consumption, reconfigurability provides the flexibility needed to adapt to shifting application demands.
Altera, acquired by Intel in late 2015, has developed deep learning FPGA accelerator packages that allow customers to take advantage of the FPGA platform for machine learning applications. Altera markets the FPGA platform as being efficient at implementing neural networks, which mimic the brain’s structure of interconnected neurons. FPGAs are highly configurable, have high onboard memory bandwidth (8 TBps), and are power efficient, which makes them well suited to real-time computing on large amounts of data.
NVIDIA has taken a slightly different approach, announcing its Tesla P100 data center accelerator for machine learning applications. The accelerator uses the Pascal GPU architecture, which can be scaled via the NVLink interconnect, permitting large-scale server configurations. With its Tesla P100-powered DGX SATURNV ranked as the world’s most efficient supercomputer in late 2016, the company is also focused on increasing performance while improving energy efficiency.
Featured image courtesy of ARM.