Author(s): Geoff Blaber
Nvidia has become a torch-bearer in the world of GPU-based deep learning and artificial intelligence.
This week, at the GPU Technology Conference in San Jose, Nvidia and Arm announced a partnership that aims to make it easier for chip manufacturers to incorporate deep learning capabilities into the next generation of mobile devices and Internet of Things (IoT) products. Arm will integrate Nvidia's open-source Deep Learning Accelerator (NVDLA) architecture into its Project Trillium platform. Although this announcement went largely under the radar at the event, CCS Insight believes it is hugely significant for Arm, Nvidia and the broader industry.
NVDLA brings a host of benefits that should speed up adoption of deep learning inference. It's a free, open-source architecture that addresses the need for a standard, consistent way to design deep learning inference accelerators. NVDLA is supported by Nvidia's suite of developer tools, including a forthcoming version of TensorRT, an inference optimizer and runtime that enables the portability of neural network models across platforms.
The intention is to help scale the market by supporting quick implementation, a move underpinned by Nvidia's announcement at the 2017 conference that it would make NVDLA open-source. The source code was made available in the fourth quarter of 2017. The company predicts this effort will help IoT chipset makers build machine learning into their products and chip designs for edge devices.
In February 2018, Arm announced its Project Trillium, a series of scalable processor designs including object detection variants that are optimized for machine learning (see Mobile World Congress 2018: 5G and Semiconductors). The collaboration with Nvidia should simplify the integration of artificial intelligence into IoT chipsets for a broad array of connected devices.
It's notable that Arm's parent company, SoftBank, invested $4 billion in 2017 to amass a 5 percent stake in Nvidia. Integration of the latter's artificial intelligence tools with Project Trillium means that a vast range of IoT and other edge devices powered by Arm's technology stand to benefit from Nvidia's leadership and performance optimizations in artificial intelligence.
The move also reinforces Nvidia's position. Although its GPUs are dominant in training and inference, it lacks the same strength in edge devices, where Arm is king. Nvidia's Jetson platform has momentum in specific segments of robotics, and its Metropolis platform in smart cities, but the NVDLA partnership substantially extends the reach of Nvidia's technology, including TensorRT.
This is a hugely positive development for the entire industry. The complete ecosystem now benefits from performance, flexibility and consistency in deep learning inference from the data centre to the network edge. Scale and rate of learning are crucial to the continued advancement of artificial intelligence, and workflow consistency from the data centre to edge devices is a very significant enabler.
CCS Insight will publish a full report from the GPU Technology Conference 2018 next week.