Exploiting Data Characteristics in the Design of Accelerators for Deep Learning
Author: Patrick H. Judd
Release: 2019
OCLC: 1334506246
Book excerpt:

The recent "Cambrian explosion" of Deep Learning (DL) algorithms, in concert with the end of Moore's Law and Dennard scaling, has spurred interest in custom hardware accelerators for DL algorithms. While DL has progressed quickly thanks in part to the abundant, efficient parallel computation provided by general-purpose graphics processing units, newer DL algorithms demand even higher compute density and efficiency. Furthermore, DL applications in the mobile and embedded domains demand the energy efficiency of special-purpose hardware.

DL algorithms are dominated by large matrix-vector product computations, making them ideal targets for wide Single Instruction Multiple Data (SIMD) architectures. For the most part, efficiently mapping the structure of these computations to hardware is straightforward. Building on such designs, this thesis examines the data characteristics of these computations and proposes hardware modifications that exploit them for performance and energy efficiency.

Specifically, this thesis examines the sparsity and precision requirements of deep convolutional neural networks, which comprise multiple layers of matrix-vector product computations. We propose a profiling method that finds per-layer reduced-precision configurations while maintaining high classification accuracy. Following this, we propose three accelerator designs that build on top of the state-of-the-art DaDianNao accelerator:

1) Proteus exploits the reduced-precision profiles by adding a lightweight memory compression layer, saving energy on memory access and communication and enabling larger networks within a fixed memory budget.
2) Cnvlutin exploits the presence of zero and near-zero values in the inter-layer data by applying sparse compression to the data stream while maintaining efficient utilization of the SIMD accelerator's wide memory and compute structure.
3) Stripes exploits the reduced-precision profiles for performance by processing data bit-serially, compensating for serial latency by exploiting the abundant parallelism of the convolution operation.

All three designs exploit approximation, in the form of reduced precision and computation skipping, to improve energy efficiency and/or performance while maintaining high classification accuracy. By approximating more aggressively, these designs can also dynamically trade off accuracy for further improvements in performance and energy.
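The per-layer precision profiling idea can be pictured with a small sketch. This is a hypothetical Python outline under assumed names, not the thesis's actual procedure: `evaluate_accuracy`, the 2..16-bit search range, and the greedy one-layer-at-a-time policy are illustrative assumptions.

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize x to a signed fixed-point grid with `bits` bits.
    A simplification: a real profiler would also choose the binary point."""
    scale = 2 ** (bits - 1) - 1
    m = float(np.max(np.abs(x))) or 1.0
    return np.round(x / m * scale) / scale * m

def profile_precisions(layers, evaluate_accuracy, tolerance=0.01):
    """Greedily find, per layer, the fewest bits that keep accuracy within
    `tolerance` of the full-precision baseline. `layers` maps layer names
    to weight arrays; `evaluate_accuracy` runs the (hypothetical) network
    and returns classification accuracy."""
    baseline = evaluate_accuracy(layers)
    profile = {}
    for name, weights in layers.items():
        for bits in range(2, 17):           # try 2..16-bit fixed point
            trial = dict(layers)
            trial[name] = quantize(weights, bits)
            if evaluate_accuracy(trial) >= baseline - tolerance:
                profile[name] = bits
                break
        else:
            profile[name] = 16              # fall back to full width
    return profile
```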
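Proteus's memory compression can be thought of as bit-packing: values stored at, say, 5 bits each instead of a fixed 16. A minimal software analogue, with the packing layout assumed purely for illustration:

```python
def pack(values, bits):
    """Pack unsigned integers, each `bits` wide, into one contiguous
    bitstream (returned as a Python int plus its length in bits)."""
    stream, pos = 0, 0
    for v in values:
        stream |= (int(v) & ((1 << bits) - 1)) << pos
        pos += bits
    return stream, pos

def unpack(stream, nbits_total, bits):
    """Inverse of pack: recover the fixed-width values from the stream."""
    mask = (1 << bits) - 1
    return [(stream >> p) & mask for p in range(0, nbits_total, bits)]

# Eight 5-bit values occupy 40 bits instead of 8 * 16 = 128.
vals = [3, 17, 0, 31, 8, 9, 1, 22]
stream, n = pack(vals, bits=5)
assert unpack(stream, n, bits=5) == vals
```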
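Cnvlutin's zero-skipping can be illustrated with a sparse (value, offset) encoding of the activation stream, so that only nonzero activations reach the multipliers. This is a simplified software analogue of the idea, not the hardware pipeline itself:

```python
def encode_nonzero(activations):
    """Encode an activation stream as (value, index) pairs, dropping zeros.
    The index plays the role of the hardware offset used to fetch the
    matching weight."""
    return [(a, i) for i, a in enumerate(activations) if a != 0]

def sparse_dot(encoded, weights):
    """Dot product over the compressed stream: work is proportional to the
    number of nonzero activations, not the full vector length."""
    return sum(a * weights[i] for a, i in encoded)

acts = [0, 3, 0, 0, 5, 0, 2, 0]
w    = [1, 2, 3, 4, 5, 6, 7, 8]
assert sparse_dot(encode_nonzero(acts), w) == sum(a * b for a, b in zip(acts, w))
```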
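Stripes' bit-serial processing has a compact arithmetic analogue: a dot product sum(a_i * w_i) equals the sum over bit positions b of 2^b times the partial sum of the weights whose activation has bit b set, so execution time scales with activation precision rather than a fixed word width. A minimal sketch for unsigned fixed-point activations:

```python
def bitserial_dot(activations, weights, bits):
    """Compute sum(a*w) by streaming activations one bit per step:
    acc = sum over b of 2^b * sum_i bit_b(a_i) * w_i. Runtime scales
    with `bits`, which is how reduced precision becomes speedup."""
    acc = 0
    for b in range(bits):
        partial = sum(w for a, w in zip(activations, weights) if (a >> b) & 1)
        acc += partial << b
    return acc

acts = [5, 3, 0, 7]   # 3-bit unsigned activations
w    = [2, 4, 6, 8]
assert bitserial_dot(acts, w, bits=3) == sum(a * b for a, b in zip(acts, w))
```

The serial latency of `bits` steps per value is what the design recovers through the convolution's abundant parallelism: many such bit-serial lanes run side by side.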