Publication: Towards General-Purpose Neural Network Computing
Date
2015
Citation
Eldridge, Schuyler, Amos Waterland, Margo Seltzer, Jonathan Appavoo, and Ajay Joshi. 2015. "Towards general-purpose neural network computing." In Proceedings of the 2015 International Conference on Parallel Architecture and Compilation (PACT), San Francisco, CA, October 18-21, 2015: 99-112. doi:10.1109/PACT.2015.21
Abstract
Machine learning is becoming pervasive: decades of research in neural network computation are now being leveraged to learn patterns in data and to perform computations that are difficult to express using standard programming approaches. Recent work has demonstrated that custom hardware accelerators for neural network processing can outperform software implementations in both performance and power consumption. However, there is neither an agreed-upon interface to neural network accelerators nor a consensus on neural network hardware implementations. We present a generic set of software/hardware extensions, X-FILES, that allow for the general-purpose integration of feedforward and feedback neural network computation in applications. The interface is independent of the network type, configuration, and implementation. Using these proposed extensions, we demonstrate and evaluate an example dynamically allocated, multi-context neural network accelerator architecture, DANA. We show that the combination of X-FILES and our hardware prototype, DANA, enables generic support and increased throughput for neural-network-based computation in multi-threaded scenarios.
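To make the abstract's central idea concrete, the following is a minimal sketch in C of a request interface that hides the network's topology behind an opaque configuration handle, in the spirit of an implementation-independent accelerator interface. The names (nn_config, nn_transaction, nn_request) are hypothetical illustrations, not the actual X-FILES API, and the "accelerator" here is simply emulated in software as a single-hidden-layer feedforward network.

/* Hypothetical, illustrative sketch only -- NOT the X-FILES interface.
 * The caller submits inputs against an opaque configuration handle and
 * receives outputs without knowing the network's topology or backend. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int in_dim, hid_dim, out_dim;
    const double *w1, *w2;   /* opaque, implementation-defined configuration */
} nn_config;

typedef struct {
    const nn_config *cfg;
    double *out;             /* filled in when the transaction completes */
} nn_transaction;

/* Submit an input vector to a network "context"; here the evaluation is
 * emulated in software rather than dispatched to hardware. */
static nn_transaction nn_request(const nn_config *cfg, const double *in) {
    nn_transaction t = { cfg, malloc(sizeof(double) * cfg->out_dim) };
    double *hid = malloc(sizeof(double) * cfg->hid_dim);
    for (int j = 0; j < cfg->hid_dim; j++) {
        double s = 0.0;
        for (int i = 0; i < cfg->in_dim; i++)
            s += cfg->w1[j * cfg->in_dim + i] * in[i];
        hid[j] = tanh(s);    /* hidden-layer activation */
    }
    for (int k = 0; k < cfg->out_dim; k++) {
        double s = 0.0;
        for (int j = 0; j < cfg->hid_dim; j++)
            s += cfg->w2[k * cfg->hid_dim + j] * hid[j];
        t.out[k] = s;
    }
    free(hid);
    return t;
}

int main(void) {
    /* A 2-2-1 network with arbitrary weights, purely for illustration. */
    const double w1[] = { 0.5, -0.3, 0.8, 0.1 };
    const double w2[] = { 1.0, -1.0 };
    nn_config cfg = { 2, 2, 1, w1, w2 };
    double in[] = { 0.7, 0.2 };
    nn_transaction t = nn_request(&cfg, in);
    printf("output = %f\n", t.out[0]);
    free(t.out);
    return 0;
}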
Terms of Use
This article is made available under the terms and conditions applicable to Open Access Policy Articles (OAP), as set forth at Terms of Service