BranchyNet: Fast inference via early exiting from deep neural networks

Title: BranchyNet: Fast inference via early exiting from deep neural networks
Author: Teerapittayanon, Surat; McDanel, Bradley; Kung, H. T.

Note: Order does not necessarily reflect citation order of authors.

Citation: Teerapittayanon, Surat, Bradley McDanel, and H. T. Kung. 2016. "BranchyNet: Fast inference via early exiting from deep neural networks." In 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 2464–2469.
Access Status: Full text of the requested work is not available in DASH at this time ("dark deposit").
Abstract: Deep neural networks are state-of-the-art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network.
Published Version: doi:10.1109/ICPR.2016.7900006
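The early-exit mechanism described in the abstract can be illustrated with a minimal sketch. The paper gates each side branch on the entropy of its softmax output (a low-entropy prediction is treated as confident enough to exit early); the toy branch functions and thresholds below are hypothetical stand-ins, not the paper's trained classifiers.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def branchy_infer(x, branches, thresholds):
    """Run x through a sequence of branch classifiers (earliest first).

    Each branch maps the input to class logits. If the entropy of a
    branch's softmax output falls below that branch's threshold, return
    its prediction immediately (early exit); otherwise fall through to
    the final branch's prediction.
    """
    for branch, t in zip(branches, thresholds):
        p = softmax(branch(x))
        if entropy(p) < t:
            return int(np.argmax(p)), True   # exited early
    return int(np.argmax(p)), False          # used the full network

# Hypothetical branches for illustration only:
uncertain = lambda x: np.array([1.0, 1.1, 0.9])  # near-uniform logits
confident = lambda x: np.array([9.0, 0.1, 0.1])  # strongly peaked logits
```

With an entropy threshold of 0.1 at each branch, an input whose early branch is uncertain falls through, while a confident branch triggers an early exit: `branchy_infer(None, [uncertain, confident], [0.1, 0.1])` returns `(0, True)`. In the real architecture, earlier branches handle the easy majority of samples, which is where the inference-time savings come from.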
