Publication: Sparse-coded net model and applications
Date
2016-09
Publisher
IEEE
Citation
Gwon, Youngjune, Miriam Cha, William Campbell, H. T. Kung, and Charlie K. Dagli. "Sparse-coded net model and applications." In Proceedings of the 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6. IEEE, 2016.
Abstract
As an unsupervised learning method, sparse coding can discover high-level representations for an input in a large variety of learning problems. In semi-supervised settings, sparse coding is used to extract features for a supervised task such as classification. While sparse representations learned from unlabeled data independently of the supervised task perform well, we argue that sparse coding should also be built as a holistic learning unit that optimizes the supervised task objective more explicitly. In this paper, we propose the sparse-coded net, a feedforward model that integrates sparse coding and task-driven output layers, and describe its training methods in detail. After pretraining a sparse-coded net via semi-supervised learning, we optimize its task-specific performance with a novel backpropagation algorithm that can traverse nonlinear feature pooling operators to update the dictionary. The sparse-coded net can therefore be applied to supervised dictionary learning. We evaluate the sparse-coded net on classification problems in sound, image, and text data. The results confirm a significant improvement over semi-supervised learning as well as superior classification performance compared with deep stacked autoencoder and GMM-SVM pipelines in small- to medium-scale settings.
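
To make the described architecture concrete, the following is a minimal sketch of a sparse-coded net forward pass: sparse coding of input patches against a dictionary, a nonlinear feature pooling stage, and a task-driven softmax output layer. The solver choice (ISTA), the pooling operator (max pooling), and all function names, shapes, and parameters here are illustrative assumptions, not the authors' implementation or training procedure.

    # Sketch of a sparse-coded net forward pass (assumptions: ISTA solver,
    # max pooling, softmax output layer; not the paper's actual code).
    import numpy as np

    def ista(D, x, lam=0.1, n_iter=100):
        """Sparse-code x against dictionary D (columns = atoms) via ISTA."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ z - x)
            u = z - grad / L
            z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
        return z

    def forward(D, W, b, patches):
        """Sparse-code each patch, max-pool the codes, apply a softmax layer."""
        Z = np.stack([ista(D, p) for p in patches])   # (n_patches, n_atoms)
        h = Z.max(axis=0)                             # nonlinear (max) pooling
        logits = W @ h + b
        e = np.exp(logits - logits.max())
        return e / e.sum()                            # class probabilities

    # Toy usage with random data and an (unlearned) random dictionary.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
    W, b = rng.standard_normal((10, 256)) * 0.01, np.zeros(10)
    patches = rng.standard_normal((8, 64))
    print(forward(D, W, b, patches))

In the paper's supervised stage, the task loss at the output layer is backpropagated through the pooling operator to update the dictionary D as well as the output weights; the sketch above only shows the feedforward computation that such training would differentiate.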