Performance Gains in Conjugate Gradient Computation with Linearly Connected GPU Multiprocessors
Published Version
https://www.usenix.org/conference/hotpar12/performance-gains-conjugate-gradient-computation-linearly-connected-gpu
Citation
Stephen J. Tarsa, Tsung-Han Lin, and H.T. Kung. 2012. Performance gains in conjugate gradient computation with linearly connected GPU multiprocessors. Proceedings of the 4th USENIX Workshop on Hot Topics in Parallelism (HotPar'12), June 7-8, 2012, Berkeley, CA: 1-7.
Abstract
Conjugate gradient is an important iterative method used for solving least squares problems. It is compute-bound and generally involves only simple matrix computations. One would expect that such computation could be fully parallelized on the GPU architecture with multiple Stream Multiprocessors (SMs), each consisting of many SIMD processing units. While implementing a conjugate gradient method for compressive sensing signal reconstruction, we have noticed that a large speed-up from parallel processing is actually infeasible due to the high I/O cost between SMs and GPU global memory. We have found that if SMs were linearly connected, we could gain a 15x speedup by loop unrolling. We conclude that adding these relatively inexpensive neighbor connections for SMs can significantly enhance the applicability of GPUs to a large class of similar matrix computations.
Terms of Use
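For readers unfamiliar with the method, the iteration the abstract refers to can be sketched in a few lines. The following is a minimal CPU-side illustration of conjugate gradient for a symmetric positive-definite system, not the authors' GPU implementation; function and variable names here are illustrative only.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A.

    Each iteration performs one matrix-vector product plus a few
    vector updates -- the simple matrix computations the abstract
    mentions, which on a GPU would be split across SMs.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # conjugate new direction
        rs_old = rs_new
    return x
```

For a least squares problem min ||A x - b||, the same routine can be applied to the normal equations (A^T A) x = A^T b, since A^T A is symmetric positive-definite for full-rank A.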
This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP
Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:11859330
Collections
- FAS Scholarly Articles [18292]