A Parallelized Implementation of RaptorQ Using NVIDIA CUDA
Citation: Huang, De Zhi. 2018. A Parallelized Implementation of RaptorQ Using NVIDIA CUDA. Master's thesis, Harvard Extension School.
Abstract: The recent increase in users of cloud computing and internet technologies is causing the demand for data to explode. Data loss during transmission often leads to delay or corruption of data packets. For example, Cisco predicts that consumer video traffic will dominate other types of traffic by 2019, taking 80% of the global market. Video streaming can be severely affected by as little as 5% packet loss, causing glitches, dropped frames, and video tearing. A back channel is commonly needed to relay loss information back to the sender so that lost data can be retransmitted. Since the sender has to keep track of the status of each of its clients, this is considered an expensive option. With recent advances in computer hardware, Forward Error Correction (FEC) algorithms have become a viable and economical option for protecting data against loss without a back channel.
The IETF RFC 6330 RaptorQ Forward Error Correction algorithm has gained much interest in research and practice in recent years. Its linear runtime and high data-recovery probability make it an appealing solution. The RFC uses a patented technique called inactivation decoding (the ID method), which combines belief propagation with Gaussian elimination to attain linear runtime for both the encoder and the decoder. Despite its benefits, the ID method is not well suited to throughput-oriented hardware such as the general-purpose Graphics Processing Unit (GPU). This project develops a highly parallelized implementation of the RaptorQ encoder and decoder for the GPU and compares its performance against an open-source version. Although the proposed version did not outperform the ID method, this is mainly due to the limitations of the hardware used; it is still fast enough to protect data in real time, and it could surpass the CPU implementation on newer GPUs.
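To make the abstract's terms concrete, the sketch below illustrates the belief-propagation ("peeling") phase that inactivation decoding combines with Gaussian elimination, on a toy XOR fountain code. This is an illustrative assumption of mine, not the thesis's implementation or RFC 6330's actual symbol structure: symbols are single bytes, each received symbol is an XOR of a subset of the source symbols, and the decoder repeatedly releases any equation with exactly one unknown source.

```python
K = 4  # number of source symbols (toy value, not an RFC 6330 parameter)

def peel(equations, known=None):
    """Peeling (belief-propagation) decode.

    equations: list of (index_set, xor_value) pairs, where xor_value is
    the XOR of the source symbols named in index_set. Returns a dict
    mapping source index -> recovered byte. Decoding stalls (and real
    inactivation decoding would fall back to Gaussian elimination) if
    no equation of degree one remains.
    """
    known = dict(known or {})
    progress = True
    while progress and len(known) < K:
        progress = False
        for idx_set, value in equations:
            unknown = [i for i in idx_set if i not in known]
            if len(unknown) == 1:            # degree-one equation: release it
                i = unknown[0]
                v = value
                for j in idx_set:            # subtract already-known symbols
                    if j != i:
                        v ^= known[j]
                known[i] = v
                progress = True
    return known

# Example: sources 0 and 3 arrive intact; two repair symbols cover the rest.
src = [0x11, 0x22, 0x33, 0x44]
eqs = [({0}, src[0]), ({3}, src[3]),
       ({0, 1}, src[0] ^ src[1]), ({1, 2}, src[1] ^ src[2])]
recovered = peel(eqs)
assert [recovered[i] for i in range(K)] == src  # all four sources rebuilt
```

The GPU-unfriendliness mentioned above comes from exactly this loop: each release depends on the previous one, so the peeling schedule is inherently sequential, whereas a GPU prefers many independent XOR operations per step.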
Citable link to this page: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37364551