Publisher: The University of Arizona.
Description: Group project with Kray Althaus, Josue Ortiz, Kevin Siruno, and Sebastian Thiem.
Abstract: Demodulation of high-speed, wide-bandwidth waveforms has typically been performed by application-specific integrated circuits (ASICs) or large central processing unit (CPU) clusters. However, development times for these technologies can be long and expensive, especially in the case of ASICs. While general-purpose processors are flexible and easier to program, they often lack the throughput required for high-speed waveform demodulation. Graphics processing units (GPUs) open new avenues for flexible demodulators that can be developed quickly, easily modified, and easily maintained. This project seeks to harness the power of GPU parallel processing to increase effective throughput while maintaining a low implementation loss. To do this, our design uses an industry-leading GPU development environment, CUDA, and the highest-throughput GPU on the market, the NVIDIA RTX 2080, to optimize the repetitive operations performed during demodulation. Our system exploits these operations through parallelization while maintaining a modest implementation loss. The resulting design achieves a throughput of 132 Mb/s, a large speed-up over the 2 Mb/s of our CPU control system, with a bit error rate within 1 dB of Shannon's theoretical limit for an 8-phase shift-keying (8-PSK) detector.
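The abstract notes that demodulation is dominated by repetitive, per-sample operations that parallelize well. As a minimal sketch of why this is so, the hypothetical snippet below (not the project's actual CUDA kernel) performs hard-decision 8-PSK demapping: every received sample is compared against the eight constellation points independently, so each sample could be handled by its own GPU thread. NumPy vectorization stands in for the CUDA launch here.

```python
import numpy as np

M = 8  # 8-PSK: eight symbols evenly spaced on the unit circle
constellation = np.exp(2j * np.pi * np.arange(M) / M)

def demod_8psk(rx: np.ndarray) -> np.ndarray:
    """Return the index of the nearest 8-PSK constellation point
    for each received complex sample (independent per sample,
    hence trivially data-parallel)."""
    # Distance from every sample to every symbol: shape (N, 8)
    dists = np.abs(rx[:, None] - constellation[None, :])
    return np.argmin(dists, axis=1)

# Round trip with no channel noise: indices are recovered exactly
tx_syms = np.array([0, 3, 7, 5, 1])
rx = constellation[tx_syms]
print(demod_8psk(rx))  # [0 3 7 5 1]
```

In a CUDA implementation, the distance computation and arg-min for each sample would run in one thread, with thousands of samples demapped concurrently; the per-sample independence is what makes the speed-up over a serial CPU loop possible.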