Memory-constrained vectorization and scheduling of dataflow graphs for hybrid CPU-GPU platforms
Research output: Contribution to journal › Article › Scientific › peer-review
Journal: ACM Transactions on Embedded Computing Systems
Publication status: Published - 1 Feb 2018
Publication type: A1 Journal article-refereed
The increasing use of heterogeneous embedded systems with multi-core CPUs and Graphics Processing Units (GPUs) presents important challenges in effectively exploiting pipeline, task, and data-level parallelism to meet the throughput requirements of digital signal processing applications. Moreover, in the presence of system-level memory constraints, hand optimization of code to satisfy these requirements is inefficient and error-prone, and can therefore greatly slow down development or leave processing resources highly underutilized. In this article, we present vectorization and scheduling methods that effectively exploit multiple forms of parallelism for throughput optimization on hybrid CPU-GPU platforms, while conforming to system-level memory constraints. The methods operate on synchronous dataflow representations, which are widely used in the design of embedded systems for signal and information processing. We show that our novel methods can significantly improve system throughput compared to previous vectorization and scheduling approaches under the same memory constraints. In addition, we present a practical case study of applying our methods to significantly improve the throughput of an orthogonal frequency division multiplexing receiver system for wireless communications.
- Dataflow models, Design optimization, Heterogeneous computing, Signal processing systems, Software synthesis
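To make the synchronous dataflow (SDF) setting concrete, the sketch below illustrates two standard ingredients the abstract relies on: solving the SDF balance equations for a repetition vector, and picking a vectorization (batching) factor subject to a memory budget. This is a minimal, hypothetical illustration of classical SDF concepts, not the article's actual algorithm; the graph, rates, `mem_limit`, and the simple per-edge buffer model are all assumptions made for the example.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

# Each edge (u, v, p, c) means actor u produces p tokens per firing and
# actor v consumes c tokens per firing on that edge.

def repetition_vector(edges, actors):
    """Solve the SDF balance equations q[u]*p == q[v]*c for a connected
    graph, returning the smallest positive integer firing counts."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for (u, v, p, c) in edges:
            if u in q and v not in q:
                q[v] = q[u] * p / c
                changed = True
            elif v in q and u not in q:
                q[u] = q[v] * c / p
                changed = True
    # Scale the rational solution to the smallest integer vector.
    lcm_den = reduce(lambda a, b: a * b // gcd(a, b),
                     (q[a].denominator for a in actors))
    return {a: int(q[a] * lcm_den) for a in actors}

def buffer_cost(edges, q, factor):
    """Token capacity needed when every actor is vectorized by `factor`
    (assumed model: each edge holds one full batch of produced tokens)."""
    return sum(q[u] * p * factor for (u, v, p, c) in edges)

# Hypothetical example graph: src --(2:3)--> filt --(1:2)--> sink
actors = ["src", "filt", "sink"]
edges = [("src", "filt", 2, 3), ("filt", "sink", 1, 2)]
q = repetition_vector(edges, actors)
print(q)  # {'src': 3, 'filt': 2, 'sink': 1}

# Greedily choose the largest vectorization factor that fits the budget;
# larger factors amortize scheduling overhead but need more buffer memory.
mem_limit = 50  # assumed memory budget, in tokens
f = 1
while buffer_cost(edges, q, f + 1) <= mem_limit:
    f += 1
print(f, buffer_cost(edges, q, f))  # 6 48
```

The memory/throughput trade-off shown here (bigger batches cost more buffer space) is the core tension the article's methods navigate, though their actual formulation targets hybrid CPU-GPU scheduling and is far more sophisticated than this linear search.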