Program Transformations and Sparse Computations in Performance Portability Libraries
Author
Neth, Brandon
Issue Date
2024
Advisor
Strout, Michelle
Lowenthal, David
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
High performance computing is an important tool in domains including climate modeling, drug discovery, and, recently, generative AI. Application performance and developer productivity are both important considerations when developing these codes. However, optimizing a program's performance for one machine can inadvertently harm its performance on another. With the ever-growing diversity of computing hardware, it has become more difficult to write code that performs well on multiple machines, that is, to write code that is performance portable. A similar phenomenon occurs with changes in input data and algorithm choice. Often, developers must maintain multiple versions of their application, one tuned for each system it will run on. Performance portability libraries address this problem by separating the description of a computation from its optimization. Using these libraries, optimizations like tiling, parallelization, and data layout transformations can be applied quickly and portably. Unfortunately, they leave out important opportunities for optimizations applied across multiple loops. A programmer who wishes to improve data reuse through loop fusion or inter-loop layout changes cannot do so portably. Similarly, for codes using sparse data structures, the limited available abstractions force code to be format-dependent. Thus, the performance improvements of changing sparse formats come at the cost of rewriting entire kernels for the new format. This dissertation remedies these problems by introducing abstractions for cross-loop schedule and data transformations, along with format-independent abstractions for describing sparse computations, into a representative performance portability library, RAJA. First, I introduce an interface for cross-kernel scheduling transformations through the RAJALC framework. The framework uses runtime symbolic evaluation to partially automate the transformations and ensure their correctness.
On average, this framework requires about a quarter as many code changes while achieving up to 98% of the performance improvement of a hand-implemented transformation. Second, I build on the symbolic evaluation capabilities to support as-automated-as-desired format transformations. Further, I augment RAJA's iteration space capabilities to support triangular iteration spaces. As with the schedule transformations, performance improvements are achieved with significantly fewer code changes. Finally, I develop prototype support for the format-independent description of sparse computations. Because the description of the computation is independent of the sparse format, changing sparse formats becomes as simple as it is for dense formats. The hypothesis was that because the approach to separating the sparse format from the computation description has the same algorithmic complexity as a hand-written sparse loop nest, performance would be comparable. However, evaluation shows significant runtime overhead, caused by checks the implementation must perform in the innermost loops and by other constant-time work. I conclude that more significant changes to performance portability library interfaces will be needed to support the tighter coupling between data and loops in general sparse computations. Still, the current abstractions can efficiently support cross-loop scheduling and data transformations for dense codes.
Type
Electronic Dissertation
text
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Computer Science