Understanding and Mitigating Network Interference on High-Performance Computing Systems
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
On most high-performance computing platforms, concurrently executing jobs share network resources. This sharing can lead to inter-job network interference, which can have a significant impact on the performance of communication-intensive applications. In this dissertation we focus on understanding and mitigating inter-job network interference on systems built with the fat-tree topology, a network architecture that is currently deployed in many of the top supercomputers in the world.

We first analyze network congestion caused by multi-job workloads on Cab, a production fat-tree-based system, and establish a regression model that relates network hotspots to application performance degradation. The model shows that the typical routing strategy for fat-tree networks is ineffective at balancing network traffic and mitigating interference. We propose an alternative adaptive routing strategy, which we call adaptive flow-aware routing. We implement our strategy on Cab, and tests show up to a 46% improvement in job run time compared to default routing.

However, any reactive, routing-based approach to mitigating inter-job interference cannot guarantee low worst-case interference. A better approach, in that it completely eliminates the interference, is to implement scheduling policies that proactively enforce network isolation for every job. Existing schedulers that allocate isolated partitions lower system utilization, which creates a barrier to adoption. Accordingly, we design and implement Jigsaw, a new scheduling approach for three-level fat-trees that overcomes this barrier by explicitly allocating nodes and links to jobs in such a way that system fragmentation is reduced while full bandwidth is guaranteed to each job. This is made possible by constraints on node and link allocation that we develop and prove are necessary for full partition bandwidth. Jigsaw typically achieves system utilization of 95-96%, within a few percentage points of a standard scheduler's utilization. In scenarios where jobs perform better without interference, Jigsaw typically leads to lower job turnaround times and higher throughput than traditional job scheduling; in the worst-case scenario where jobs do not, these metrics are typically only 5-8% worse.

Thus, in this dissertation we contribute two new strategies that help mitigate (or even eliminate) inter-job network interference. Both strategies can improve the performance of applications run on modern fat-tree-based HPC clusters.
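To make the contrast with default routing concrete, the following is a minimal, hypothetical sketch (not the dissertation's implementation) of the flow-aware idea: rather than deriving a switch's uplink from the destination alone, each new flow is placed on whichever uplink currently carries the fewest flows. The Switch class, its method names, and the toy destination IDs are illustrative assumptions.

    # Minimal sketch, assuming a single fat-tree switch with a fixed number of
    # uplinks. Names here (Switch, route_static, route_flow_aware) are
    # hypothetical and only illustrate the static vs. flow-aware contrast.

    from collections import defaultdict


    class Switch:
        def __init__(self, num_uplinks):
            self.num_uplinks = num_uplinks
            self.flows_per_uplink = defaultdict(int)  # uplink index -> active flow count

        def route_static(self, dest_id):
            # Typical destination-based up-routing: the uplink depends only on the
            # destination, so several heavy flows can collide on one uplink.
            return dest_id % self.num_uplinks

        def route_flow_aware(self, dest_id):
            # Flow-aware alternative: place the new flow on the least-loaded uplink.
            uplink = min(range(self.num_uplinks), key=lambda u: self.flows_per_uplink[u])
            self.flows_per_uplink[uplink] += 1
            return uplink

        def flow_finished(self, uplink):
            # Release the uplink's share when the flow completes.
            self.flows_per_uplink[uplink] = max(0, self.flows_per_uplink[uplink] - 1)


    if __name__ == "__main__":
        sw = Switch(num_uplinks=4)
        dests = [0, 4, 8, 12]  # toy destinations that all collide under static routing
        print("static:    ", [sw.route_static(d) for d in dests])      # [0, 0, 0, 0]
        print("flow-aware:", [sw.route_flow_aware(d) for d in dests])  # [0, 1, 2, 3]

In this toy run the static rule sends all four flows up the same link while the flow-aware rule spreads them across all four uplinks; the dissertation's actual strategy additionally accounts for per-flow traffic and operates across the routing tables of a production fat-tree.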
Type
text
Electronic Dissertation
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Computer Science