Reducing resource waste in HPC through co-allocation, custom checkpoints, and lower false failure prediction rates

Dissertation (Open Access)

Abstract

High Performance Computing centers are deploying ever larger systems to meet the demands of modern scientific and big data applications and to serve a growing number of users. This thesis explores three methods to reduce wasted computational resources on modern HPC systems with thousands of components. The approaches explored here increase job throughput through co-allocation, reduce unnecessary checkpoints triggered by failure predictions, and improve checkpoint intervals for common jobs with a medium probability of failure. To accomplish these goals, the work first presents a new node sharing strategy for batch systems and shows how it can increase scheduling throughput compared to standard node allocation methods. Second, the thesis proposes a new optimal checkpoint interval for jobs with short to medium runtimes that reduces the expected checkpointing overhead. Finally, it introduces a node failure prediction method tailored to large HPC systems that lowers false positive rates. The thesis thus offers new insights into the efficiency losses caused by job failures and resource under-utilization as HPC systems grow in size, and proposes three techniques that help alleviate them.
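For context on the checkpointing contribution, the sketch below illustrates the classical Young approximation for the checkpoint interval, the usual baseline against which such work is compared. It is not the interval derived in the thesis for jobs with short to medium runtimes; the checkpoint cost and MTBF values are hypothetical.

```python
import math

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's first-order approximation: the interval that roughly minimizes
    checkpoint overhead plus expected recomputation after a failure."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def overhead_fraction(interval_s: float, checkpoint_cost_s: float, mtbf_s: float) -> float:
    """First-order expected overhead: time spent writing checkpoints per interval,
    plus on average half an interval of lost work per failure."""
    return checkpoint_cost_s / interval_s + interval_s / (2.0 * mtbf_s)

# Hypothetical values: a 5-minute checkpoint on a node pool with a 24-hour MTBF.
C, M = 300.0, 24 * 3600.0
tau = young_interval(C, M)
print(f"interval ~ {tau / 3600:.2f} h, overhead ~ {overhead_fraction(tau, C, M):.1%}")
```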
