Storage performance virtualization via throughput and latency control

Abstract
I/O consolidation is a growing trend in production environments due to the increasing complexity of tuning and managing storage systems. A consequence of this trend is the need to serve multiple users and/or workloads simultaneously. It is imperative to ensure that these users are insulated from each other by virtualization in order to meet any service-level objective (SLO). Previous proposals for performance virtualization suffer from one or more of the following drawbacks: (1) they rely on a fairly detailed performance model of the underlying storage system; (2) they couple rate and latency allocation in a single scheduler, making them less flexible; or (3) they may not always exploit the full bandwidth offered by the storage system. This article presents a two-level scheduling framework that can be built on top of an existing storage utility. This framework uses a low-level feedback-driven request scheduler, called AVATAR, that is intended to meet the latency bounds determined by the SLO. The load imposed on AVATAR is regulated by a high-level rate controller, called SARC, to insulate the users from each other. In addition, SARC is work-conserving and tries to distribute any spare bandwidth in the storage system fairly among the different users. This framework naturally decouples rate and latency allocation. Using extensive I/O traces and a detailed storage simulator, we demonstrate that this two-level framework can simultaneously meet the latency and throughput requirements imposed by an SLO, without requiring extensive knowledge of the underlying storage system.
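
To make the two-level structure concrete, the sketch below shows one plausible arrangement: a per-user token-bucket rate limiter standing in for the high-level rate controller (SARC's role) that admits requests into a deadline-ordered dispatch queue standing in for the low-level latency scheduler (AVATAR's role). This is a minimal illustration only; the class names, parameters, and the token-bucket and earliest-deadline-first policies are assumptions for exposition, not the paper's actual algorithms, and the work-conserving redistribution of spare bandwidth is omitted.

```python
# Illustrative sketch of a two-level scheme (not the authors' implementation).
# A per-user token bucket plays the SARC-like rate-control role; a
# deadline-ordered queue plays the AVATAR-like latency-scheduling role.
import heapq
import time

class RateLimiter:
    """Token bucket: admits at most `rate` requests/s per user (rate-control level)."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def admit(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class LatencyScheduler:
    """Dispatches the queued request whose SLO deadline expires soonest (latency level)."""
    def __init__(self):
        self.queue = []  # heap of (deadline, request)

    def submit(self, request, latency_bound):
        heapq.heappush(self.queue, (time.monotonic() + latency_bound, request))

    def dispatch(self):
        return heapq.heappop(self.queue)[1] if self.queue else None

# Usage: each user's requests must pass its rate limiter before entering
# the shared latency scheduler, which insulates users from one another.
limiters = {"user_a": RateLimiter(rate=100, burst=10),
            "user_b": RateLimiter(rate=50, burst=5)}
sched = LatencyScheduler()
if limiters["user_a"].admit():
    sched.submit("read block 42", latency_bound=0.05)  # 50 ms latency SLO
print(sched.dispatch())
```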
