The Promise of Virtualization
When companies started to virtualize their servers, the main reason was to save money. By consolidating their servers, companies were able to drive up the utilization of their compute resources and reduce the total number of physical servers in their datacenters. Not only did companies save money on the servers themselves, but they also saved money on power, cooling and space in the datacenter. However, as companies have progressed from initial server consolidation to fully virtualized server workloads, they have realized that, on the whole, they haven’t saved as much money as they originally expected.
The Hidden Cost of Server Virtualization
When companies look at their server virtualization project after 3 years, they realize that almost all the money saved on purchasing fewer servers has been spent on additional storage. The server virtualization transformation led companies to purchase commodity servers but attach high-end storage to support the virtualized workloads running on these commodity servers.
While servers have been transformed by the hypervisor so companies can get the most out of these resources, the same hasn’t happened for storage. To understand why storage needs to be re-architected for virtualized environments, it’s helpful to follow the I/O from Virtual Machines (VMs) to the storage layer.
In the picture above, when storage I/O is generated from the applications running in each VM, it is usually ordered and sequential. While I/O is not completely sequential, applications have been written to deliver more and more sequential I/O to improve application performance. However, that all changes when multiple I/O streams hit the server Hypervisor.
The Hypervisor (applicable to ESX, ESXi and Hyper-V) acts as a multiplexer to the VMs’ I/O, interleaving the I/O of all the VMs on the host. As a result, the I/O coming from the Hypervisor is mixed up. This I/O is what is presented to the underlying storage layer.
When this I/O stream hits the storage layer, it looks like random I/O. Random I/O presents a challenge for traditional storage because with spinning disks, the mechanics of laying down random writes on disk causes rotational latencies and prolongs seek times. The consequence of this type of I/O is that storage performance, particularly write performance, degrades.
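This “I/O blender” effect can be sketched in a few lines of Python (a hypothetical illustration, not Virsto code): several perfectly sequential VM write streams, once interleaved round-robin by a hypervisor, leave almost no back-to-back sequential pairs for the disk to exploit.

```python
def vm_stream(vm_id, start, count, block=8):
    """One VM's write stream: consecutive block offsets (fully sequential)."""
    return [(vm_id, start + i * block) for i in range(count)]

# Hypothetical layout: four VMs, each writing sequentially in its own disk region.
streams = [vm_stream(vm, vm * 100_000, 6) for vm in range(4)]

# Each stream on its own is fully sequential...
one = streams[0]
solo_seq = sum(1 for (_, a), (_, b) in zip(one, one[1:]) if b == a + 8)

# ...but the Hypervisor multiplexes the streams (round-robin in this sketch).
blended = [io for step in zip(*streams) for io in step]
mixed_seq = sum(1 for (_, a), (_, b) in zip(blended, blended[1:]) if b == a + 8)

print(f"sequential pairs within one VM stream: {solo_seq}/{len(one) - 1}")      # 5/5
print(f"sequential pairs after the blender:    {mixed_seq}/{len(blended) - 1}")  # 0/23
```

Every adjacent pair within a single stream is sequential, yet after interleaving, no two consecutive I/Os land next to each other on disk — which is exactly what a spinning disk sees as random I/O.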
In Virsto’s testing, even 4 VMs on a single host cause a noticeable degradation in storage performance (IOPS). At 8 VMs on a host, there is a 38% degradation in IOPS for the storage system. This means that companies aren’t getting the storage performance they paid for when they purchased their storage. Keep in mind that in today’s datacenters, many companies run VM densities much higher than 4 or 8 VMs per host. How badly does the storage layer perform in these multi-core, high-density VM environments?
Introducing the Virsto Storage Hypervisor
Virsto has developed a new approach for storage in virtualized environments. Virsto delivers purpose-built software defined storage with its VM-centric storage hypervisor, which provides a set of high-performance data services.
The I/O Optimization data service addresses the I/O randomness due to the server Hypervisor.
The Virsto Storage Hypervisor sits on each host and presents a virtual storage appliance (VSA) to the VMs on the host. The VMs see Virsto as a new storage mount point, which means Virsto is in the I/O path for the VMs. Virsto offloads the VMs’ I/O from the Hypervisor and handles it more efficiently.
With the I/O Optimization data service, the Virsto Storage Hypervisor will perform a set of actions on the I/O so it is sequentialized and sent to the storage layer in orderly, logical blocks.
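The general technique behind this kind of sequentialization is log-structured writing. Virsto has not published its internals, so the sketch below illustrates only the principle: random logical writes are appended to a physically sequential log, with a map recording where each logical block landed.

```python
class SequentialLog:
    """Sketch of a log-structured writer (illustrative assumption; Virsto's
    actual mechanism is not public). Random logical writes become physical
    appends, and a map remembers where each logical block landed."""

    def __init__(self):
        self.log = []   # physical log: blocks in arrival order
        self.map = {}   # logical block -> position in the log

    def write(self, logical_block, data):
        self.map[logical_block] = len(self.log)
        self.log.append(data)   # always an append: physically sequential

    def read(self, logical_block):
        return self.log[self.map[logical_block]]

log = SequentialLog()
# Random-looking logical writes, as they arrive from many VMs...
for lb in (907, 12, 480, 3, 771):
    log.write(lb, f"data@{lb}")

# ...land at consecutive physical positions 0..4 on disk.
print([log.map[lb] for lb in (907, 12, 480, 3, 771)])  # [0, 1, 2, 3, 4]
print(log.read(480))                                   # data@480
```

The disk only ever sees appends, so the rotational and seek penalties of random writes disappear; the cost is the extra map lookup on reads, which is why the mapping metadata has to be kept fast.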
The result is that storage performance does not degrade. As companies scale up their virtualized infrastructure and increase the VM density on their hosts, the storage layer performs to its rated level in terms of IOPS.
In addition, latency improves as well. One Virsto customer saw latency drop by 93% on reads and 56% on writes. A question that might come up is why read latency decreases when the I/O Optimization data service sequentializes only the writes. The improvement comes from the principle of locality of reference: when data is needed, if all the blocks of data are in the same area of the disk, the disk doesn’t need to spin as far for the head to reach them. This speeds up reads as well as writes.
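Locality of reference can be made concrete with a small seek-distance calculation (the block addresses are hypothetical): reading back blocks that were laid down together costs far less head travel than reading the same number of scattered blocks.

```python
def total_seek(positions):
    """Total head travel (in block units) to visit positions in order."""
    return sum(abs(b - a) for a, b in zip(positions, positions[1:]))

# The same five blocks read back under two hypothetical layouts.
scattered = [907, 12, 480, 3, 771]       # blocks left wherever they fell
clustered = [100, 101, 102, 103, 104]    # blocks laid down together

print(total_seek(scattered))   # 2608 block units of head travel
print(total_seek(clustered))   # 4 block units of head travel
```

The clustered layout reduces head travel by orders of magnitude in this toy example, which is why sequentializing the writes also pays off on subsequent reads.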
The I/O Optimization data service is only one of four such services in the Virsto Storage Hypervisor; the others address snapshots, clones and thin provisioning.
The challenge with snapshots in a virtual environment is two-fold: the ongoing degradation of storage performance after a snapshot is taken, and the space the snapshot consumes. With the Virsto Storage Hypervisor, snapshots have been architected not only to be space-efficient but also to avoid degrading storage performance.
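One common way to make snapshots space-efficient is to copy only the block map, not the data blocks themselves. Whether Virsto uses exactly this scheme is not stated, so the sketch below shows only the general principle: after a snapshot, the live volume and the snapshot share every unchanged block, and only diverging writes consume new space.

```python
# Live volume's block map: logical block name -> (pointer to) a data block.
base = {"blk0": "A", "blk1": "B", "blk2": "C"}

# Taking a snapshot copies only the map — a metadata operation,
# not a copy of the underlying data blocks.
snapshot = dict(base)

# A later write to the live volume diverges one entry; the snapshot
# still references the old block, so only the changed block costs space.
base["blk1"] = "B'"

shared = sum(1 for k in base if base[k] == snapshot[k])
print(f"blocks still shared: {shared}/3")   # 2/3
```

Because the snapshot is just a map, taking it is nearly instantaneous, and its space cost grows only with the amount of data that changes afterward.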
Similar to snapshots, clones degrade storage performance and take up space. With the Virsto Storage Hypervisor, clones have been transformed so they are space-efficient as well as have a minimal effect on the performance of the storage layer.
Thin provisioning seems interesting to companies since it uses very little storage when a VM is created and the VM can be provisioned quickly. However, there is an ongoing performance hit due to the multiple operations that need to be done at the storage layer when data is written to the VM. With the Virsto Storage Hypervisor, the performance penalty of thin provisioning has been eliminated, so companies can get space-efficiency and fast provisioning without performance degradation.
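The thin-provisioning trade-off described above can be sketched as allocate-on-first-write (a generic illustration, not Virsto’s implementation): the virtual disk costs almost nothing up front, but every first write to a block pays an extra allocation and metadata step — the ongoing performance hit on traditional storage.

```python
class ThinVolume:
    """Sketch of thin provisioning (hypothetical; not Virsto's code).
    Physical blocks are allocated only on first write, so creating a
    large virtual disk consumes almost no space up front."""

    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks
        self.allocated = {}   # virtual block -> physical block

    def write(self, vblock, data):
        if vblock not in self.allocated:
            # First write to this block: the extra allocation/metadata
            # step that costs performance on traditional arrays.
            self.allocated[vblock] = len(self.allocated)
        # (data would be written to physical block self.allocated[vblock])

    def used(self):
        return len(self.allocated)

vol = ThinVolume(virtual_blocks=1_000_000)   # a large virtual disk
for vb in (0, 1, 2):
    vol.write(vb, b"x")
print(f"allocated {vol.used()} of {vol.virtual_blocks} blocks")  # 3 of 1000000
```

Provisioning is instant and space usage tracks actual writes — the appeal of thin provisioning — but each first-touch write takes the slow path through the allocator, which is the penalty Virsto claims to eliminate.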
The Virsto Storage Hypervisor transforms storage to meet the needs of virtualized environments today and the vision of the software defined datacenter of tomorrow. By addressing the performance, capacity and agility challenges that legacy storage systems struggle with in virtualized environments, the Virsto Storage Hypervisor is the purpose-built technology needed to deliver data services at the virtual machine level: truly software-defined storage that does for storage what server virtualization did for servers.