Storage virtualization solutions
If you’ve worked with virtualization technology, whether on server or desktop projects, you’ve probably already run into storage performance problems.
How virtual computing breaks storage
With the one-to-one server-to-application architecture that was prevalent in client/server computing, the operating system on each server had a single I/O stream to deal with, and it could optimize that stream somewhat to take some of the randomness out of it before data was actually written to disk. In virtual computing, where one physical server may host 8 to 12 virtual servers or 50 to 70 virtual desktops, each generating its own I/O stream, the operating system cannot optimize for any one of them without impacting the performance of the others. As a result, the I/O patterns that virtual environments generate are far more random and write-intensive than they were in your physical server environment.
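The blending effect can be sketched with a small simulation. The block addresses and guest counts below are hypothetical, chosen only to illustrate the mechanism: each guest writes sequentially to its own region of the shared datastore, but the host services whichever guest issues I/O next, so the combined stream it sees is the interleaved merge.

```python
import random

# Hypothetical setup: 10 guests, each writing 20 sequential blocks
# to its own region of the shared datastore.
streams = {vm: iter(range(vm * 100000, vm * 100000 + 20))
           for vm in range(10)}

# The host services guests in effectively arbitrary order, so the
# combined stream interleaves the per-guest sequential streams.
combined = []
pending = list(streams)
while pending:
    vm = random.choice(pending)
    try:
        combined.append(next(streams[vm]))
    except StopIteration:
        pending.remove(vm)

# Each guest's addresses appear in order, but adjacent entries in
# `combined` usually belong to different guests and are far apart,
# which the disk sees as a random write pattern.
jumps = sum(1 for a, b in zip(combined, combined[1:]) if b != a + 1)
print(f"{jumps} of {len(combined) - 1} transitions are non-sequential")
```

Running this typically shows the large majority of transitions are non-sequential, even though every individual guest's stream was perfectly sequential.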
Legacy storage architectures, spinning disk in particular, perform at their worst under very random, very write-intensive I/O patterns. The rotational latencies and seek times of spinning disk come to dominate data transfer times, and IOPS per spindle drops precipitously. SSD doesn’t necessarily help much: while it is very good at handling reads and sequential writes, it is only slightly faster than high-performance spinning disk at handling random writes unless it is front-ended by additional technologies designed to reduce the randomness of the workload. And we know that SSD is quite expensive, easily $60 to $65 per GB at list prices.
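The rotational penalty can be put in rough numbers. The drive characteristics below are assumptions for a generic 15K RPM disk, not figures from any particular product:

```python
# Back-of-the-envelope IOPS per spindle for fully random I/O,
# using assumed (illustrative) 15K RPM drive characteristics.
avg_seek_ms = 3.5                      # assumed average seek time
rpm = 15000
avg_rotational_ms = (60000 / rpm) / 2  # half a revolution on average = 2.0 ms

service_time_ms = avg_seek_ms + avg_rotational_ms
random_iops = 1000 / service_time_ms
print(f"~{random_iops:.0f} random IOPS per spindle")  # ~182
```

Under sequential I/O the seek and most of the rotational delay disappear between successive requests, which is why randomizing the workload makes IOPS per spindle fall so sharply.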
The three approaches most people consider when faced with these storage performance issues are: 1) adding more disk spindles, 2) upgrading to a higher-end, more expensive class of disk array, or 3) buying SSD. All three drive up storage costs significantly.
Virsto calls this storage performance degradation the “VM I/O blender”. And if you think it’s bad in virtual server environments, wait until you see what happens in virtual desktop environments. Due to increased randomness and greater variance between peak and average IOPS, VDI storage must be overbuilt even further, making the storage cost problem worse still.
The impact of the VM I/O blender
In sizing the performance requirements of your environment, you know how many IOPS you need per server or per desktop. Because you get fewer IOPS out of each spindle in virtual environments than you did in physical environments, you need more spindles. More spindles mean more cost, and for any given storage configuration, fewer server or desktop images supported. This decreased density causes additional cost increases in virtual computing environments.
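The sizing arithmetic can be made concrete. All of the numbers below are assumptions for illustration only, not measurements from any real environment:

```python
import math

# Hypothetical sizing example: every figure here is an assumption.
vms = 60                      # virtual desktops on one host
iops_per_vm = 25              # steady-state IOPS each desktop generates
iops_per_spindle_seq = 180    # per-spindle IOPS on a mostly sequential workload
iops_per_spindle_rand = 75    # per-spindle IOPS once the I/O blender
                              # randomizes the writes

required_iops = vms * iops_per_vm  # 1500 IOPS total

spindles_before = math.ceil(required_iops / iops_per_spindle_seq)
spindles_after = math.ceil(required_iops / iops_per_spindle_rand)
print(spindles_before, spindles_after)  # 9 vs. 20 spindles
```

Under these assumed figures, the same 1,500-IOPS requirement that 9 spindles could satisfy on sequential I/O takes 20 spindles once the workload is randomized, more than doubling the spindle count (and cost) for the same number of desktops.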
This has implications for snapshot/clone technology as well. With conventional snapshot technologies, the more snapshots and/or clones you create, the slower they all run. Given that you’re already chasing performance, this may push you toward very high-priced, enterprise-class, array-based snapshot technology, which again drives storage costs up.
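One reason conventional snapshots slow down as they accumulate can be shown with a minimal copy-on-write model. This is a simplified sketch, not any vendor’s on-disk format: a block not found in the newest snapshot layer must be looked up in each older layer in turn.

```python
# Simplified copy-on-write snapshot chain: each layer holds only the
# blocks written after it was taken, and reads fall through to parents.
class Snapshot:
    def __init__(self, parent=None):
        self.blocks = {}      # block number -> data written in this layer
        self.parent = parent

    def read(self, block):
        layer, hops = self, 0
        while layer is not None:
            if block in layer.blocks:
                return layer.blocks[block], hops
            layer, hops = layer.parent, hops + 1
        return None, hops

base = Snapshot()
base.blocks[0] = "original"

chain = base
for _ in range(10):            # take 10 snapshots without rewriting block 0
    chain = Snapshot(parent=chain)

data, hops = chain.read(0)
print(data, hops)              # the read traverses all 10 layers to the base
```

Every unmodified block pays this chain-walk on each read, so latency grows with the number of snapshots in the chain, which is the behavior the paragraph above describes.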
Native thin-provisioned virtual disk options offered by the various hypervisor platforms might be attractive if storage space consumption is pushing up storage costs, but they perform much more slowly than fully allocated virtual disks until they reach a “steady state”. Since you’re already chasing performance, and you’re adding and removing VMs all the time, you’ll probably select the fully allocated option if you can’t afford the enterprise-class array. In effect, you are forced to accept poor storage capacity utilization to get performance.
The bottom line: the VM I/O blender drives storage performance down and storage costs up. It is clear that a new approach is required to improve the economics of storage in virtualized environments.
Just as the hypervisor did for servers, Virsto is delivering new levels of efficiency, affordability, and performance for storage in virtualized environments with a new VM-centric storage hypervisor.