Silos will prevent Tier 1 Apps reaching the Cloud

On a recent excursion to a tech event I had the pleasure of meeting a well-known ‘VM Guru’ (who shall remain nameless). Having read some of this individual’s material, I was excited and intrigued to know how he was tackling the storage challenges related to VMware, especially with Fibre Channel SANs.

“Storage, that’s nothing to do with me, I’m a VirtGuy”, he proudly announced.

To which I retorted, “Yes, but if there are physical-layer issues in your SAN fabric, or poorly configured storage, it will affect the performance of your virtual machines and their applications. Surely you also need some visibility and understanding beyond your server’s HBAs?”

Seemingly annoyed with the question, he answered, “Why? I have SAN architects and a Storage team for that, it’s not my problem. I told you I’m a VirtGuy, I have my tools so I can check esxtop, vCenter etc…”, before veering off into glorious delusions of grandeur about how he’d virtualized more servers than I’d had hot dinners. As fascinating as it was to hear him, it was at this point that my mind was sidetracked into realizing that despite all the industry talk of ‘unified platforms’ and ‘Apps, Servers & Storage as a Service’, i.e. the Cloud, the old challenge of bridging the gap between silos still had a long way to go.

Let’s face it, Virtualization and the Cloud have brought unprecedented benefits, but they’ve also brought challenges. One challenge that is dangerously overlooked is that of the silos that exist within most IT infrastructures. Indeed, it’s the silos that have led to the phenomenon now coined ‘the Virtual Stall’. The Virtual Stall was never an issue several years ago, when Virtualization was happily adopted by application owners to consolidate the many ‘Crapplications’ that meant little or nothing to them and certainly didn’t carry the burden of an SLA. Storage teams were none the wiser as VM admins requested large capacities of storage for their VMFS, and despite the odd performance problem no one was too bothered, as these VMs rarely hosted Tier 1 apps. With the advent of VDI, large VM backups and critical applications such as Exchange and SQL being virtualized, the ordeal of maintaining performance took root, resulting in the inevitable ‘blame game’ between silos. Fast forward to today and, despite all the talk of Private Clouds, the fear of the performance degradation that might result from virtualizing mission-critical applications has led to the ‘Virtual Stall’.

Business and management have been convinced of the benefits of consolidation, reduced data footprint, power/cooling savings etc. that they initially saw with the virtualization of their low-tier applications. This has led them to want more of the same for higher-end applications, leading to what many organizations are terming a ‘VMware First’ policy. Under this pressure, the silo of the application owners still doesn’t have a true understanding of server virtualization and is hence reluctant to see its Tier 1 apps migrated from their physical platforms; at best it may accept two mission-critical VMs on a physical server. Under pressure to prove the application owners wrong and maintain the performance of virtualized applications, the silo of the VMware administrators will often over-provision from their pool of memory, CPU and storage resources. Furthermore, the VM admin silo also lacks a real understanding of storage and at best will think in terms of capacity for their VMFS stores, while the Storage admin will think in terms of IOPS. As this lack of understanding and communication between the silos persists and grows, so too do the challenges of realizing the full benefits of server virtualization.

A key point that is often overlooked is that, whether on a virtualized or non-virtualized platform, application performance is heavily affected by the underlying storage infrastructure. The complexity of correctly configuring storage in accordance with application demands ranges from deciding the right RAID level, number of disks per LUN and array cache size, to setting the correct queue depth and fan-in/fan-out ratio. These and other variables can drastically influence how I/O loads are handled and ultimately how applications respond. Virtualized environments are no different: storage-related misconfigurations are among the most common causes of VMware infrastructure problems that inadvertently affect performance.
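To make the RAID arithmetic concrete, here is a minimal sketch (Python, with illustrative disk figures that are assumptions rather than vendor numbers) of the classic write-penalty rule of thumb, showing how the same set of spindles yields very different usable IOPS depending on RAID level and read/write mix:

```python
# Rough front-end IOPS a RAID group can sustain, using the classic
# write-penalty rule of thumb (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6).
# Spindle counts and per-disk IOPS below are illustrative assumptions.

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def usable_iops(spindles, iops_per_disk, raid, write_ratio):
    """Front-end IOPS = raw IOPS / (read% + write% * penalty)."""
    raw = spindles * iops_per_disk
    penalty = WRITE_PENALTY[raid]
    return raw / ((1 - write_ratio) + write_ratio * penalty)

# Example: eight 15K FC disks (~180 IOPS each, assumed) at a 30% write mix.
for raid in ("RAID10", "RAID5", "RAID6"):
    print(raid, round(usable_iops(8, 180, raid, 0.30)))
```

The same eight disks that comfortably serve a read-heavy workload can fall badly short once the write ratio climbs, which is exactly the kind of distinction that gets lost when a request crosses silo boundaries as a bare capacity figure.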

Even with the option of Raw Device Mapping (RDM), the alternative VMware storage configuration, VMFS is usually preferred due to its immediate advantages in provisioning and zoning. With VMFS, several virtual machines are able to access the same LUN or a pool of LUNs, which is far simpler than the one-to-one LUN-to-VM mapping the RDM option requires. It also makes backups far easier, as only the VMFS behind the given virtual machines needs to be dealt with, instead of numerous individual LUNs mapped to many virtual machines. VMFS volumes can be as big as 2TB, and with the concatenation of additional partitions, termed VMFS extents, can grow as large as 64TB, i.e. 32 extents. A Storage admin unaware of such distinctions within VMware can easily also be unaware of the best practices around extents, such as creating them on new physical LUNs to gain additional LUN queues and relieve throughput congestion. Worse, if extents are not assigned the same RAID and disk type, you quickly fall into a quagmire of horrendous performance problems. Indeed, the majority of VMware performance problems are initiated at the start of the provisioning process, or even earlier at the design phase, and are a result of the distance between the silos.
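As a simple illustration of those extent best practices, here is a sketch (Python, with a hypothetical Extent record and the VMFS-3-era limits cited above) of the kind of sanity check a provisioning handoff between the silos might include:

```python
# Sanity-check a proposed set of VMFS extents against the best practices
# described above: each extent on its own physical LUN (extra LUN queues),
# matching RAID level and disk type, within the era's limits
# (2TB per extent, 32 extents). The Extent record is a hypothetical example.

from dataclasses import dataclass

MAX_EXTENTS, MAX_EXTENT_TB = 32, 2

@dataclass
class Extent:
    lun_id: str       # backing physical LUN
    raid: str         # e.g. "RAID10"
    disk_type: str    # e.g. "15K FC"
    size_tb: float

def check_extents(extents):
    problems = []
    if len(extents) > MAX_EXTENTS:
        problems.append("more than %d extents" % MAX_EXTENTS)
    if len({e.lun_id for e in extents}) < len(extents):
        problems.append("extents share a physical LUN, so no extra LUN queue is gained")
    if len({(e.raid, e.disk_type) for e in extents}) > 1:
        problems.append("extents mix RAID levels or disk types, inviting uneven performance")
    problems += ["extent on %s exceeds %dTB" % (e.lun_id, MAX_EXTENT_TB)
                 for e in extents if e.size_tb > MAX_EXTENT_TB]
    return problems

print(check_extents([Extent("lun1", "RAID10", "15K FC", 2.0),
                     Extent("lun2", "RAID5", "15K FC", 2.0)]))  # flags the RAID mix
```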

As already mentioned, application owners will pressure VM administrators to over-provision memory and CPU to avoid any potential application slowdowns, while the VM administrator will think of storage purely in terms of capacity for their VMFS. At best a VM admin may request the RAID level and the type of storage, e.g. 15K RPM FC disks, but it is here that the discrepancy arises for the Storage administrator. The Storage admin, used to provisioning LUNs on the basis of application requirements, will be thinking not of capacity but of IOPS and RAID levels. Eventually though, since there is no one-to-one mapping and the requested LUN is merely to be added to a VMFS, the Storage administrator, not wishing to be the bottleneck of the process, will proceed to add the requested LUN to the pool. Herein lies the source of many eventual performance problems, as overly busy LUNs begin to affect all of their assigned virtual machines as well as those that share the same datastore. Moreover, if the LUN is part of a very busy RAID group on the back end of the storage array, the saturated I/O will impact all of the related physical spindles and hence all of the LUNs they host. What needs to be appreciated is that the workload of individual applications presented to individual volumes is significantly different from that of multiple applications consolidated onto a single VMFS volume. The interleaved I/Os of multiple applications, even if each stream is sequential, force the storage array to deal with the requests as random, thus requiring different RAID level, LUN layout, cache capacity etc. considerations than those for individual applications.
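This ‘I/O blender’ effect is easy to demonstrate. The toy sketch below (Python; the block addresses and round-robin interleave are simplifying assumptions) shows two perfectly sequential VM read streams arriving at the array as an entirely non-sequential pattern once they share one datastore:

```python
# Two VMs each read their blocks perfectly sequentially, but once the
# hypervisor multiplexes them onto one shared VMFS LUN, the stream the
# array actually sees has no sequential transitions left at all.

import itertools

vm_a = range(0, 100)        # VM A reads blocks 0..99 in order
vm_b = range(5000, 5100)    # VM B reads blocks 5000..5099 in order

# Round-robin interleave: a crude stand-in for hypervisor multiplexing.
merged = list(itertools.chain.from_iterable(zip(vm_a, vm_b)))

seq = sum(1 for prev, cur in zip(merged, merged[1:]) if cur == prev + 1)
print(f"sequential transitions seen by the array: {seq}/{len(merged) - 1}")
# Each stream alone is 99/99 sequential; interleaved it is 0/199.
```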

Once these problems exist, there is a customary troubleshooting procedure that VM and Storage administrators often follow, drawing on the metrics found in vCenter, esxtop, vscsiStats, IOMeter, Solaris iostat, PerfMon and the array management tool. This somewhat laborious process usually includes measuring the effective bandwidth and resource consumption between the VM and storage, moving to and using other paths between the VMs and storage, and even reconfiguring cache and RAID levels. To have even got to this point, days if not weeks will have been spent checking for excessive LUN and RAID group demands, understanding the VMFS LUN layout on the back end of the storage’s physical spindles, and investigating the array’s front-end, cache and processor utilization as well as bottlenecks on the ESX host ports. Some may even go to the lengths of playing around with the queue depth settings, which without accurate insight is at best a guessing game based on rules of thumb. Despite all of these measures there is still no guarantee that the performance issues will be identified or eliminated, leaving VMware to be erroneously blamed as the cause, or the application to be declared ‘unfit’ to be virtualized. Ironically, many of these problems could have been proactively avoided had there been better understanding and communication between the silos at the design and provisioning phase.
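For what that queue-depth ‘rule of thumb’ typically looks like, here is a sketch (Python; the port queue limit and host counts are assumed figures, as real values are array- and HBA-specific) of the oversubscription check administrators end up doing by hand:

```python
# Rule-of-thumb check: the outstanding I/Os all hosts could queue against
# one array port (hosts * LUNs * per-LUN queue depth) should not exceed
# that port's queue limit. All figures below are assumptions for
# illustration; real limits are array- and HBA-specific.

def port_oversubscribed(hosts, luns_per_host, lun_queue_depth, port_queue_limit):
    demand = hosts * luns_per_host * lun_queue_depth
    print(f"potential outstanding I/Os: {demand} vs port limit: {port_queue_limit}")
    return demand > port_queue_limit

# Example: 8 ESX hosts sharing 10 LUNs each at the common default queue
# depth of 32, against an assumed 2048-entry port queue.
print("oversubscribed:", port_oversubscribed(8, 10, 32, 2048))
```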

While it could be argued that the Application, Server/VM and Storage teams all have their own expertise and should stick to what they know, in today’s unified, Cloud-driven climate, remaining in a bat cave of ignorance, justified by the knowledge that you’re an expert in your own field, is nothing short of disastrous. Application owners, VMware and Storage admins have to sit and communicate with each other and tear down the first barrier erected by silos, i.e. the lack of knowledge sharing. This does not require that a Storage admin set up a DRS cluster or a VM admin start provisioning LUNs, but it does mean that as projects roll out, a common understanding of the requirements and the challenges is established. As the technology brings everything into one stack, with vStorage APIs, VAAI and terminology such as ‘orchestration’ describing single management panes that allow you to provision VMs and their storage with a few clicks, the need for the ‘experts’ of each field to sit and share their knowledge has never been greater. Unless the challenge of breaking the silos is addressed, we could be seeing Kate Bush’s premonition of Cloudbursting sooner than we think.