Silos will prevent Tier 1 Apps from reaching the Cloud

On a recent excursion to a tech event I had the pleasure of meeting a well-known ‘VM Guru’ (who shall remain nameless). Having read some of this individual’s material, I was excited and intrigued to hear his thoughts on how he was tackling the Storage challenges related to VMware, especially with Fibre Channel SANs.

“Storage, that’s nothing to do with me, I’m a VirtGuy”, he proudly announced.

To which I retorted, “Yes, but if there are physical layer issues in your SAN fabric, or poorly configured Storage, it will affect the performance of your Virtual Machines and their applications. Surely you also need some visibility and understanding beyond your servers’ HBAs?”

Seemingly annoyed with the question, he answered, “Why? I have SAN architects and a Storage team for that, it’s not my problem. I told you I’m a VirtGuy, I have my tools so I can check esxtop, vCenter etc…”, and then he veered off into glorious delusions of grandeur about how he’d virtualized more servers than I’d had hot dinners. As fascinating as it was to hear him, it was at this point that my mind was sidetracked into realizing that, despite all the industry talk of ‘unified platforms’ and ‘Apps, Servers & Storage as a Service’, i.e. the Cloud, the old challenge of bridging the gap between silos still had a long way to go.

Let’s face it, Virtualization and the Cloud have brought unprecedented benefits, but they’ve also brought challenges. One such challenge that is dangerously being overlooked is that of the silos that exist within most IT infrastructures. Indeed it’s the silos that have led to the new phenomenon coined ‘the Virtual Stall’. The Virtual Stall was never an issue several years ago, as Virtualization was happily adopted by Application owners to consolidate many of their ‘Crapplications’ that meant little or nothing to them and certainly didn’t carry the burden of an SLA. Storage teams were none the wiser as VM admins requested large capacities of storage for their VMFS, and despite the odd performance problem no one was too bothered, as these VMs rarely hosted Tier 1 Apps. With the advent of VDI, large VM backups and critical applications such as Exchange and SQL being virtualized, the ordeal of maintaining performance took root, resulting in the inevitable ‘blame game’ between silos. Fast forward to today and, despite all the talk of Private Clouds, the fear of potential performance degradation resulting from virtualizing mission-critical applications has led to the ‘Virtual Stall’.

Business and Management have been convinced of the benefits of consolidation, reduced data footprint, power/cooling savings etc. that they initially saw with the virtualization of their low tier applications. This has led them to want more of the same for higher end applications, leading to what many organizations are terming a ‘VMware First’ policy. Under this pressure, the silo of the application owners still lacks a true understanding of server virtualization and is hence reluctant to have its Tier 1 apps migrated from their physical platforms. At best they may accept two mission critical VMs on a physical server. Under pressure to prove the Application owners wrong and maintain the performance of virtualized applications, the silo of the VMware administrators will often over-provision from their pool of Memory, CPU and storage resources. Furthermore, the VM Admin silo also lacks a real understanding of Storage and at best will think in terms of capacity for their VMFS stores, while Storage Admins will think in terms of IOPS. As this lack of understanding and communication between the silos persists and grows, so too do the challenges of making the most of the benefits of server virtualization.

A key mistake is overlooking the fact that, whether on a virtualized or non-virtualized platform, application performance is heavily affected by the underlying storage infrastructure. The complexity of correctly configuring storage in accordance with application demands can range from deciding the right RAID level, number of disks per LUN and array cache size to the correct queue depth and fan-in/fan-out ratio. These and other variables can drastically influence how I/O loads are handled and ultimately how applications respond. Virtualized environments are no different, with Storage-related problems often being the cause of the VMware infrastructure misconfigurations that inadvertently affect performance.
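To give a rough sense of how just one of these variables, the RAID level, interacts with disk count and workload mix, the short Python sketch below estimates the front-end IOPS a LUN can sustain using the standard RAID write-penalty rule of thumb. The per-disk IOPS figure and the 70/30 read/write mix are illustrative assumptions, not measurements from any particular array.

# Rough front-end IOPS estimate for a LUN, based on spindle count, per-disk IOPS
# and the RAID write penalty. All figures below are illustrative assumptions.

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}  # back-end I/Os per front-end write

def usable_iops(disks, iops_per_disk, raid, read_fraction):
    """Front-end IOPS the RAID group can absorb for a given read/write mix."""
    backend_capacity = disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    penalty = RAID_WRITE_PENALTY[raid]
    # Each front-end read costs one back-end I/O; each write costs `penalty` I/Os.
    return backend_capacity / (read_fraction + write_fraction * penalty)

# Example: 8 x 15K RPM FC disks (~180 IOPS each), 70% read / 30% write workload.
for raid in ("RAID10", "RAID5", "RAID6"):
    print(raid, round(usable_iops(8, 180, raid, 0.70)))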

Even with the option of Raw Device Mapping (RDM) as the alternative for VMware storage configuration, VMFS is usually preferred due to its immediate advantages in terms of provisioning and zoning. With VMFS, several Virtual Machines are able to access the same LUN or a pool of LUNs, which is far simpler than the one-to-one mapping of a LUN to each Virtual Machine that the RDM option requires. Additionally this makes backups far easier, as only the VMFS for the given Virtual Machines needs to be dealt with instead of numerous individual LUNs mapped to many Virtual Machines. A VMFS volume can be as big as 2TB, and with the concatenation of additional partitions, termed VMFS extents, it can grow as large as 64TB, i.e. 32 extents. A Storage Admin unaware of such distinctions within VMware can easily also be unaware of the best practices for extents, such as creating them on new physical LUNs to gain additional LUN queues and relieve throughput congestion. Coupled with this, if extents are not assigned the same RAID and disk type you quickly fall into a quagmire of horrendous performance problems. In fact the majority of VMware performance problems are initiated at the beginning of the provisioning process, or even earlier at the design phase, and are a result of the distance between the silos.
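As a minimal illustration of those extent best practices, the hypothetical check below walks the extents of a single datastore, confirms they sit on distinct physical LUNs and share the same RAID level and disk type, and totals the capacity against the 32-extent/64TB ceiling. The data structure and values are invented for the example; in practice this information would come from the vSphere client and the array management tool.

# Hypothetical extent records for one VMFS datastore (values invented for the example).
extents = [
    {"lun": "LUN-101", "raid": "RAID5", "disk": "15K FC",    "size_tb": 2},
    {"lun": "LUN-102", "raid": "RAID5", "disk": "15K FC",    "size_tb": 2},
    {"lun": "LUN-103", "raid": "RAID6", "disk": "7.2K SATA", "size_tb": 2},
]

def check_extents(extents):
    problems = []
    if len({e["lun"] for e in extents}) != len(extents):
        problems.append("extents share a physical LUN and therefore share its queue")
    if len({(e["raid"], e["disk"]) for e in extents}) > 1:
        problems.append("extents mix RAID levels or disk types")
    total_tb = sum(e["size_tb"] for e in extents)
    if len(extents) > 32 or total_tb > 64:
        problems.append("exceeds the 32-extent / 64TB VMFS limit")
    return total_tb, problems

total_tb, problems = check_extents(extents)
print(f"{total_tb}TB datastore:", problems or "follows the extent best practices")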

As mentioned already, application owners will pressure VM administrators to over-provision Memory and CPU to avoid any potential application slowdowns, while the VM administrator, when it comes to Storage, will falsely think only in terms of capacity for their VMFS. At best a VM Admin may request the RAID level and the type of Storage, e.g. 15K RPM FC disks, but it is here that the discrepancy arises for the Storage administrator. The Storage Admin, used to provisioning LUNs on the basis of application requirements, will not be thinking of capacity but rather in terms of IOPS and RAID levels. Eventually though, as there is no one-to-one mapping and the requested LUN is merely to be added to a VMFS, the storage administrator, not wishing to be the bottleneck of the process, will proceed to add the requested LUN to the pool. Herein lies the source of many eventual performance problems, as overly busy LUNs begin to affect all of their assigned virtual machines as well as those that share the same datastore. Moreover, if the LUN is part of a very busy RAID group on the back end of the storage array, such saturated I/O will impact all of the related physical spindles and hence all of the LUNs they host. What needs to be appreciated is that the workload of individual applications presented to individual volumes will be significantly different to that of multiple applications consolidated onto a single VMFS volume. The interleaved I/Os of multiple applications, even if individually sequential, will push the Storage array to deal with the combined stream as random, thus requiring different RAID level, LUN layout, cache capacity etc. considerations than those for individual applications.
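To make the consolidation effect concrete, the sketch below sums a handful of hypothetical per-VM workloads destined for one datastore and reports the aggregate IOPS and read/write mix the array actually sees. The figures are invented for illustration; the point is that once several streams are interleaved on a shared VMFS volume, the blend should be sized as random I/O even where each VM’s own pattern is sequential.

# Hypothetical per-VM workloads being consolidated onto a single VMFS datastore.
vm_workloads = [
    {"vm": "exchange01", "iops": 1200, "read_fraction": 0.60, "pattern": "random"},
    {"vm": "sql01",      "iops": 900,  "read_fraction": 0.70, "pattern": "random"},
    {"vm": "fileserv01", "iops": 400,  "read_fraction": 0.80, "pattern": "sequential"},
    {"vm": "backup01",   "iops": 600,  "read_fraction": 0.10, "pattern": "sequential"},
]

total_iops = sum(w["iops"] for w in vm_workloads)
read_fraction = sum(w["iops"] * w["read_fraction"] for w in vm_workloads) / total_iops
# Interleaving several streams on one datastore looks random to the array,
# regardless of how sequential each individual VM's pattern is.
effective_pattern = "random" if len(vm_workloads) > 1 else vm_workloads[0]["pattern"]

print(f"Datastore sees ~{total_iops} IOPS, {read_fraction:.0%} read, sized as {effective_pattern} I/O")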

Once these problems exist there is a customary troubleshooting procedure that VM and Storage administrators often follow, drawing on the metrics found in vCenter, esxtop, vscsiStats, IOmeter, Solaris iostat, PerfMon and the array management tool. This somewhat laborious process usually includes measuring the effective bandwidth and resource consumption between the VM and storage, moving to and using other paths between the VMs and storage, and even reconfiguring cache and RAID levels. To have even got to this point, days if not weeks will have been spent checking for excessive LUN and RAID group demands, understanding the VMFS LUN layout on the back end of the storage’s physical spindles, and investigating the array’s front-end, cache and processor utilization as well as bottlenecks on the ESX host ports. Some may even go to the lengths of playing around with the Queue Depth settings, which without accurate insight is at best a guessing game based on rules of thumb. Despite all of these measures there is still no guarantee that the performance issues will be identified or eliminated, leaving VMware to be erroneously blamed as the cause, or the application branded ‘unfit’ to be virtualized. Ironically, so many of these problems could have been proactively avoided had there been better understanding and communication between the silos in the design and provisioning phases.
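As a small example of what that first-pass triage looks like, the sketch below applies common rule-of-thumb thresholds to per-LUN latency figures of the kind esxtop reports (DAVG for device latency, KAVG for kernel latency, QAVG for queue latency). The thresholds and sample values are illustrative assumptions rather than vendor guidance.

# First-pass triage of per-LUN latency figures as reported by esxtop (values in ms).
# The samples and thresholds below are illustrative rules of thumb only.
samples = [
    {"device": "vmhba1:C0:T0:L4", "DAVG": 32.0, "KAVG": 0.3, "QAVG": 0.1},
    {"device": "vmhba1:C0:T0:L7", "DAVG": 4.0,  "KAVG": 5.5, "QAVG": 5.1},
    {"device": "vmhba2:C0:T1:L2", "DAVG": 3.0,  "KAVG": 0.2, "QAVG": 0.0},
]

for s in samples:
    if s["DAVG"] > 25:
        verdict = "high device latency - look at the array, RAID group or fabric"
    elif s["KAVG"] > 2 or s["QAVG"] > 2:
        verdict = "I/O queuing in the ESX kernel - suspect queue depth or pathing"
    else:
        verdict = "latency looks healthy"
    print(s["device"], "->", verdict)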

While it could be argued that Application, Server/VM and Storage teams all have their own expertise and should stick to what they know, in today’s unified, Cloud-driven climate, remaining in a bat cave of ignorance justified by the knowledge that you’re an expert in your own field is nothing short of disastrous. Application owners, VMware and Storage Admins have to sit and communicate with each other and tear down the first barrier erected by silos: the lack of knowledge sharing. This does not require that a Storage Admin set up a DRS cluster or a VM Admin start provisioning LUNs, but it does mean that as projects roll out a common understanding of the requirements and the challenges is established. As the technology brings everything into one stack, with vStorage APIs, VAAI and terminology such as ‘orchestration’ describing single management panes that allow you to provision VMs and their Storage with a few clicks, the need for the ‘experts’ of each field to sit and share their knowledge has never been greater. Unless the challenge of breaking the silos is addressed we could be seeing Kate Bush’s premonition of Cloudbusting sooner than we think.


The True Optimum Queue Depth for VMware / vSphere

An array’s Queue Depth, in its most basic terms, is the physical limit of exchanges that can be open on a storage port at any one time. The Queue Depth setting on the HBA specifies how many exchanges can be sent to a LUN at one time. Most VM Admins leave their Queue Depth settings at the manufacturer’s default, increasing them only when a small number of I/O-intensive VMs or servers require it. Failing to set Queue Depths to their optimum, whether by changing them or leaving them alone, can have severe detrimental effects on performance, as outstanding I/O queues up and causes bottlenecks. For example, if Queue Depth settings are too high, the Storage ports will quickly become overrun or congested, leading to poor application and VM performance or, even worse, data corruption or loss. Alternatively, if Queue Depth settings are too low, the Storage ports become underutilized, leading to poor SAN efficiency. Should the Queue Depth be correctly optimized, however, the performance of VMs and their corresponding LUNs can be vastly improved, hence a methodology to determine it accurately is imperative.

Generally VM Admins use esxtop to check I/O Queue Depths and latency, with the QUED column showing the queuing levels. With VirtualWisdom though, end users are empowered with the only platform that can measure real-time aggregated queue depth regardless of storage vendor or device, i.e. in a comprehensive manner that takes into consideration the whole path from Initiator to Target to LUN. VirtualWisdom’s unique ability to do this ensures that storage ports are accurately optimized for maximum application health, performance and SAN efficiency.


The esxtop QUED column


So to begin with, it is important to prevent the storage port from being over-run by considering both the number of servers that are connected to it and the number of LUNs it presents. By knowing the number of exchanges that are pending at any one time, it is possible to manage the storage Queue Depths.

In order to properly manage the storage Queue Depths, one must consider both the configuration settings at the host bus adapter (HBA) in each server and the physical limits of the storage arrays. It is important to determine what the Queue Depth limits are for each storage array, and all of the HBAs that access a storage port must be configured with this limit in mind. Some HBA vendors allow Queue Depths to be set at both the HBA and the LUN level, while others allow only an HBA-level setting.
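As a rough sketch of how that array limit constrains the HBA settings, the calculation below derives a safe per-LUN queue depth from an assumed storage-port exchange limit and the number of hosts and LUNs fanned in to that port. The figures are purely illustrative; the real limit should come from the array vendor’s documentation.

# Derive a safe per-LUN queue depth from an assumed storage-port exchange limit.
# Port limit, host count and LUN count below are illustrative assumptions.

def max_lun_queue_depth(port_queue_limit, hosts, luns_per_host):
    """Largest per-LUN queue depth that cannot, even in the worst case,
    present more outstanding exchanges to the port than it can accept."""
    return max(1, port_queue_limit // (hosts * luns_per_host))

port_queue_limit = 2048   # assumed outstanding-exchange limit of one array port
hosts = 64                # ESX hosts zoned to that port
luns_per_host = 4         # LUNs each host accesses through it

print("Per-LUN queue depth should not exceed",
      max_lun_queue_depth(port_queue_limit, hosts, luns_per_host))   # -> 8 with these numbers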

The default value for the HBA can vary a great deal by manufacturer and version and is often set higher than is optimal for most environments. If you set the queue depths too low on the HBA, you could significantly impair the HBA’s performance and underutilize the capacity of the storage port (i.e. underutilize storage resources). This occurs both because the network is underutilized and because the storage system cannot take advantage of the caching and serialization algorithms that greatly improve its performance. Queue Depth settings on HBAs can also be used to throttle servers, so that the most critical servers are allowed greater access to the necessary storage and network bandwidth, as sketched below.
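The throttling idea can be illustrated by dividing a port’s exchange budget across server tiers so that critical hosts are granted deeper queues than less critical ones. The tiers, weights and budget below are assumptions made for the sketch, not recommended values.

# Split an assumed storage-port exchange budget across server criticality tiers,
# giving the most critical hosts a deeper queue than the rest.
port_budget = 2048                       # assumed outstanding-exchange limit of the port
tiers = {                                # hosts per tier and a relative weight per host
    "tier1": {"hosts": 8,  "weight": 4},
    "tier2": {"hosts": 16, "weight": 2},
    "tier3": {"hosts": 40, "weight": 1},
}

total_weight = sum(t["hosts"] * t["weight"] for t in tiers.values())
for name, t in tiers.items():
    tier_share = port_budget * t["hosts"] * t["weight"] / total_weight
    per_host_qd = int(tier_share / t["hosts"])
    print(f"{name}: ~{per_host_qd} outstanding exchanges per host")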

To deal with this, the initial step should be to baseline the Virtual environment to determine which servers already have their optimal settings and which are set either too high or too low. Using VirtualWisdom, real-time Queue Depth utilization can be reported for a given period. Such a report will show all of the initiators and the maximum queue depths recorded during the reporting period. This table can be used to compare the settings on the servers against the relative value of the applications they support. The systems that are most critical should be set to higher Queue Depths than those that are less critical; however, Queue Depth settings should still be within the vendor-specified range. Unless Storage ports have been dedicated to a server, VirtualWisdom often shows that optimum Queue Depth settings lie in the range of 2-8, despite industry defaults tending to be between 32 and 256. To explain this further, VirtualWisdom can drill off a report showing, in descending order, the Maximum Pending Exchanges and their corresponding initiators and server names. The Maximum Pending Exchanges figure counts not only the exchanges pending during the recorded interval but also those opened in previous intervals that have not yet closed.
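A simplified version of that comparison is sketched below: given each initiator’s configured queue depth, its observed Maximum Pending Exchanges and the priority of the applications it hosts, it flags HBAs whose settings look out of step with what they actually do. The records and thresholds are invented for the example rather than taken from a VirtualWisdom report.

# Compare configured HBA queue depths against observed maximum pending exchanges
# and application priority. All records and thresholds are illustrative.
initiators = [
    {"host": "esx01", "configured_qd": 64, "max_pending": 6, "priority": "low"},
    {"host": "esx02", "configured_qd": 4,  "max_pending": 4, "priority": "high"},
    {"host": "esx03", "configured_qd": 8,  "max_pending": 7, "priority": "high"},
]

for i in initiators:
    if i["priority"] == "low" and i["max_pending"] < i["configured_qd"] // 4:
        note = "configured far deeper than it ever uses - candidate to lower"
    elif i["priority"] == "high" and i["max_pending"] >= i["configured_qd"]:
        note = "critical host running at its queue limit - candidate to raise"
    else:
        note = "setting looks reasonable"
    print(i["host"], "->", note)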

Report on Pending Exchanges i.e. HBA Queue Depth

So, for example, if a report such as this were produced for 100 ESX servers, it’s important to consider whether your top initiators are hosting your highest priority applications and whether your initiators with low queue depth settings are hosting your lowest priority applications. Once the appropriate Queue Depth settings have been determined, an alarm can be created for any new HBA that is added to the environment, especially one that violates the assigned Queue Depth policy.

Once this is established, the VirtualWisdom dashboard can then be used to ensure that the combined Pending Exchanges from all of the HBAs are well balanced across the array and SAN fabric.