Undressing Victoria – Could Hitachi's New VSP Rock the EMC boat?



Back in 2004 HDS launched the USP, which was then followed by the great but not so radically different USP-V in 2007. Within that same time frame, HDS' main rival in the enterprise storage market, EMC, busily went about launching the Symmetrix DMX-3, then the DMX-4 and most recently the VMAX. By launching so-called revolutionary features such as FAST (something HDS had already been doing for years with Tiered Storage Manager), EMC's marketing machine quickly created an atmosphere in which the storage world became obsessed with all things 'V', namely vSphere, VMAX and VPLEX. With marketing so powerful that it extended to international airport posters advertising EMC's ability to 'take you to the Private Cloud', you could easily forgive Hitachi for becoming complacent and content with being a company renowned by the masses for just making great vacuum cleaners. Well, thank goodness, after three years in the making, a codename of Victoria, a semi-decent marketing campaign and a 'V' in its final name, HDS have at last launched the new VSP enterprise array…and yes, it's been worth the wait.

Marketed as a 3D scaling storage system, it was pleasing to realize that this wasn't a reference to the tinted glasses needed to look at its rather revolting vomit-green cabinet. (So yes, it certainly can't compare to the Knight Rider looks of the VMAX and probably won't be appearing in an episode of '24'.) Aesthetics aside, and more importantly, the 3D refers to scale up, scale out and scale deep. What HDS mean by this is that you can scale up by adding more resources to the VSP system, scale out by adding more disk blocks, host connections and Virtual Storage Directors, and scale deep by virtualising external heterogeneous arrays behind the VSP. From this premise it's also evident that HDS see the VSP as the foundational block of the recently announced but yet to be released cloud platform, the UCP.

While scale deep is an old tradition that HDS have mastered for years, it's easy to note that the scale out and Virtual Storage Director terms bear more than a passing resemblance to the concept introduced by EMC's VMAX. With four Virtual Storage Directors in each system and four cores within each Virtual Storage Director, the VSP houses a total of 16 cores. Essentially the masterminds of the machine, the Virtual Storage Directors are responsible for managing the VSP's internal operations such as mapping and partitioning. The VSP can then be expanded into a mammoth 32-core system by combining two VSP systems over the PCIe Hitachi Data Switch, scaling up to 2,048 drives with a terabyte of cache. So while an EMC aficionado may immediately point out that the VMAX can offer 128 cores, which dwarfs the VSP's 32, it's worth remembering that with storage virtualization the number of cores that can potentially sit behind the VSP runs into the hundreds.
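For those who like the arithmetic spelled out, here is a trivial back-of-the-envelope sketch of the core and drive figures quoted above; the numbers are simply the ones from this post, not a spec sheet.

```python
# Illustrative arithmetic only; figures are those quoted in this post.
vsds_per_system = 4                  # Virtual Storage Directors per VSP
cores_per_vsd = 4                    # cores within each VSD
single_system_cores = vsds_per_system * cores_per_vsd   # 16 cores in one VSP
coupled_system_cores = 2 * single_system_cores          # 32 cores when two VSPs are combined
max_drives = 2048
cache_tb = 1

print(f"Single VSP: {single_system_cores} cores")
print(f"Coupled pair: {coupled_system_cores} cores, up to {max_drives} drives, {cache_tb} TB cache")
```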

Another point: there is no equivalent of the USP-VM, the mini-me USP-V that couldn't scale up to the size of its big brother. Instead the VSP starts as a single pair of Virtual Storage Directors with no internal storage, able to act as a pure virtualization platform that homogenizes externally attached multi-vendor arrays. With such a proposition, one can just imagine the quivering of the DS8000s, VMAXs, CLARiiONs and EVAs 'confettied' across datacenters, now faced with the prospect of being marginalized as a portion of the potential 255PB of LUNs that can sit behind the VSP's directors.

Of course this is also a great sales pitch to eventually get the same VSP stacked up with internal storage, which can range from 256 SSDs in either STEC's 200GB 2.5-inch or 400GB 3.5-inch format to up to 2.5PB of 3.5-inch SATA drives. Add to that, HDS have taken the pioneering route of housing up to 1.2PB of 2.5-inch SAS drives. Yes, that's right, the HDS VSP has a SAS back end, ready with a 6Gbps SAS interface. While I'm no fan of SSDs sitting on the back end of a storage system behind a RAID controller, processors, SAN switches etc. (can't wait for DRM to hit the mainstream market), a full-duplex SAS back end is nevertheless a definite improvement in taking advantage of the IOPS and throughput capability of SSDs. With up to 128 paths out to the disks and solid state drives, HDS are calling this switching fabric the Grid Switch layer. And when you add in 2.5-inch drives using less power, an increase in IOPS thanks to a higher spindle count and one less cabinet on your datacenter floor, you suddenly see a nice ROI figure being mustered up by your local HDS account manager. Expect EMC and co. to follow suit.
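To put those capacity figures in perspective, here is a rough bit of arithmetic. The SSD counts and sizes are the ones quoted above; the 2TB SATA and 600GB SAS drive sizes are my own assumptions purely for illustration, not confirmed VSP configuration options.

```python
# Rough raw-capacity arithmetic based on the figures quoted above.
# The 2 TB SATA and 600 GB SAS drive sizes are assumptions for illustration only.
ssd_count, ssd_size_tb = 256, 0.4              # 256 x 400 GB STEC SSDs
ssd_raw_tb = ssd_count * ssd_size_tb           # ~102 TB of flash

sata_target_pb, sata_drive_tb = 2.5, 2.0       # up to 2.5 PB of 3.5" SATA
sata_drives = int(sata_target_pb * 1000 / sata_drive_tb)

sas_target_pb, sas_drive_tb = 1.2, 0.6         # up to 1.2 PB of 2.5" SAS
sas_drives = int(sas_target_pb * 1000 / sas_drive_tb)

print(f"SSD raw capacity: {ssd_raw_tb:.0f} TB")
print(f"SATA drives needed for 2.5 PB (assuming 2 TB each): ~{sata_drives}")
print(f"SAS drives needed for 1.2 PB (assuming 600 GB each): ~{sas_drives}")
```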

Also gone are those somewhat prehistoric battery backups that resided in the USP-V as legacy from the USP. Instead, between the aforementioned Grid Switch layer and the back-end enclosures, the VSP hosts an extra layer of cache. This eliminates the need for the old battery backups: the Virtual Storage Directors' data is held in this cache and de-staged to solid state memory in the event of a power loss, ensuring data protection. It's a simple idea but a welcome one for field engineers who can vouch for the pain of having to replace one of those battery packs. Other legacy complications have also been reduced, since the Control Memory (still responsible for all the metadata of the VSP's operations) is now located on the Virtual Storage Director boards and DIMMs, removing the requirement for separate dedicated Shared Memory and Control Memory boards.

Furthermore, despite having borrowed the VMAX concept of coupling engines as well as using Intel processors for their Virtual Storage Directors, HDS have still retained a unique stamp by forsaking the RapidIO interconnects chosen by EMC in favor of their much more familiar Star Fabric architecture. So unlike EMC's complete overhaul of their Direct Matrix architecture, HDS have maintained their non-blocking crossbar switch architecture to the back end while keeping their global cache shared amongst multiple controllers. This familiar HDS method forms the internal network of the VSP, managing its data across the drives, Virtual Storage Directors, BEDs, FEDs and cache.

So while HDS have inadvertently acknowledged EMC's insight in going the Intel route, they've also seemingly taken a leaf out of VMware's DRS book with their custom I/O-routing ASICs. The point being that on both the FEDs and BEDs of the VSP, data accelerator ASICs designed by Hitachi themselves now manage the I/O traffic. Unlike the USP-V, where the ACP and CHP processors were tied to particular ports, the VSP pools its CPU resources, and the ASICs can assign processing power to any front-end or back-end port that requires it at any given time. Personally I think this is a fantastic idea and a real step forward, as it eliminates much of the performance tuning that was previously required to get the same effect. With such a VMware-esque feature it's somewhat ironic that the VSP doesn't yet support VAAI, although news is that it's coming very soon.
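To illustrate the difference between port-bound processors and a shared pool, here is a minimal, purely conceptual Python sketch. The class and method names are hypothetical and bear no relation to Hitachi's actual implementation; the point is simply that any port can borrow cycles from a common pool when it gets busy.

```python
# Conceptual sketch only: ports borrow cores from a shared pool on demand,
# rather than each processor being hard-wired to one port (USP-V style).
from collections import deque

class ProcessorPool:
    def __init__(self, core_ids):
        self.free = deque(core_ids)      # cores not currently servicing a port
        self.assigned = {}               # port -> core

    def assign(self, port):
        """Give any free core to whichever port needs one right now."""
        if port in self.assigned:
            return self.assigned[port]
        if not self.free:
            raise RuntimeError("pool exhausted")
        core = self.free.popleft()
        self.assigned[port] = core
        return core

    def release(self, port):
        """Return the core to the pool once the port's burst is done."""
        core = self.assigned.pop(port)
        self.free.append(core)

pool = ProcessorPool(core_ids=range(16))
pool.assign("FED-1A")    # a busy front-end port grabs a core...
pool.assign("BED-7B")    # ...and a back-end port can draw from the same pool
pool.release("FED-1A")   # freed cycles become available to any other port
```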

Another ground-breaking step, and the one I'm most excited about, is the VSP's new Sub-LUN Tiering feature. Using the now (thanks partly to Marc Farley's terrific YouTube rant) infamous HDS 42MB page size, the new policy-based tiering works at the page level instead of the LUN level. So as a particular page becomes more or less active, or "hot", the VSP will automatically upgrade or downgrade the tier for that page only, regardless of whether it sits on external or internal storage. The objective here is pretty clear: an attempt to optimize your usage of SSDs so you can justify buying more of them. Ironically, what was once considered HDS' Achilles heel with regards to storage efficiency, the 42MB page size, now works out to be ideal. Imagine the nightmares of a smaller page size: valuable storage processor CPU burnt in the desperate search for countless 50KB pages that heat up and need to be moved up to tier 0; not a pretty thought. As this feature is sure to be emulated by other vendors, it will be interesting to see what page sizes they come up with.
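To make the idea concrete, here is a toy sketch of page-level tiering, followed by the metadata arithmetic that makes 42MB pages suddenly look sensible. The tier names, thresholds and access counts are invented for illustration and are in no way HDS' actual policy engine.

```python
# Toy illustration: track how "hot" each 42 MB page is and move only that
# page between tiers, never the whole LUN. Thresholds are hypothetical.
PAGE_MB = 42
PROMOTE_AFTER = 100    # accesses per cycle that make a page "hot" (invented)
DEMOTE_BELOW = 10      # accesses per cycle that make a page "cold" (invented)

# page_id -> (current tier, access count this monitoring cycle)
pages = {0: ("SAS", 350), 1: ("SAS", 4), 2: ("SSD", 2), 3: ("SATA", 180)}

def retier(pages):
    moves = []
    for page, (tier, hits) in pages.items():
        if hits >= PROMOTE_AFTER and tier != "SSD":
            moves.append((page, tier, "SSD"))     # hot page moves up to tier 0
        elif hits < DEMOTE_BELOW and tier != "SATA":
            moves.append((page, tier, "SATA"))    # cold page drifts down
    return moves

print(retier(pages))

# Why page size matters: how many pages must be tracked per TB of data.
for size_kb in (42 * 1024, 50):
    pages_per_tb = (1024 ** 3) // size_kb
    print(f"{size_kb} KB pages -> ~{pages_per_tb:,} pages to track per TB")
```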

Speaking of other vendors, HP, who recently pulled off the takeover of the year with their purchase of 3PAR, have also launched the VSP, albeit with a much nicer cabinet and the OEM moniker of P9500. What is interesting here is that the P9500 (VSP) is clearly a higher-end platform than the InServ arrays, and if indications are correct HP have no intention of disbanding their EVA range either (reports have already surfaced of an EVA rebranded as the P6000). So with the OEM deal still intact, HP currently have every intention of also marketing and pushing the VSP / P9500. Indeed, while at a meeting at one of HP's headquarters during the week of the P9500's release, I was delightedly told of the P9500's amazing APEX functionality. APEX sounded incredible as I was told of an application-level QoS control that would give Pillar's similar feature a run for its money. Strange then that I hadn't heard of any such feature during the HDS launch. Upon further reading about APEX, it was explained that mission-critical data could be given bandwidth priority over less important data. It was then that I suddenly recognized something familiar: this was nothing but a remarketed version of HDS' Server Priority Manager functionality, which has been around for years (you've probably never heard of it because of HDS' poor marketing, but it's actually very good). In fact, the only unique aspect of APEX is that for HP-UX platforms it does indeed allow the prioritization of CPU, cache and storage resources. So not really that significant a differentiator from the VSP, especially if you don't run HP-UX (and to be honest I think they'd have more success pitching how much nicer their cabinet looks). Nonetheless, differentiators or not, the addition of the P9500 to HP's storage portfolio will only add further credence to their growing status as a storage powerhouse.
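For readers who haven't come across Server Priority Manager, here is a heavily simplified sketch of the general bandwidth-prioritization idea: prioritized hosts get their demand satisfied first and everything else shares the leftover. The host names, numbers and logic are made up for illustration and are not APEX's or SPM's actual algorithm.

```python
# Simplified illustration of bandwidth prioritization on a shared port.
# All names and figures are invented for the example.
PORT_BANDWIDTH_MBS = 800

hosts = {
    "erp-prod":   {"priority": True,  "demand_mbs": 500},
    "dev-batch":  {"priority": False, "demand_mbs": 400},
    "backup-srv": {"priority": False, "demand_mbs": 300},
}

def allocate(hosts, port_bw):
    # Prioritized hosts are satisfied first; the rest split what remains.
    grants = {}
    remaining = port_bw
    for name, h in hosts.items():
        if h["priority"]:
            grants[name] = min(h["demand_mbs"], remaining)
            remaining -= grants[name]
    low = [n for n, h in hosts.items() if not h["priority"]]
    for name in low:
        fair_share = remaining // max(len(low), 1)
        grants[name] = min(hosts[name]["demand_mbs"], fair_share)
    return grants

print(allocate(hosts, PORT_BANDWIDTH_MBS))
```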

Another welcome change is the replacement of the demonically slow Storage Navigator management GUI with a much faster and greener-looking one. HDS have also announced a complete refurbishment of their Command Suite software. As well as being quicker and more user friendly, it offers better integration with VMware, allowing you to manage storage for virtual machines. A welcome change for an SRM suite that often looked and performed in an outdated manner not befitting the array (I still have nightmares of carving up LDEVs on the USP in the pre-Quick Format days).

So with new features still to be released, such as integration with VMware's VAAI, support for FCoE and primary deduplication, the VSP has come a long way from its predecessor the USP-V. Taking the best from their competitors and integrating it with their own way of doing things is not a new concept for HDS, and with the VSP they have certainly done that again. But HDS now have a genuinely new product, one that goes far beyond the minor gap filled between the USP and USP-V, successfully combining characteristic tools such as Dynamic Provisioning and virtualization with bleeding-edge technology such as Sub-LUN Tiering. There will be inevitable criticisms from competitors. There will be inevitable squabbles between the vendors. There will be inevitable comparisons between arrays. One thing's for sure though: expect a lot of the VSP's new features to be incorporated into other upcoming arrays pretty soon, Hitachi or not. In the words of Simon Cowell, "Glad to see them back in the game!"


N.B. I received a great explanation and post about APEX from Calvin Zito, also known as HPStorageGuy. He clarifies that there is more of a distinction than I originally suggested, or in his words, "Bottom line, there is no HDS equivalent of APEX" (-:
Here's the link: http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/Application-Performance-Extender-setting-the-record-straight/ba-p/83533#feedback-success