Optimising IBM SVC Performance

As I wrap up my last couple of days working with Virtual Instruments, I've been working closely with our Marketing team to get as much technical content out as quickly as we can. We've just finished another video, this time on optimising IBM SVC performance. Here we've used real customer data (obviously scrubbed), and I think most folks will be surprised at the issues we find. IBM SVC is a great way of virtualising external storage and brings immense benefits such as seamless migrations and quick I/O response times. Needless to say, with the right insight you can drastically improve the performance of your SVC cluster without incurring extra cost.
Here's the video: 

VI's Archie Hendryx Chinwags with Mike Laverick

If you've ever been involved with VMware, it's more than likely that you've heard of Mike Laverick. As well as being a VM guru, vExpert, all-round cool guy and author of great books such as "Administering VMware Site Recovery Manager 5.0", Mike also runs RTFM, a popular site of blogs, info and videos.

Occasionally Mike invites certain vendors or tech folk for a "chinwag" and publishes it on his site, and this week I was lucky enough to have the opportunity to discuss Virtual Instruments' solution and how it enables the successful virtualization of Mission Critical Apps.
Here's a link to the video:

I hope you enjoy it!



Avoid Storage Oversubscription Ratio Problems



Just finished up a quick two-minute video, presented by my colleague Jim Bahn, on how to solve the common problem of SAN oversubscription. Interestingly, I've found oversubscription problems at just about every end user I've visited this last year, so there is clearly a lack of awareness of the issue.



The biggest limitation is that you cannot historically trend the ASIC or blade utilisation of a SAN Director with its native tools. I've sadly had customers with oversubscription issues that hit different blades and ASICs at different times of day, e.g. during nightly backups. One customer had a six-month problem with their Oracle backups, and when we deployed the VirtualWisdom software we found that oversubscription was the cause. Every other night the backups would cause momentary link resets on the blades, which in turn reset each of their tape drives, causing 'shoe-shining' and the prolonged backups. There was no way we could have identified this without the historical trending feature unless we had physically been watching the switches at the exact moment it happened.
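For anyone who hasn't worked out their ratios before, the underlying maths is simple: the oversubscription ratio is the aggregate bandwidth of the host-facing edge ports divided by the aggregate bandwidth of the ISLs or uplinks they funnel into. Below is a back-of-the-envelope sketch in Java; the port counts and speeds are hypothetical, purely for illustration, and this is of course not how VirtualWisdom measures utilisation.

// Back-of-the-envelope SAN oversubscription calculator.
// All figures are hypothetical and purely illustrative.
public class OversubscriptionRatio {

    // Aggregate edge bandwidth divided by aggregate uplink bandwidth;
    // a return value of 4.0 means a 4:1 oversubscription ratio.
    static double ratio(int edgePorts, double edgeGbps,
                        int uplinkPorts, double uplinkGbps) {
        return (edgePorts * edgeGbps) / (uplinkPorts * uplinkGbps);
    }

    public static void main(String[] args) {
        // 16 hosts on 8 Gb/s edge ports sharing 4 x 8 Gb/s ISLs:
        // 128 Gb/s of potential edge traffic into 32 Gb/s of uplink.
        double r = ratio(16, 8.0, 4, 8.0);
        System.out.printf("Oversubscription ratio: %.0f:1%n", r); // prints 4:1
    }
}

A 4:1 ratio like this is perfectly fine while traffic is light; the trouble starts when something like a nightly backup drives the edge ports towards line rate, the uplinks saturate and frames start to queue, which is exactly when the link resets in the story above appear. The native switch tools can show you that instantaneous picture; the trick is seeing how it trends at 2am every other night.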


This is just one of many examples of problems caused by poorly planned oversubscription ratios.

Anyhow, enough of the anecdotes; here's the video:







vSphere 6.0 - What's Needed?

Hi - Below are the details for what will be my final webcast for Virtual Instruments before I move on to another adventure.
I hope you can join, and I look forward to your contributions and feedback!

vSphere 6.0 - What's Needed? - Webinar 23/2 

The new vSphere 5.0 storage features, such as VAAI, Storage DRS, Site Recovery Manager 5 and VASA, highlight the fact that whether on a virtualized or non-virtualized platform, application performance is heavily affected by its underlying storage infrastructure. The silos that once existed between storage and VMware teams are now being challenged as vSphere 5 brings to the forefront the need for a common understanding and integration. So what is needed in the next version of vSphere to achieve the successful virtualization of Mission Critical Applications?

Join Archie Hendryx, Virtual Instruments Senior Solutions Consultant & vExpert, as he discusses how to counter such challenges and ensure a successful virtualization initiative that eliminates risk, optimizes performance and enhances the business continuity and availability of key applications.

Register for free now at: 
http://info.virtualinstruments.com/Webinar-vSphere022312_WhatsNew.html
February 23, 2012 at 9:00 AM PST, 5:00 PM GMT, 12:00 PM ET


VCE FastPath, Virtual Instruments' SAN Probe & Hadoop - 2012 Storage Predictions

Yearly prediction blogs are so clichéd, which is why I’ve always tried to avoid writing one. Despite this, I’ve always made a mental note of the technology, products or companies that I thought were going to do really well in the upcoming year. Back in 2008 I felt VMware were going to really take off after the release of 3.5. In 2009 I had a gut feeling DataDomain would explode just before they were bought by EMC. In 2010 I spoke to a friend about how 3PAR’s technology could no longer be ignored, and in 2011 I still wasn’t convinced that FCoE would overtake FC in revenue despite all the analysts’ claims. But why believe me when I’d never put these thoughts on paper? So now, at the beginning of 2012, I’ve decided to put my money where my mouth is, pull out my crystal ball and document my predictions.

First off I’m going with VCE’s Vblock and their new FastPath feature. VCE (or the company formerly known as Acadia) have always been an exciting prospect with their all-in-one Vblock solution. While other vendors such as HDS, HP and Dell all plot the launch of their own unified computing block, VCE have had the advantage of being first on the market and consequently the first to learn and adapt their messaging and offering in accordance with customer needs. One such initiative is what is being coined FastPath. In essence, FastPath is a wizard-driven, GUI-based deployment of a Vblock VDI infrastructure built on best-practice reference architectures, which accelerates deployment from months to days. I’ve often blogged on the many benefits of VDI and the immense CAPEX and OPEX savings that come with it; to be honest, it’s a no-brainer. What I did fail to mention was the sometimes long, drawn-out and painful PoC process required to prove the value of a VDI deployment to a potential customer. Well, FastPath is the solution to that conundrum.

VCE Vblock - A solution not a 'box'
Available in pre-configured Vblocks, FastPath allows the customer to choose from a variety of models that scale according to their needs, eliminating the risk of sizing errors while allowing them to scale out as needs grow. So if you have a requirement for 500, 1,000 or 1,500 desktop users, you choose the appropriate preconfigured model and you’re ready to go: your VDI roll-out is based on known capacities, avoiding unnecessary pre-purchasing of hardware. Added to this, the Vblocks leverage proven designs and reference architectures via an installation wizard that focuses on the performance and usage specific to your environment, mitigating any risk to VDI success. The installation wizard immediately configures the VMware View components as well as the connection broker, completes in minutes, and even creates and optimises the VDI storage layout that can be cloned as ‘Gold’ master images.
VCE Vblock FastPath - Preconfigured with a single SKU

The business benefits are obvious: customers can now accelerate their turnaround time from order to installation and enjoy a seamless roll-out from PoC to production. Your TCO is easily quantifiable, as you know exactly what you’re acquiring, how it will operate and perform, and how much the whole package costs. While FastPath is for VDI deployments, it wouldn’t be surprising to see VCE adopt a FastPath strategy for other Vblock deployments, such as Oracle or SAP P-to-V migrations, or primary and secondary Vblock DR setups that leverage Site Recovery Manager. The possibilities are numerous, and 2012 could well be the year when FastPath transforms the erroneous mindset of the Vblock as a unified hardware computing block into that of an all-in-one, quick-to-deploy and essential solution for the business.


Second is, obviously, a technology that I hold close to my heart, having worked for the company behind it, Virtual Instruments: the SAN Performance Probe. Initially VI depended on Finisar technology for their probe products and their unique ability to track millisecond latency across Fibre Channel SAN infrastructures. Now, with last year’s launch of their own SAN Availability Probe, they’ve seen hardware sales rocket as they’ve empowered Storage, Server & VMware administrators to master the once complex art of FC SAN optimization via an easy-to-use dashboard GUI. While initially seen as an FC SAN troubleshooting tool, it’s quickly becoming apparent via customer use cases that the value of the platform extends far beyond the realm of the SAN administrator. Already customers have found the probe provides them the ability to de-risk disaster recovery, optimize backups and safeguard the virtualization of Tier 1 applications, as well as optimize the performance of their existing infrastructure while offsetting future procurement.
Virtual Instruments' SAN Performance Probe - Redefining Infrastructure De-risking
One of the keys to success for any company in such a highly competitive start-up market is to have a 'Blue Ocean Strategy', experienced and top-class leadership, and a vision for the future. The reality is that the product has no competitor, and while this may have irked some vendors into producing FUD claiming otherwise, it speaks volumes that a company which has yet to reach the 200-employee mark could be rattling the cages of such big corporations. Add to the mix an executive team that includes a legend of the industry such as former Symantec CEO John W. Thompson, and veterans from EMC, McData and HDS as VPs of Sales, Pre-Sales, Marketing and Services, and it’s not surprising that a new product from a relatively new start-up can so easily walk into large enterprise accounts and justify its unique value. As VI’s customer base inevitably grows in 2012, so too will the SAN Performance Probe's use cases and consequent business value.

Lastly is a technology that was named after a kid’s toy elephant - Hadoop. Like all great things in life, this Java-based programming framework is free. Part of the Apache project and inspired by the techniques Google devised to present meaningful results back to its users from all the information it was indexing and collecting, Hadoop is the solution to what will be the term of 2012, i.e. ‘Big Data’. The long-standing problem that Google and its like faced, i.e. masses of structured and unstructured data and the challenge of running process-intensive analytics, was always an expensive proposition in the context of a traditional centralized database system. So instead of being limited to a single disk mapped to eight processors, Hadoop simply breaks up an application into numerous small fragments, each of which can be run on any node in a cluster. Hence, in a cluster of servers that each have eight CPUs, Hadoop will send your code across those numerous servers, enabling you to run your indexing job with all those processors working in parallel, quickly and efficiently, and still return your results as a single readable whole.
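To make that concrete, below is a condensed version of WordCount, the canonical 'hello world' of Hadoop MapReduce from the standard Hadoop tutorials. The map step runs in parallel against each fragment of the input and emits a count of 1 for every word it sees; the reduce step sums those counts per word; Hadoop itself handles spreading both steps across the cluster and stitching the output back together.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map: runs on whichever nodes hold the input fragments, emitting (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: gathers every 1 emitted for a given word and sums them.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-sum locally to cut shuffle traffic
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

What makes this 'Big Data' friendly is everything you don't see: HDFS splits the input across the cluster, the mappers run where the data already lives, the shuffle routes every occurrence of a word to a single reducer, and the summed totals come back as one readable result, exactly the 'many fragments, one whole' behaviour described above.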

Silly logo aside, Hadoop is key to the serious market of Big Data
With the Hadoop framework already adopted by the likes of Yahoo, IBM and Google, 2012 could well be the year when Hadoop moves beyond search-engine sites and finds more prominence in the retail and finance sectors. That is not to say that current data warehouses or transaction processing systems are about to be ripped out of these sectors. Instead, when these traditional databases reach their peaks, running Hadoop will enable further analysis across multiple data feeds in a single platform at a relatively cost-effective price. So, for example, in the finance sector Hadoop will easily find a useful space in identifying transaction fraud, where large data sets for modelling and backtesting need to be created. Other use cases could include supporting compliance by using Hadoop for the daily processing of equity-market data, or even utilizing it to consolidate the data warehouses that run loan, banking and credit card consumer products. As for the retail sector, their drive towards cost-effective solutions to deal with a growing amount of consumer and product information is another ideal fit for a Hadoop-based solution. What retail outlet wouldn’t want to provide an online customer experience with product search results comparable to Google’s? In fact, such is Hadoop’s potential for the enterprise that even EMC have taken note, with the recent launch of their Isilon scale-out NAS that incorporates Hadoop's Distributed File System. This could just be the beginning for Hadoop as the big vendors start to give their seal of approval.


So while there are a number of other technologies, products and vendors that I feel are going to cause some waves this year, such as HDS' HAM, Tintri, EMC's VFCache, IBM's SVC 6.2 support for VAAI and of course VMware's eventual move into PaaS following the acquisition of SpringSource, I'm putting my neck on the line with these three as guarantees: VCE's Vblock FastPath, Virtual Instruments' SAN Availability Probe and the open-source Hadoop. Either way, 2012 looks to be another great year for technology, and storage innovation in particular.