VCE's Vblock: Simplifying The Private Cloud Strategy


Today we will be talking about VCE's cloud infrastructure product, the Vblock. A recent Gartner study predicting that by next year 60% of enterprises will embrace some form of cloud adoption has energized the competitive cloud vendor market. But at the same time, does the cloud industry need to be driven by vendor competition or by vendor collaboration? Archie Hendryx of VCE Technology Solutions discusses this very matter.

EM360°: Could you tell us about VCE and why cloud has played a big part in your company’s solutions?

Archie: VCE is a unique start-up formed via joint investments from EMC, Cisco, VMware and Intel that has been operating for just over three years. Its focus is solely on building the world's most advanced converged infrastructure, the Vblock. The Vblock is a pretested, prevalidated, preconfigured and, more importantly, pre-integrated infrastructure solution of storage, compute, networking and hypervisor; in other words, it ships to the customer as a single SKU and a single product.

Personally I like to think of VCE as a revolutionary that has changed the way we view infrastructure, as it manufactures and sells infrastructure as a product, much in the way you buy a car such as an Audi. When you buy an Audi it may contain components from different vendors, but what the end user is purchasing is a single product. Similarly with the Vblock, while we may use different components from our investors Cisco, EMC, VMware and Intel, the end user is acquiring a product. Because it's a standardized product, the Vblock models are exactly the same regardless of geographical location, which completely transforms and simplifies the customer experience of infrastructure and consequently mitigates the typical risk associated with it.

As for how the cloud has played a big part in VCE's success, one of the major criticisms of private clouds is that the end user still has to build, manage and maintain the infrastructure, to the extent that they are continuing the 'keeping the lights on' approach of IT. Ultimately this lacks the economic benefit that makes cloud computing such an intriguing concept. Hence what we and our customers quickly realized is that a private cloud's success ultimately depends on the stability, reliability, scalability and performance of its infrastructure. By going the Vblock route our customers immediately attain that stability, reliability, scalability and performance and consequently accelerate their private cloud initiatives. With support issues, for example, VCE alone owns the ticket because the Vblock is our product. Once the Vblock has been shipped, problems that might be faced by a customer in Glasgow can easily be tested on a like-for-like standard Vblock in our labs, which rapidly resolves performance issues and trouble tickets.

The other distinctive feature of the Vblock is its accelerated deployment. We ship to the customer a ready-assembled, logically configured product and solution in only 30-45 working days from procurement to production. This has an immediate effect on the total cost of ownership, especially when the business demands an instant platform for its new projects.

EM360°: Your latest cloud infrastructure solution sees the Vblock's components integrating with VMware's new cloud solutions suite. Can you tell me why industry collaboration is so prominent in today's market?

Archie: What I think has driven this is a change in customer mindset initiated by the concept of cloud computing. Customers are reassessing the way they procure IT, and they want a simplified and accelerated experience that doesn't require going to multiple vendors and solutions. I think vendors that are still only focused on storage or servers, and have not looked at expanding their offerings via alliances or acquisitions, are either going to fold or be swallowed up by the bigger fish as they look to add to their portfolios. This is one of the reasons why the latest announcement from VMware of their vCloud suite, and of course VCE's support and integration for it, is so exciting.

If VCE and the Vblock are responsible for accelerating your journey to the private cloud, you could say that adding the vCloud suite would pretty much give it a major turbo boost.


EM360°: Are copyright factors, or other vendors sussing out each other’s strengths and weaknesses, a problem when you encounter a project like this?

Archie: That's a really interesting question, and certainly I have experienced that in previous roles, especially with initiatives such as the Storage Networking Industry Association (SNIA) and SMI-S compliance. We were always promised that SMI-S compliance would give us the utopia of a single pane of glass for heterogeneous storage arrays, regardless of whether the storage array was from HDS, HP or EMC. Sadly this was never the case, as none of the vendors opened up fully and you only ended up with around 60% of the functionality, which ultimately meant you went back to the native tools and the multiple management panes of glass you had anyway. You could not really blame the vendors, as it would be naive to think that one vendor would allow its competitor to dissect their microcode. This mindset is not going to change, so that is why you will see vendors deciding to procure their own server companies or storage vendors to provide this end-to-end stack.

At VCE we are in a unique position where our investors are not competing with each other, and for us they are ultimately the component providers to our product. We don't necessarily support or include all of our investors' products or portfolios as components, only those we feel best integrate with our single end user product. Once we have our specific components defined from our investors based on our standards, we then pre-integrate and manufacture our product as a comprehensive solution. While our competitors and even our investors have large portfolios of products and offerings, VCE only do Vblocks and hence focus solely on improving and optimizing Vblocks, enabling us to do things which others in the industry have only dreamed of, and this will be announced very soon.


EM360°: Today's enterprise market is obviously rather confused, which is what some other analysts are also thinking. I don't think some companies know what they want for their departments, whether to embrace public, open or private clouds, or a bit of both with hybrid functions. A lot of vendors are doing their own spin on cloud, particularly the niche players. Is the industry doing enough to simplify the product offering?

Archie: In a nutshell, no. There is still a lot of confusion out there and smokescreen marketing from various vendors, and this hasn't helped the end user decide or make the distinction between the various offerings and what is best for them. What we have found most recently with a lot of our enterprise clients is that they initially look at us as part of a storage, server or datacenter refresh. While they may have some cloud initiatives, they really have little or no idea of how to achieve them, certainly in terms of the traditional model of IT procurement and deployment.

Once they understand the concept of the Vblock and how VCE can provide them a productized, risk-free infrastructure, we immediately see them come to the realisation of how this could be aligned to a private cloud model that in turn could develop into a hybrid cloud. Once the customer realizes how agile and quick the deployment of their infrastructure could be with a Vblock, we nearly always find them talking and feeling freer to think higher up the stack, with strategic discussions and plans on how they can deploy a management and orchestration solution and a service portal. Ultimately, if you want people to really understand the cloud and what's best for them, you've got to show them how you take away the risk from their traditional IT and infrastructure challenges.


EM360°: Have we seen innovation thrive in the cloud infrastructure management market, and what kinds of developments have really caught your eye today?

Archie: There are a lot of great products and suites out there. Every day we are seeing improvements in the look and feel of such products as they come closer to providing that public cloud experience to the private cloud. I think the challenge for all of these solutions up to now is that they have had to integrate with all of the components of the infrastructure as separate entities, especially when it comes to designing and deploying orchestration. Without trying to reveal too much of what VCE will be bringing out, I can certainly say that it will completely revolutionize and simplify this, where the product will now be managed, monitored and orchestrated as exactly that, a single Vblock product. When this development comes it will really excite many and completely transform the private cloud infrastructure model going forward.

EM360°: Are there any final thoughts you would like to leave our readers with as to how the cloud infrastructure market will play out in the future, what kind of systems we could be using and how enterprises should look to plan ahead?

Archie: The industry is at an inflection point. The approach to IT is changing, and that is affecting the way customers and vendors approach and procure infrastructure, specifically with regard to the mission critical applications that they ultimately depend on. This is going to lead to more converged infrastructure offerings that will eventually get to the point where VCE already is, which is a standardized product offering, or as we like to call it an x86 mainframe. One customer CTO recently said to me, 'If I don't purchase the power and cooling of my datacenter as individual components, why should I do that with my infrastructure?' That summed it up for me, because there is going to come a time when people will look back at the way open systems IT was purchased and deployed as being as ludicrous as someone today buying all of the components of a laptop, putting them together, somehow expecting it all to work perfectly and then expecting it to be supported seamlessly by all of the component vendors.

To take that laptop analogy further, what we will eventually see with infrastructure viewed and built as a product is a new way to manage, monitor and update it as a product. For example, when you update your laptop you are automatically notified of the patches and it's a single click of a button for the single product. You don't receive an update for your keyboard, followed by an update for your screen, only to be sent another update for your CD-ROM a week later. Similarly, when it comes to support you don't log a call with the manufacturer of your laptop's CD-ROM component; you go directly to the manufacturer of the product. Imagine that same experience with your cloud infrastructure, where it alerts you to a single seamless update for the whole product, where it has a true single management pane and presents itself to the end user as a single entity. Imagine how that would simplify the configuration, orchestration and management of the private cloud. That's where the future lies and, to be honest, it might not be that far away.

Excerpt from interview & live podcast with Enterprise Management 360 Magazine

SAP HANA Aims to Make Oracle Stranglehold a Distant Memory


When the character Maverick from the movie Top Gun exclaimed, “I feel the need, the need for speed”, you'd be forgiven for mistaking it for a sound bite from a CIO discussing their transactional databases. Whether it's a financial organization predicting share prices, a bank knowing whether it can approve a loan or a marketing organisation reaching consumers with a compelling promotional offer, the need to access, store, process and analyze data as quickly as possible is an imperative for any business looking to gain a competitive edge. Hence when SAP announced their new in-memory platform for enterprise applications, HANA, in 2011, everyone took note as it promised the advantage of real-time analytics. SAP HANA promised not just to make databases dramatically faster, like traditional business warehouse accelerator systems, but to speed up the front end, enabling companies to run arbitrary, complex queries on billions of records in a matter of seconds as opposed to hours. The vendors of legacy traditional databases were facing a major challenge, most notably the king of them all… Oracle.

The Birth and Emergence of Big Data
Back in the days of the mainframe, you'd find the application and the transactional data of reporting databases physically stored in the same system. This was due to applications, operating systems and databases being designed to maximize their hardware resources, which consequently meant you couldn't process transactions and reports simultaneously. The bottleneck here was cost, in that if you wanted to scale you needed another mainframe.

After the advent of client/server computing, where applications ran against a centralized database server via multiple cost-effective application servers, scalability was achieved by simply adding application servers. Regardless of this, a new bottleneck quickly emerged: systems relied on a single database server, and requests from an ever-increasing number of application servers ended up causing I/O stagnation. This problem was exacerbated by OLTP (online transaction processing), where report creation required the system to concurrently read multiple tables in the database. Added to this, servers and processors kept getting faster while disks (despite the emergence of SSD) were quickly becoming the bottleneck for automated processes that produced large amounts of data and, in turn, generated ever more report requests.

The net effect was a downward spiral: more users requiring more reports from the databases meant ever larger amounts of data being requested from disks that simply weren't up to the job. When you then factored in the data proliferation of external users caused by the Internet, and pressure-inducing laws such as Sarbanes-Oxley, the demand to analyze even more data even quicker reached fever pitch. With data and user volumes increasing by a factor of thousands compared to the I/O capability of databases, the transaction-based industry faced a challenge that required a dramatic shift. Cue the 2011 emergence of SAP's HANA.

Real-Time In-Memory Platform Presents a Groundbreaking Approach
One of the major advantages of SAP HANA's ability to run in real time is that it removes the requirement for data redundancy, as it's built to run as a single database. With clusters of affordable and scalable servers, transactional and analytical data run on the same database, hence eliminating different types of databases for different application needs. Oracle, on the other hand, has built an empire on exactly the opposite.



Oracle has thrived on a model where companies generally start with a simple database that's utilized for checking sales orders and ensuring product delivery to customers, but as the business grows they need more databases with different and more demanding functions. Functions such as managing customer relationships, complex reporting and analysis drive a need for new databases that are separate from the actual business systems, requiring data to be moved from one system to another. Eventually you have a sprawl of databases, as existing ones are unable to handle the workloads, making it almost impossible to track data movements let alone attain real-time updates. So while the Oracle marketing machine is also pitching the benefits of in-memory via its Exalytics appliance and in-memory database, TimesTen, Oracle is certainly in no rush to break this traditional model of database sprawl and the money-spinning licenses that come with it.

Looking closely at the Oracle Exalytics/TimesTen package, despite the hype it is merely an add-on product, meaning that an end user still needs a license for the transactional database, another license for the data warehouse database and yet another license for TimesTen for Oracle Exalytics.

Moreover, the Oracle bolt-on approach serves to sell more of their commodity hardware and in some ways perversely justify their acquisition of Sun Microsystems, all at the expense of the customer. Because the Exalytics approach continues the traditional requirement for transactional data to be duplicated from the application to the warehouse and once again to Exalytics, the end user not only ends up with three copies of the data but also needs three tiers of storage and servers. In contrast, SAP HANA is designed as a single database that runs both transactional applications and Business Warehouse deployments. Not only does SAP HANA's one copy of data replace the two or three required for Oracle, it also eliminates the need for materialized views, redundant aggregates and indexes, leaving a significantly reduced data footprint.


Comparing HANA to Oracle’s TimesTen and Exalytics
As expected, Oracle have already unleashed their FUD team with bogus claims and untruths against HANA, as well as pushing their TimesTen as a like-for-like comparison. This is hugely flawed because they fail to acknowledge or admit that SAP HANA is a completely groundbreaking design as opposed to a bolt-on approach. With SAP HANA, data is completely managed and accessed in RAM, consequently doing away with the requirement for MOLAP, multiple indexes and other tuning features that Oracle pride themselves on.

Furthermore, despite the Oracle FUD, SAP HANA does indeed handle both unstructured and structured data, as well as utilise parallel queries for scaling out across server nodes. In this instance Oracle are trying hard to create confusion and subsequently distract the market from realizing that the TimesTen with Exalytics package still can't scale out beyond its 1TB RAM limit, unlike SAP HANA, where each container can store up to 500TB of data, all executable at high speed.

With an aggressive TCO and ROI model compared to a traditional Oracle deployment, SAP HANA also proves a lot more cost-effective. With pricing based on increments of 64GB of RAM and the total amount of data held in memory, licenses are fully inclusive of production and test/development requirements as well as the necessary tools.
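
To make that sizing model concrete, here is a small illustrative sketch in Python of licensing in 64GB increments; the unit price is an invented placeholder, not a real SAP list price.

```python
# Illustrative only: license sizing for an in-memory platform priced in
# 64GB-of-RAM increments, as described above. The unit price is a made-up
# placeholder, not an actual SAP list price.
import math

INCREMENT_GB = 64        # licensing increment described above
UNIT_PRICE = 60_000      # hypothetical price per 64GB unit

def license_units(data_in_memory_gb: float) -> int:
    """Round the in-memory data volume up to whole 64GB units."""
    return math.ceil(data_in_memory_gb / INCREMENT_GB)

for data_gb in (100, 512, 1200):
    units = license_units(data_gb)
    print(f"{data_gb}GB in memory -> {units} units -> ${units * UNIT_PRICE:,}")
```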


SAP HANA's Embracing of VMware
Furthermore, while Oracle has taken a belligerent stance towards VMware and the cost savings it brings to end users, SAP has embraced it. The recent announcement that SAP HANA supports VMware vSphere will provide a vast competitive advantage, as it will enable customers to provision instances of SAP HANA in minutes as VM templates, as well as gain benefits such as VMware Distributed Resource Scheduler (DRS) and vSphere vMotion. By virtualizing SAP HANA with VMware, end users can quickly have several smaller HANA instances sharing a single physical server, leading to better utilization of existing resources. With certified, preconfigured and optimised converged infrastructures such as the Vblock around the corner, SAP HANA appliances could be shipped with vSphere 5 and SAP HANA pre-installed within days, enabling rapid deployment for businesses.

The Business Benefits of Real-Time
With business and transactions being done in real time, SAP HANA ensures that the data and the analytics that come with them are also in real time. The process of manually polling data from multiple systems and sorting through it is inadequate at a time when businesses are facing unpredictable economic conditions, volatile demand and complex supply chains. The need is for real-time metrics aligned to supply and demand, where a retailer's shelves can be accurately and immediately stocked, eliminating unnecessary inventory costs, lost sales opportunities and failed product launches. Being able to instantly analyze data at any level of granularity enables a business to quickly respond to these market insights and take decisive actions, such as transferring inventory between distribution centers based on expected sales or altering the prices of promotions based on customer demand. Instead of waiting for processes that take hours, days or even weeks, SAP HANA's real-time capabilities enable businesses to react to incidents as they happen.

Ultimately SAP HANA is a revolutionary step forward that will empower organizations to focus more on the business and less on the infrastructure that supports them. With the promise of new applications being built by SAP to support real time decision making as well as being able to run existing applications, SAP HANA presents the opportunity to not only transform a business but also the underlying technology that supports it. 




VCE Vblock to be showcased at the Gartner Data Center Summit


This week I'll be at the Gartner Data Center Summit at the Park Plaza, Westminster in London, 27-28 November, where VCE is a Silver Sponsor of the event.

As usual the VCE stand will be bustling with activity, as we will be featuring the Vblock™ System, exciting and engaging demos and plenty of opportunities to win great prizes.


DEMO TOPICS WILL INCLUDE:
- Data Protection
- Management & Orchestration
- VDI/Desktop Virtualization
- And much more...

Those wearing their VCE “BACK OFF” gear have the opportunity to be spotted on the show floor and win a prize on the spot. The prize patrol will be out and about looking for the most creative displays of “BACK OFF” gear.

If you've not registered for the event, you can do so here:


Look forward to seeing you there!

SplitRXMode – Taking VMware Multicasting to the Next Level

With every new version of vSphere, you're almost guaranteed an abundance of new features aimed at not only improving previous functionality but also making a VM admin's life easier. Occasionally though, amongst the copious list of new options there's always the odd feature that gets overlooked, forgotten or quite simply ignored. One such feature is SplitRXMode, which to my surprise few people knew of when I recommended it to a customer this week. So what better subject to evangelize and blog about next?

Before discussing SplitRXMode, a quick recap on some networking basics and how packet forwarding is done on an IP network.

First there's the method most folks are familiar with, which is Unicast. Unicast transmission sends messages to a single network destination identified by a unique IP address, which enables straightforward one-to-one packet delivery. The Broadcast method, on the other hand, transmits a packet to every device on the network that is within the broadcast domain.

[Figure: The Unicast method is not suitable for information that needs to be sent simultaneously to multiple recipients]

Finally there's the multicast method, where packets are delivered to a group of destinations denoted by a multicast IP address. Multicasting is typically used by applications that need to send information simultaneously to multiple destinations, such as distance learning, financial stock exchanges, video conferencing and digital video libraries. Multicast sends only one copy of the information along the network, with any duplication happening at points close to the recipients, consequently minimizing network bandwidth requirements.

[Figure: The Multicast method sends only one copy, minimizing bandwidth requirements]
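
To put rough numbers on that bandwidth saving, here is a quick back-of-envelope sketch in Python; the stream rate and receiver count are arbitrary examples.

```python
# Back-of-envelope sender-side bandwidth comparison (example figures only):
# unicast must send one copy per recipient, while multicast puts a single
# copy on the wire and lets the network replicate close to the receivers.
STREAM_MBPS = 4      # e.g. one market-data or video feed
RECEIVERS = 500

unicast_mbps = STREAM_MBPS * RECEIVERS   # one copy per receiver
multicast_mbps = STREAM_MBPS             # one copy, replicated by the network

print(f"unicast: {unicast_mbps} Mbps, multicast: {multicast_mbps} Mbps")
```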

For multicast, the Internet Group Management Protocol (IGMP) is utilized to establish and coordinate membership of the multicast group, so that only single copies of information are sent from the multicast sources over the network. Hence it's the network that takes responsibility for replicating and forwarding the information to multiple recipients. IGMP operates between the client and a local multicast router, while layer 2 switches with IGMP snooping listen in on these IGMP transactions to learn which interfaces require multicast traffic. Between the local and remote multicast routers, multicast routing protocols such as PIM are then used to direct the traffic from the multicast server to the many multicast clients.

[Figure: A typical IGMP architecture layout]
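
For readers who want to see an IGMP join in action, below is a minimal multicast receiver sketch using Python's standard library; the group address and port are arbitrary examples. Setting IP_ADD_MEMBERSHIP is what causes the host to issue the IGMP membership report described above.

```python
# Minimal multicast receiver: joining the group triggers an IGMP membership
# report, which IGMP-snooping switches and multicast routers act upon.
import socket
import struct

MCAST_GRP = "239.1.1.1"   # administratively scoped multicast group (example)
MCAST_PORT = 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join the group on the default interface; the kernel sends the IGMP join.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GRP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(1500)
    print(f"received {len(data)} bytes from {addr}")
```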

In the context of VMware and virtual switches, there's no need for the vSwitches to perform IGMP snooping in order to recognise which VMs have IP multicast enabled. This is due to the ESX server having authoritative knowledge of the vNICs, so whenever a VM's vNIC is configured for multicast the vSwitch automatically learns the multicast Ethernet group addresses associated with the VM. With the VMs using IGMP to join and leave multicast groups, the multicast routers send periodic membership queries and the ESX server allows these to pass through to the VMs. The VMs that have multicast subscriptions will in turn respond to the multicast router with their subscribed groups via IGMP membership reports. IGMP snooping in this case is done by the usual physical layer 2 switches in the network so that they can learn which interfaces require forwarding of multicast group traffic. So when the vSwitch receives multicast traffic, it forwards copies of the traffic to the subscribed VMs in a similar way to Unicast, i.e. based on destination MAC addresses. With the responsibility of tracking which vNIC is associated with which multicast group lying with the vSwitch, packets are only delivered to the relevant VMs.

[Figure: Multicasting in a VMware context prior to SplitRXMode]

While this method worked fine for some multicast applications, it still wasn't sufficient for the more demanding ones, and hence stalled their virtualisation. The reason is that packet replication for the receiving VMs was processed in a single shared context, which ultimately led to constraints: with a high VM-to-ESX ratio came a high packet rate that often caused large packet losses and bottlenecks. So with vSphere 5, the new SplitRXMode was introduced to not only compensate for this problem but also enable the virtualisation of demanding multicast applications.

With SplitRXMode, received packets are now split and processed in multiple, separate contexts, with the packet replication conducted by the hypervisor instead. Where there are multiple receivers on the same ESX server, this eliminates the requirement for the physical network to transfer multiple copies of the same packet. With the only caveat being that it requires a VMXNET3 virtual NIC, SplitRXMode uses multiple physical CPUs to process the network packets received in a single network queue. This feature can noticeably improve network performance for certain workloads. Instead of a shared network queue, SplitRXMode enables you to specify which vNICs process packets in a separate context, which consequently improves throughput and maximum packet rates for multicast workloads. While there may be some concern about the extra CPU overhead this incurs, those running Intel's powerful new E5 processors should have little or nothing to worry about.
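
The feature is controlled per vNIC via the ethernetX.emuRxMode advanced setting in the VM's .vmx file ("1" to enable, "0" to disable). As a sketch of how you might script that change with the pyVmomi SDK, with a hypothetical vCenter host and VM name:

```python
# Sketch: enable SplitRXMode on a VM's first vNIC by setting the documented
# ethernetX.emuRxMode advanced option ("1" = enabled, "0" = disabled).
# Host, credentials and VM name are placeholders; the vNIC must be VMXNET3.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())  # lab use only
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "multicast-vm01")
    view.Destroy()

    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key="ethernet0.emuRxMode", value="1")])
    # The VM typically needs a power cycle for the new setting to take effect.
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
finally:
    Disconnect(si)
```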

So if you're considering multicast workloads with multiple, simultaneous network connections for your VMware environment (e.g. multiple VMs on the same ESX server receiving multicast traffic from the same source), then take a closer look at SplitRXMode. If not, you might, just like everybody else I've spoken to, completely forget about it.

CIO Success Simplified with Vblock


Of the many CIOs that I have had the pleasure to either work for or talk with, one of the main concerns that constantly resonates is job longevity. When the average tenure of a CIO is only 4-5 years, and trends show that this is likely to shorten, it's no surprise that the role of a CIO requires instant success in minimal time and typically with minimal budget. Nearly every CEO's mandate to a CIO is for IT to be better, faster and cheaper.

With this challenge the three steps to success for any CIO are plain and obvious. They are:

1) Eliminate risk
2) Improve Cycle Times
3) Reduce Cost 

While these three steps may incorporate subsidiary aspects, such as demonstrating how IT best serves the business, building the business's confidence in technology and making IT more effective, these all eventually fall under one of the three steps mentioned above.

Step 1: Eliminate Risk

Firstly by eliminating risk from your IT environment you immediately address the business concerns of:

- The revenue impact of downtime
- The revenue impact of performance slowdowns
- The impact to the business's brand value

Step 2: Improve Cycle Times

With a common business perception that legacy IT is too slow to deliver, improved cycle times are an imperative. This requires a solution that can accelerate the following, and of course do so risk free:

- Virtualisation and consolidation
- Refresh projects
- New application and service roll-outs
- Private cloud initiatives

Step 3: Reduce Cost

The last and most obvious one also presents the biggest challenge, especially as customarily the last thing a new CIO can do is ask for a large investment to implement their new IT strategy. The business will quickly recognize a CIO's success if they can prove that during their tenure they reduced CapEx and OpEx as well as Total Cost of Ownership.



So it's imperative at this point to remember that a CIO should not be concerned with buying technology from different silos and vendors, but instead with acquiring solutions that solve business problems. Long gone are the days when it was acceptable for a CIO to proudly boast about the magnitude of their data centres and the large technology estate they had accumulated in an attempt to ensure everything was fully redundant. Instead the key drivers are simplification, standardization and consolidation. This is where the concept of VCE's Vblock is key to a CIO's success.

Infrastructure more often than not doesn't carry the same glamour or prominence to the business as a key application such as SAP, but infrastructure is in essence the heart and soul of a business: if the server or storage goes down, the application won't work, which ultimately means you cannot ship and sell your product. That is why the three steps to CIO success are linked to a successful infrastructure.

How to Eliminate Risk:

An integrated stack should entail a robust disaster recovery and business continuity solution that can not only be tested and proven but also implemented and run with minimum complication.

This should also incorporate the de-risking of application migrations from physical to virtual platforms, and more specifically of the key applications that the business depends on.

Moreover this means a de-risked maintenance and operational procedure for the IT environment that is pretested, prevalidated and predictable and consequently eliminates any unplanned downtime.

In the past, eliminating risk in this way has meant countless testing and validation procedures, where every minute spent testing is a minute not spent growing the business. A true converged infrastructure can immediately resolve this.


How to improve Cycle Time:

Delivering a predefined, pre-integrated stack, in essence a plug-and-play data centre that's built fit for purpose in typically only 30 days, can quickly achieve this by cutting typical infrastructure delivery times by three months. Having proven infrastructure in minimal time allows the application owners to roll out new services in a fraction of the time and consequently at a fraction of the cost.

How to Reduce Cost:

The key to this is to link any proposed investment to a tangible ROI that spans at least three years. Most vendors have made the mistake of determining ROI based on virtualizing an entirely physical infrastructure; this rarely works, as most organisations have already virtualised to some extent. Instead, an incremental value needs to be formulated that is linked to the virtualisation of key business-critical applications.
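
As a purely illustrative sketch of such a model, the following Python snippet (all figures invented) computes a three-year ROI from the incremental annual benefits of virtualising those business-critical applications.

```python
# Illustrative three-year ROI model along the lines described above: value is
# the incremental benefit of virtualising business-critical applications,
# not of virtualising an entirely physical estate. All figures are invented.
def three_year_roi(investment: float, annual_benefits: list[float]) -> float:
    """ROI = (total benefit - investment) / investment over the period."""
    total_benefit = sum(annual_benefits)
    return (total_benefit - investment) / investment

investment = 2_000_000                              # converged stack, year 0
annual_benefits = [600_000, 1_100_000, 1_400_000]   # ramping incremental value

print(f"Three-year ROI: {three_year_roi(investment, annual_benefits):.0%}")
```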

Additionally, with an integrated solution across the stack there's no need to manage multiple components of an infrastructure, and consequently the multiple failure points that preoccupy multiple silos. This also entails changing the mindset of technology as a break/fix, reactive organisation where heroes are rewarded for extinguishing fires; instead, a proactive and preventive methodology with an 'always on' culture will be adopted.

[Figure: Vblock: Infrastructure delivered, deployed & optimized with minimum risk]

By streamlining the workforce to do more with less, in collaboration with the application teams, OpEx savings can quickly be achieved by redeploying money from back-end infrastructure to front-office, revenue-enhancing business value and productivity.

To conclude, technology's purpose is to enable the business. Ensuring success in the three aforementioned steps enables a CIO to quickly enable the business... and it may also allow them to stay in their job that little bit longer.