CIOs ask: "Who are the ViB?"



I've recently been inundated with sporadic emails from an account named ViB. While I initially thought it was some kind of hoax, I started to realise that this was a serious account, as I was being sent exclusive information and industry tidbits, specifically around how VCE's Vblock was enabling significant OPEX savings for customers and empowering CIOs to rapidly drive business transformation.

Amongst these insights I was also sent a number of photos of cars with rather intriguing number plates - see below:

ViB - vehicle 1
ViB - vehicle 2
ViB - vehicle 3

None of this made much sense until this morning, when I received a link to a new 40-second clip (see below) from the ViB, with the assurance that things would become clearer.

If anything, what does seem clearer is that ViB is most likely an acronym for "vArchitect in Black". Beyond that, the 40-second teaser trailer throws up more questions than answers!

- Are the ViB a new specialist team within VCE?
- Is this teaser trailer a precursor to a new documentary or feature film from VCE?
- Is this just a marketing stunt or hoax?
- Is this the precursor to another new product launch from VCE?
- Does this in fact have anything to do with VCE?
- Why are CIOs looking to VCE's Vblock, and more specifically the ViB, as a solution to their problems?

I'll leave you to decide by watching the clip below, and of course I'll update you if and when I receive further information and clarification.






IT Infrastructure Debate in the Sunday Telegraph Newspaper

I was recently asked to take part in a debate for the Sunday Telegraph newspaper on the subject of "How will IT infrastructure evolve?". In case you missed it, it's now available online at:

...and for your convenience below. 
N.B. I have no idea who that picture is of; he doesn't even have my white hair or glasses (-;

The debate: How will IT infrastructure evolve?


I expect to see continued exponential growth in the use and emulation of internet data centres; the likes of Amazon, Rackspace, Memset or Google that have tightly packed racks, operating at very high efficiencies, running cloud computing services. The key difference in approach will be the widespread adoption of commodity hardware to deliver enterprise-quality services by moving intelligence into software. Examples include distributed applications running on virtual machines, many cheap nodes crunching big data and more object storage and non-relational databases.

Storage is especially exciting. I believe the days of “big iron” vendors, RAID5/6 and tape are numbered. Our enormously resilient, distributed storage system using commodity tin costs us less than £20 a terabyte per month. By layering media types, such as SATA disks, SSDs and DRAM, and mobilising tools including Automated Intelligence’s Datapoint, you can have your cake – cheap storage with low latency for critical data – and eat it.
Kate Craig-Wood
Managing director, Memset








A CIO’s infrastructure decisions will focus more on leading the business rather than simply aligning with it. Technologies such as unified communications, virtualisation and cloud computing will be further adopted to gain a competitive advantage, while security and risk concerns will have to be mitigated. This requires an agile and flexible IT model, yet traditional infrastructure has shackled that agility and left IT departments struggling with daily firefighting exercises. To ensure success, IT admins will need a new breed of infrastructure that enables them to focus on delivering, optimising and managing the applications without needing to worry about the infrastructure that supports them. Consequently, the benefits offered by standardised, pre-integrated and pre-validated converged infrastructures will gain even more traction in the industry. This will present a dramatic paradigm shift not only in IT infrastructure but also in the way IT is approached, managed, deployed and viewed by the application owners and the business it supports.
Archie Hendryx
vArchitect EMEA, VCE

There can be no questioning that big data is a disruptive market force. The massive influx of data impacting organisations of all shapes and sizes means that traditional IT infrastructure is becoming increasingly obsolete. Big data is the intensive analysis of large, complex, disparate or unstructured data sets to get actionable results in real time. For many, doing this with an on-premise infrastructure will almost certainly lead to failure, as few boast the necessary servers or compute clusters. Simply put, to execute big data analytics you need suitable infrastructure to underpin it. Organisations need a massive amount of computing power to take all their data, wherever it’s stored, and analyse it for valuable insights. For most, this leaves cloud computing as the most attractive option. Being able to gain access to potentially limitless scalability through Infrastructure-as-a-Service via the cloud makes big data a possibility for one and all. As such, the evolution of IT infrastructure seems likely to move towards outsourcing.
Dominic Pollard
Editor, Nimbus Ninety 

Why it's Pivotal: EMC & VMware refuse to PaaS up the opportunity


If you were to ask EMC or VMware whom they consider their major threat and competition, you’d be forgiven for thinking it was NetApp, HP or offerings such as Hyper-V. With many declaring us to now be in the third era of corporate computing, the mainframe and client/server eras being the first two, the current cloud era has undoubtedly been spearheaded by the likes of Google, Amazon and Facebook. It is here that EMC and VMware face their biggest challenge: remaining relevant and cutting edge in a market that demands automation, simplicity and speed of deployment. Despite major “Big Data” and “Cloud” marketing campaigns that have seen airports littered with posters and adverts, as well as numerous acquisitions that have extended already huge product portfolios, both EMC and VMware have struggled to release themselves from the shackles of being deemed just a storage and a hypervisor company. So in light of this, it’s no surprise to see both companies spin off a new and independent venture that will address this very challenge, namely the Pivotal Initiative.

With a promise of $400 million in investments and a 69/31% split in ownership between EMC and VMware respectively, the Pivotal Initiative will be headed by none other than VMware’s ex-CEO, Paul Maritz. At the time, his stepping down from that position raised a few eyebrows and questions as to whether he was being demoted, prepped for early retirement or simply pushed aside to make way for VMware’s current CEO, Pat Gelsinger. In hindsight, one could easily see this as a move that Maritz himself may have initiated, from his own recognition that VMware as a company was failing to transition into, let alone be recognised as, a PaaS organisation.

Maritz, like most in the industry, would have recognised that with ever-increasing data sets and ever-increasing scale, the need for automation and for rapid application development and deployment is quickly outgrowing the capabilities of the traditional, manually managed infrastructures previously offered by EMC and VMware. Moreover, both VMware and EMC know it’s all about applications, and specifically big data applications. For VMware and EMC to succeed in having the de facto platform of the IT industry, it’s key that they win the war to host these new and integral applications. To address this, EMC and VMware went about acquiring just about every relevant start-up or product that could possibly address this challenge, from GemStone and Greenplum to SpringSource. Despite this huge purchasing spree, and VMware’s push to develop vFabric and create the PaaS initiative Cloud Foundry, both EMC and VMware have struggled to gain market recognition as true Cloud and PaaS players.

One of the key aspects challenging EMC and VMware’s recognition as Cloud and PaaS players has, ironically, been the very thing they pursued to solve it: the incredible rate of acquisitions and the consequent growth of their product portfolios. Sales and presales teams that for years had been accustomed to successfully pitching and selling storage arrays and hypervisor licences were now expected to understand new and alien concepts such as big data analytics, PaaS, application development and SaaS, and to address a customer base they were not used to. By having Maritz head up a brand-new, independent company that can take the appropriate products from those portfolios, the opportunity now exists to establish focused sales, technical and post-sales teams that understand applications and big data, and that have the right level of existing relationships within their potential client base.

So what is the Pivotal Initiative actually bringing to the table in terms of new products? Well, not much actually. What it does bring is a much-needed cohesion between what have until now been a multitude of disparate acquisitions and products that have failed to gain the market share their technical and business benefits certainly deserve.

Firstly there’s the platform, which will be based on EMC’s Greenplum appliance integrated with Pivotal HD, the data querying system that works with Hadoop. The Greenplum appliance is based on the open-source PostgreSQL, a full ANSI-standard relational database system, and its performance benchmarks against Hadoop’s parallel system are already impressive. With the soon-to-be-released Pivotal HD product from the Pivotal Labs group, the aim is to conduct even more queries against even larger data sets.
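The appeal of a massively parallel system like Greenplum or Hadoop is that a single query is planned once and then executed as a scatter-gather: each segment node scans only its own slice of the data, and the partial results are combined at the end. As a purely illustrative sketch (this is not Greenplum's or Hadoop's actual code, and the clickstream table is invented), the pattern looks roughly like this:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_count(rows, predicate):
    """Each 'segment' scans only its own slice of the table."""
    return sum(1 for row in rows if predicate(row))

def parallel_count(table, predicate, segments=4):
    """Scatter the scan across segments, then gather and combine the partials."""
    chunk = (len(table) + segments - 1) // segments
    slices = [table[i:i + chunk] for i in range(0, len(table), chunk)]
    with ThreadPoolExecutor(max_workers=segments) as pool:
        partials = pool.map(lambda s: partial_count(s, predicate), slices)
    return sum(partials)  # the 'gather' step

# Hypothetical clickstream table of (user_id, page) rows.
table = [(u, "checkout" if u % 3 == 0 else "home") for u in range(1000)]
hits = parallel_count(table, lambda row: row[1] == "checkout")
print(hits)  # same answer a single sequential scan would give
```

The point of the sketch is that the query logic (the predicate) is written once; the parallelism is purely in how the scan is distributed, which is what lets such systems scale queries across larger data sets by adding nodes.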

Pivotal Initiative: the products may look familiar, but this time there's cohesion and focus
From a VMware perspective, there’s the inclusion of GemFire to serve as the caching layer, with its capability of quickly ingesting events via its in-memory data management system. Then there’s Cetas, which provides rapid analytics atop the Hadoop platform and is designed for the elasticity of virtual resources, with specific focus not only on vSphere but also on Amazon Web Services. Most interesting of all is the addition of the Cloud Foundry PaaS, which was initially built to run on VMware’s proprietary system. This time it comes with the promise of being an abstraction layer with application automation across clouds, enabling Pivotal to be hosted on the likes of Amazon Web Services' EC2. Couple this with SpringSource’s Java application development framework, which enables integration with legacy data sources and applications, and Pivotal Labs’ facility for rapid coding, and the objective becomes a focused aim at the jugular of online and enterprise analytics.

The Pivotal Initiative will aim to deliver the market a data analysis platform capable of capturing large volumes of data, quickly addressing and querying it and then producing near real time answers that can be stored in a large scale-out storage system. It would be naïve to think this is an initiative aimed just at existing VMware customers. This is an attempt to not only enter but also become relevant in the software led infrastructure arena that competes with the likes of Amazon.

In essence, the Pivotal Initiative is a brave yet necessary move from both EMC and VMware to embrace the challenge of change, as the legacy of traditional infrastructure faces the daunting prospect of new software paradigms. Whether the Pivotal Initiative can be successful and achieve its $1bn rate in its projected five years depends on a number of factors. One thing is certain: the first challenge to remaining relevant in the IT industry is to acknowledge and adapt to change. The masters behind the Pivotal Initiative have already achieved that.

Velocity is Key to Cloud Maturity


When you think Cloud, whether Private or Public, one of the key advantages that comes to mind is speed of deployment. All businesses crave the ability to simply go to a service portal, define their infrastructure requirements and immediately have a platform ready for their new application. Coupled with that, you instantly have service level agreements that generally centre on uptime and availability. So, for example, instead of being a law firm that spends most of its budget on an in-house IT department and datacenter, the Cloud provides a compelling opportunity for businesses to procure infrastructure as a service and consequently focus on delivering their key applications. But while the industry’s understanding of Cloud Computing and its benefits has matured, so too has the realisation that what’s currently being offered may still not be good enough for mission-critical applications. The reality is that there is still a need for a more focused and refined understanding of what the service level agreements should be, and ultimately a more concerted approach towards the applications. So while watchwords such as speed, agility and flexibility remain synonymous with Cloud Computing, its success and maturity ultimately depend upon a new focal point, namely velocity.

Velocity differs from speed in that it is a measure not just of how fast an object travels but also of the direction in which it moves. For example, in a Public Cloud, whether that be Amazon, Azure or Google, no one can dispute the speed. With just a few clicks you have a ready-made server that can immediately be used for testing and development purposes. But while it may be quick to deploy, how optimised is it for your particular environment, business or application requirements? With only generic order forms, customisation to a particular workload or business requirement is never achieved, as optimisation is sacrificed for the sake of speed. Service levels based on uptime and availability are not an adequate measure or guarantee of the successful deployment of an application. It would be considered ludicrous, for example, to purchase a laptop from a provider that merely guarantees it will remain powered on even though it performs atrociously.

In the Private Cloud or traditional IT example, while the speed of deployment is not as quick as that of a public cloud, there are other scenarios where speed is being witnessed yet failing to produce the results required by a maturing Cloud market. Multiple infrastructure silos can constantly be seen hurrying around, busily firefighting and maintaining the “keeping the lights on” culture, all at rapid speed. Yet while the focus should be on the applications that need to be delivered, the quagmire of the underlying infrastructure persistently takes precedence, with IT admins having to constantly deal with interoperability issues, firmware upgrades, patches and the multiple management panes of numerous components. Moreover, service offerings such as Gold, Silver, Bronze or Platinum are more often than not centred on infrastructure metrics, such as the number of vCPUs, storage RAID type or memory, instead of application response times that are predictable and scalable to the end user’s stipulated demands.
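To make the contrast concrete, here is a minimal sketch of an application-centric service level check. The tier names and millisecond targets are entirely hypothetical; the point is that the SLA is expressed as a response-time percentile rather than as vCPU counts, RAID types or uptime, so a service that is "up" 100% of the time can still miss its tier:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times in ms."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical per-tier targets: 95th-percentile response time in ms,
# instead of infrastructure metrics like vCPUs or RAID type.
SLA_TARGETS_MS = {"Platinum": 100, "Gold": 250, "Silver": 500}

def sla_met(tier, response_times_ms):
    """True if the service's p95 response time meets the tier's target."""
    return percentile(response_times_ms, 95) <= SLA_TARGETS_MS[tier]

# 100% uptime, but 10% of requests are slow:
samples = [80] * 90 + [400] * 10
print(sla_met("Gold", samples))    # False: p95 is 400 ms, above the 250 ms target
print(sla_met("Silver", samples))  # True: within the 500 ms target
```

Measured this way, a mostly fast service with a slow tail fails a Gold-style tier even though an uptime-based SLA would report it as perfectly healthy, which is exactly the gap the paragraph above describes.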

Were the Cloud to embrace the concept of velocity, the consequence would be a focused and rigorous approach whose direction is aimed solely at the successful deployment of applications that in turn enable the business to quickly generate revenue. Attaining that quick and focused approach would require a mentality of velocity to be adopted comprehensively by each silo of the infrastructure team, while concurrently working in cohesion with the application team to deliver value to the business. It would also entail a focused methodology for application optimisation and, consequently, a service level that measures and targets success based on application performance rather than just uptime and availability.

Velocity leads to a comprehensive focus on the successful deployment and optimisation of the application
While some Cloud and service providers may claim that they already work in unison with a focus on applications, behind the scenes this is rarely the case, as they too are caught in the challenge of traditional build-it-yourself IT. Indeed, it’s well known that some Cloud hosting providers are duping their end users with pseudo service portals, where only the impression of an automated procedure for deploying their infrastructure is provided. Much closer to the truth are service portals that merely populate a PDF of the requirements, which is then printed out and sent to an offshore admin who in turn provisions the VM as quickly as possible. Additionally, it’s more than likely that your Private Cloud or service provider runs a multi-tenant infrastructure with mixed workloads, sitting behind the scenes as logical pools ready to be carved up for your future requirements. While this works for the majority of workloads and SMB applications, businesses looking to place more critical and demanding applications into their Private Cloud to attain the benefits of chargeback etc. need an assurance of an application response time that is almost impossible to guarantee on a mixed-workload infrastructure. As the Cloud market matures, along with its expectations regarding application delivery and performance, such procedures and practices will only be suitable for certain markets and workloads.

So for velocity to take precedence within the Private Cloud, Public Cloud or even Infrastructure-as-a-Service model, and to fill this Cloud maturity void, infrastructure needs to be delivered with applications as its focal point. That means a pre-integrated, pre-validated, pre-installed and application-certified appliance that is standardised as a product and optimised to meet scalable demands and performance requirements. This is why the industry will soon start to see the emergence of specialised systems specifically designed and built from inception for the performance optimisation of specific application workloads. By having applications pre-installed, certified and configured, with both the application and infrastructure vendors working in cohesion, it becomes far more feasible for Private Cloud or service providers to predict, meet and propose application-performance-based service levels. Such an approach would also be ideal for end users who just need a critical application rolled out immediately, in house, with minimum fuss and risk.

While a number of such appliances or specialised systems will emerge in the market for applications such as SAP HANA or Cisco Unified Communications, the key is to ensure that they’re standardised as well as optimised. This entails a converged infrastructure that rolls out as a single product and consequently has a single upgrade matrix for all of its component patches and firmware upgrades, one that also corresponds with the application. Additionally, it encompasses a single support model that covers not only the infrastructure but also the application. This in turn not only eliminates vendor finger-pointing and prolonged troubleshooting but also acts as an assurance that responsibility for the application’s performance is paramount, regardless of the potential cause of the problem.
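The "single upgrade matrix" idea can be sketched as a simple lookup: one product release number pins a validated set of component versions, so planning an upgrade becomes a diff against one matrix rather than a per-component hunt across vendor sites. All release and version numbers below are invented purely for illustration:

```python
# Hypothetical release-compatibility matrix: one product release pins a
# validated, pretested set of component versions (all values invented).
RELEASE_MATRIX = {
    "4.0": {"compute_fw": "2.1", "storage_fw": "5.3", "hypervisor": "5.1"},
    "4.1": {"compute_fw": "2.2", "storage_fw": "5.4", "hypervisor": "5.1"},
}

def upgrade_plan(current_versions, target_release):
    """Return only the components that must change to reach the target release."""
    target = RELEASE_MATRIX[target_release]
    return {comp: ver for comp, ver in target.items()
            if current_versions.get(comp) != ver}

installed = {"compute_fw": "2.1", "storage_fw": "5.3", "hypervisor": "5.1"}
print(upgrade_plan(installed, "4.1"))
# -> {'compute_fw': '2.2', 'storage_fw': '5.4'}
```

Because every combination in the matrix has been validated as a whole, the operator never has to reason about whether a given firmware version is compatible with a given hypervisor patch; that question is answered once, by the product release.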


Driving the Velocity of Change within the industry: VCE's new SAP HANA Vblock specialized system 
The demand for key applications to be monitored, optimised and rolled out with speed and velocity will be faced not only by service providers and Private Cloud deployments but also by internal IT departments struggling with their day-to-day firefighting exercises. To ensure success, IT admins will need a new breed of infrastructure, or specialised systems, that enables them to focus on delivering, optimising and managing the application without needing to worry about the infrastructure that supports it. This is where the new Vblock specialised systems offered by VCE come into play. Unlike other companies with huge portfolios of products, VCE have a single focal point, namely Vblocks. By adopting the same approach of velocity that was instilled in the production of the standardised Vblock models, end users can now reap the same rewards with new specialised systems that are application specific. Herein lies the key to Cloud maturity and, ultimately, the successful deployment of mission-critical applications.

For more information on VCE's new specialized Vblock Systems please visit:

The SANMAN nominated for Top Virtualization Blog of 2013

While pop stars have the Grammys, actors & actresses have the Oscars and Simon Cowell's latest manufactured projects have the Brits, techie geeks such as myself have the annual pleasure of voting for the Top Virtualization Blogs.

This year as I went on the site to cast my vote for my usual top ten, I was surprised and delighted to see the SANMAN up for nomination. To say I'm chuffed is putting it mildly and I wanted to take the opportunity to thank all of the readers that visit this site.

While I sometimes struggle to try and blog as much as I want in my spare time, I'm extremely appreciative of the many readers that visit and take the time out to read my posts. It is indeed you that keep me motivated to stay up in the late hours of the night writing!

So if you haven't already, here's a chance to vote for your own personal favourites of 2013:
http://www.surveygizmo.com/s3/1165270/Top-vBlog-2013

VCE Set For Major Industry Announcement


It’s been nearly 11 months since I joined VCE as a vArchitect. In that short amount of time I’ve not only seen an incredible amount of development and change within the IT industry but also within the company I’m so excited to still be working for. 

The changes within the industry have been astounding, as awareness of and the market for Converged Infrastructure continue to grow at an unprecedented level. CIOs, IT Directors and CTOs are quickly realising that with a converged infrastructure (CI) they can achieve business objectives at minimum risk, in a quarter of the time and with more than 60% operational savings, as opposed to the traditional build-it-yourself or reference architecture models they’ve been accustomed to.

Analysts such as IDC and Gartner have also validated VCE’s customers’ savings and CI market leadership in their recent analysis. 

With imitation being the sincerest of compliments, my 11 months at VCE have also seen “competing” vendors launch their own quasi-versions of converged infrastructure, from HP’s Cloud Matrix and Huawei’s FusionCube to IBM’s PureSystems and Dell’s Active. While they’ve certainly adopted all of the marketing messaging of VCE’s unique value proposition, in reality they’re still a long way from the standardised, pre-integrated, pre-validated and pre-tested product offering of a Vblock. Indeed, they are still more akin to the reference architecture offering of NetApp’s FlexPod. Until they adopt a standardised, product-based approach they will still struggle to produce a correct bill of materials in the time it takes VCE to deliver and install a production-ready Vblock.

So while the ever-changing industry and market play catch-up, it’s even more exciting to see VCE prepare to launch their next phase of offerings, which will undoubtedly propel them even further ahead of the competition and entrench their position as the market innovator and leader.


While I’m unable to reveal anything prior to the deadline set for Thursday 21st February, 4pm GMT (blogs will follow), I can confirm that VCE are bringing additions to their portfolio that will have the industry in a frenzy, particularly in the management, orchestration and compliance space.

With industry luminaries and leaders such as VCE CEO, Praveen Akkiraju, Cisco CEO John Chambers, EMC CEO Joe Tucci and VMware CEO Pat Gelsinger speaking at the launch, this is one announcement not to be missed.

Registration is available at www.vce.com

VCE's Vblock: Simplifying The Private Cloud Strategy


Today we will be talking about VCE’s cloud infrastructure product, the Vblock. Gartner’s recent study, which predicts that by next year 60% of enterprises will have embraced some form of cloud adoption, has energised the competitive cloud vendor market. But at the same time, does the cloud industry need to be driven by vendor competition or vendor collaboration? Archie Hendryx of VCE Technology Solutions discusses this very matter.

EM360°: Could you tell us about VCE and why cloud has played a big part in your company’s solutions?

Archie: VCE is a unique start-up formed via joint investments from EMC, Cisco, VMware and Intel that has been operating for just over three years. Its focus is solely on building the world's most advanced converged infrastructure, the Vblock. The Vblock is a pretested, prevalidated, preconfigured and, more importantly, pre-integrated infrastructure solution of storage, compute, networking and hypervisor; in other words, it ships as a single SKU and product to the customer.

Personally, I like to liken VCE to a revolutionary that has changed the way we view infrastructure, as it’s manufacturing and selling infrastructure as a product, much in the way you buy a car such as an Audi. When you buy an Audi it may contain components from different vendors, but what the end user is purchasing is a single product. Similarly with the Vblock: while we may use different components from our investors Cisco, EMC, VMware and Intel, the end user is acquiring a product. Because it’s a standardised product, the Vblock models are exactly the same regardless of geographical location, which completely transforms and simplifies the customer experience of infrastructure and consequently mitigates the typical risk associated with it.

As for how the cloud has played a big part in VCE’s success, one of the major criticisms of private clouds is that the end user still has to build, manage and maintain the infrastructure to the extent that they are continuing the ‘keeping the lights on’ approach of IT. Ultimately this lacks the economic benefit that makes cloud computing such an intriguing concept. Hence what we and our customers quickly realized is that a private cloud’s success ultimately depends on the stability, reliability, scalability and performance of its infrastructure. By going the Vblock route our customers immediately attain that stability, reliability, scalability and performance and consequently accelerate their private cloud initiatives. For example with support issues, VCE alone are the owner of the ticket because the Vblock is their product. Once the Vblock has been shipped out problems that might potentially be faced by a customer in Glasgow can easily be tested on a like-for-like standard Vblock in our labs. This rapidly resolves performance issues or trouble tickets.

The other distinctive feature of the Vblock is its accelerated deployment. We ship the customer a ready-assembled, logically configured product and solution in only 30-45 working days, from procurement to production. This has an immediate effect in terms of reduced cost of ownership, especially when the business demands an instant platform for its new projects.

EM360°: Your latest cloud infrastructure solution sees the Vblock’s components integrating with VMware’s new cloud solutions suite. Can you tell me why industry collaboration is seen to be so prominent in today's market?

Archie: What I think has driven this is a change in mindset of customers which has been initiated by the concept of cloud computing. Customers are reassessing the way they procure IT and they want a simplified and accelerated experience that doesn't require having to go to multiple vendors and solutions. I think vendors that are still only focused on storage or servers and have not looked at expanding their offerings via alliances or acquisitions are either going to fold or be swallowed up by the big fishes as they look to add to their portfolios. This is one of the reasons why the latest announcement from VMware and their vCloud suite is so exciting and of course VCE’s support and integration for it.

If VCE and the Vblock are responsible for accelerating your journey to the private cloud you could say that adding this vCloud suite would pretty much give it a major turbo boost.


EM360°: Are copyright factors, or other vendors sussing out each other’s strengths and weaknesses, a problem when you encounter a project like this?

Archie: That's a really interesting question, and certainly I have experienced that in previous roles, especially when I was involved with initiatives such as the Storage Networking Industry Association (SNIA) and its SMI-S compliance programme. We were always promised that SMI-S compliance would allow us the utopia of a single pane of glass for heterogeneous storage arrays, regardless of whether the array was from HDS, HP or EMC. Sadly this was never the case, as none of the vendors opened up fully and you only ended up with around 60% functionality, which ultimately meant that you went back to the native tools and the multiple management panes of glass you had anyway. You could not really blame the vendors, as it would be naive to think that one vendor would allow its competitor to dissect its micro-code. This mindset is not going to change, and that is why you see vendors deciding to procure their own server or storage companies to provide this end-to-end stack.

At VCE we are in a unique position where our investors are not competing with each other; for us, they are ultimately the component providers to our product. We don't necessarily support or include all of our investors’ products or portfolios as components, only those we feel best integrate with our single end-user product. Once we have our specific components defined from our investors, based on our standards, we pre-integrate and manufacture our product as a comprehensive solution. While our competitors, and even our investors, have large portfolios of products and offerings, VCE only do Vblocks and hence focus solely on improving and optimising Vblocks, enabling us to do things which others in the industry have only dreamed of, and which will be announced very soon.


EM360°: Today's enterprise market is obviously rather confused, which is what some analysts are also thinking. I don’t think some companies know what they want for their departments, whether to embrace public, open, private or a bit of both in hybrid form. A lot of vendors are doing their own spin on cloud, particularly the niche players. Is the industry doing enough to simplify the product offering?

Archie: In a nutshell, no. There is still a lot of confusion out there, and smokescreen marketing from various vendors hasn't helped end users make the distinction between the various offerings and decide what is best for them. What we have found most recently with a lot of our enterprise clients is that they initially approach us as part of a storage, server or datacenter refresh. While they may have some cloud initiatives, they really have little or no idea of how to achieve them, certainly in terms of the traditional model of IT procurement and deployment.

Once they understand the concept of the Vblock and how VCE can provide them with a productised, risk-free infrastructure, we immediately see them come to the realisation of how this could be aligned to a Private Cloud model that in turn could develop into a Hybrid Cloud. Once the customer realises how agile and quick the deployment of their infrastructure could be with a Vblock, we nearly always find them talking, and feeling freer to think, higher up the stack, with strategic discussions and plans on how they can deploy a management and orchestration solution and service portal. Ultimately, if you want people to really understand the Cloud and what’s best for them, you’ve got to show them how you take away the risk from their traditional IT and infrastructure challenges.


EM360°: Have we seen innovation thrive in the cloud infrastructure management market, and what kinds of developments have really caught your eye today?

Archie: There are a lot of great products and suites out there. Every day we are seeing improvements in the look and feel of such products as they come closer to providing that public cloud experience in the private cloud. I think the challenge for all of these solutions up to now is that they have had to integrate with all of the components of the infrastructure as separate entities, especially when it comes to designing and deploying orchestration. Without trying to reveal too much of what VCE will be bringing out, I can certainly say that it will completely revolutionise and simplify this, with the product managed, monitored and orchestrated as exactly that: a single Vblock product. When this development comes it will really excite many and completely transform the private cloud infrastructure model going forward.

EM360°: Are there any final thoughts you would like to leave our readers with as to how the cloud infrastructure market will play out in the future, what kind of systems we could be using and how enterprises should look to plan ahead?

Archie: The industry is at an inflection point. The approach to IT is changing, affecting the way customers and vendors approach and procure infrastructure, specifically with regards to the mission-critical applications they ultimately depend on. This is going to lead to more converged infrastructure offerings that will eventually get to the point where VCE are today: a standardised product offering, or as we like to call it, an x86 mainframe. The CTO of one customer recently said to me, “If I do not purchase the power and cooling of my datacenter as individual components, why should I do that with my infrastructure?” That summed it up for me, because there will come a time when people look back at the way open-systems IT was purchased and deployed as being as ludicrous as someone today buying all of the components of a laptop, putting them together, somehow expecting it all to work perfectly and then expecting seamless support from every component vendor.

To take that laptop analogy further, what we will eventually see, with infrastructure viewed and built as a product, is a new way to manage, monitor and update it as a product. For example, when you update your laptop you are automatically notified of the patches, and it’s a single click of a button for the single product. You don’t receive an update for your keyboard, followed by an update for your screen, only to be sent another update for your CD-ROM a week later. Likewise, when it comes to support you don’t log a call with the manufacturer of your laptop’s CD-ROM component; you go directly to the manufacturer of the product. Imagine that same experience with your Cloud infrastructure, where it alerts you to a single seamless update for the whole product, where it has a true single management pane and presents itself to the end user as a single entity. Imagine how that would simplify the configuration, orchestration and management of the Private Cloud. That’s where the future lies, and to be honest it might not be that far away.

Excerpt from interview & live podcast with Enterprise Management 360 Magazine