Voting for the Top Virtualisation Blog of the Year - 2014

Incredibly it's that time of year again when voting commences for the Top Virtualization blog of the year and I've just been pinged a note that The SANMAN blog has been nominated again for voting! Last year's nomination was also a nice surprise as the blog ended up being a New Entry in the charts at 172 - sure it's not a One Direction hit single that went straight to number one but 172 is still a chart number Kajagoogoo would have been proud of (-;


Whether you decide to vote for The SANMAN or not it's still worth having a look at the other nominees and casting your vote for them as there are some great resources across the tech spectrum, including some faves of mine such as the Wikibon blog, TechHead and StorageIO.

So while I don't hold any hopes of breaking the Top 100, I did want to take the opportunity to thank all of the readers that visit this site and find it worthwhile. It's been another really busy year at VCE and trying to find time to write meaningful, insightful and useful articles can be a struggle. Despite this there's no greater motivation than knowing that people from across the world take time out to read my posts.

Thanks for your support and happy voting!

http://www.surveygizmo.com/s3/1553027/Top-VMware-virtualization-blogs-2014



Interview with CloudTech - Why virtualisation isn't enough in cloud computing

I was recently interviewed for an article with CloudTech around the topic of whether virtualisation in itself was enough for a successful cloud computing deployment. Below is an excerpt of the article. For the full article which also includes viewpoints from other analysts please follow the link:
While it is generally recognised that virtualisation is an important step in the move to cloud computing, as it enables efficient use of the underlying hardware and allows for true scalability, for virtualisation to be truly valuable it needs to understand the workloads that run on it and offer clear visibility of both the virtual and physical worlds.

On its own, virtualisation does not lend itself to creating sufficient visibility into the multiple applications and services running at any one time. For this reason a primitive automation system could cause a number of errors, such as spinning up another virtual machine to offset the load on enterprise applications that are presumed to be overloaded.
Well that’s the argument that was presented by Karthikeyan Subramaniam in his Infoworld article last year, and his viewpoint is supported by experts at converged cloud vendor VCE.
“I agree absolutely because server virtualisation has created an unprecedented shift and transformation in the way datacentres are provisioned and managed”, affirms Archie Hendryx – VCE’s Principal vArchitect. He adds that, "server virtualisation has brought with it a new layer of abstraction and consequently a new challenge to monitor and optimise applications."
Hendryx has also experienced first hand how customers address this challenge "as a converged architecture enables customers to quickly embark on a virtualisation journey that mitigates risks and ensures that they increase their P to V ratio compared to standard deployments.”
In his view there's a need to develop new ways of monitoring that provide end users with more visibility concerning the complexities of their applications, their interdependencies and how they correlate with the virtualised infrastructure. “Our customers are now looking at how they can bring an end-to-end monitoring solution for their virtualised infrastructure and applications to their environments”, he says. In his experience this is because customers want their applications to have the same benefits of orchestration, automation, resource distribution and reclamation that they obtained with their hypervisor.
Virtual and physical correlations
Hendryx adds: “By having a hypervisor you would have several operating system (OS) instances and applications. So for visibility you would need to correlate what is occurring on the virtual machine and the underlying physical server, with what is happening with the numerous applications.” He therefore believes that the challenge is to try to understand the behaviour of an underlying hypervisor that has several applications running simultaneously on it. For example, if a memory issue were to arise relating to an operating system of a virtual machine, it would be possible to find that the application either has no memory left, or it might be constrained, yet the hypervisor might still present metrics that there is sufficient memory available.
Hendryx says these situations are quite common: “This is because the memory metrics – from a hypervisor perspective – are not reflective of the application as the hypervisor has no visibility into how its virtual machines are using their allocated memory.” The problem is that the hypervisor has no knowledge of whether the memory it has allocated to a virtual machine is being used for cache, paging or pooled memory. All it understands is that it has made a provision of memory, and this is why errors can often occur.
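The mismatch Hendryx describes can be sketched in a few lines of Python. This is purely illustrative, with invented numbers and no real hypervisor API: the point is simply that the hypervisor's "free" figure and the guest's real headroom are answers to two different questions.

```python
# Illustrative sketch (hypothetical data, not a real hypervisor API):
# the hypervisor sees only what it has allocated, while the guest OS
# knows how that allocation is actually being used.

def hypervisor_view(allocated_mb, consumed_mb):
    """What the hypervisor can report: allocation vs. host-side consumption."""
    return {"allocated_mb": allocated_mb,
            "free_on_host_mb": allocated_mb - consumed_mb}

def guest_view(allocated_mb, cache_mb, paged_mb, app_used_mb):
    """What the guest OS knows: how the allocation is split up inside the VM."""
    truly_free = allocated_mb - cache_mb - paged_mb - app_used_mb
    return {"app_used_mb": app_used_mb, "cache_mb": cache_mb,
            "paged_mb": paged_mb, "truly_free_mb": truly_free}

hv = hypervisor_view(allocated_mb=4096, consumed_mb=2048)
guest = guest_view(allocated_mb=4096, cache_mb=1500, paged_mb=400, app_used_mb=2000)

# The hypervisor thinks 2 GB is still free; inside the guest the
# application actually has under 200 MB of headroom.
print(hv["free_on_host_mb"])   # 2048
print(guest["truly_free_mb"])  # 196
```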
Complexities
This lack of inherent visibility and correlation between the hypervisor, the operating system and the applications that run them could cause another virtual machine to spin up. “This disparity occurs because setting up a complex group of applications is far more complicated than setting up a virtual machine”, says Hendryx. There is no point in cloning a virtual machine with an encapsulated virtual machine either; this approach just won’t work, and that’s because it will fail to address what he describes as “the complexity of multi-tiered applications and their dynamically changing workloads.”
It’s therefore a must to have some application monitoring in place that correlates with the metrics that are being constantly monitored by the hypervisor and the application interdependencies.
“The other error that commonly occurs is caused when the process associated with provisioning is flawed and not addressed”, he comments. When this occurs the automation of that process will remain unsound to the extent that further issues may arise. He adds that automation from a virtual machine level will fail to allocate its resources adequately to the key applications and this will have a negative impact on response times and throughput – leading to poor performance.
Possible solutions
According to Hendryx, VCE has ensured customers have visibility within a virtualised and converged cloud environment by deploying VMware’s vCenter Operations Manager to monitor the Vblock’s resource utilisation. He adds that “VMware’s Hyperic and Infrastructure Navigator has provided them with the visibility of virtual machine to application mapping as well as application performance monitoring, to give them the necessary correlation between applications, operating system, virtual machine and server…” It also offers them the visibility that has been so lacking.
Archie Hendryx then concluded with best practices for virtualisation within a converged infrastructure:
1. If it’s successful and repeatable, then it’s worth standardising and automating because automation will enable you to make successful processes repeatable.
2. Orchestrate it because even when a converged infrastructure is deployed there will still be changes that require rolling out, such as operating system updates, capacity changes, security events, load-balancing or application completions. These will all need to be placed in a certain order, and you can automate the orchestration process.
3. Simplify the front end by recognising that virtualisation has transformed your environment into a resource pool that end users should be able to request and provision for themselves, and consequently be charged for. This may involve eliminating manual processes in favour of automated workflows, and simplification will enable a business to recognise the benefits of virtualisation.
4. Manage and monitor: You can’t manage and monitor what you can’t see. For this reason VCE customers have an API that provides visibility and context to all of the individual components within a Vblock. They benefit from integration with VMware’s vCenter and vCenter Operations Manager and VCE’s API called Vision IOS. From these, VCE’s customers gain visibility and the ability to immediately discover, identify and validate all of the components and firmware levels within the converged infrastructure, as well as monitor its end-to-end health. This helps to eliminate bottlenecks by allowing over-provisioned resources to be reclaimed.
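The "discover, identify and validate" step in best practice 4 can be reduced to a toy sketch. This is not the Vision API; the component names and firmware versions are invented. The idea is simply diffing discovered firmware against a known-good compatibility matrix before any automation acts on the system.

```python
# Hypothetical sketch of "validate before you automate": compare
# discovered component firmware against a known-good matrix.
# All component names and version strings are invented for illustration.

KNOWN_GOOD = {"ucs_blade": "2.1(3a)", "vnx_array": "05.32.000", "nexus_switch": "6.0(2)"}

def validate(discovered):
    """Return the components whose firmware drifts from the matrix."""
    return [name for name, version in discovered.items()
            if KNOWN_GOOD.get(name) != version]

discovered = {"ucs_blade": "2.1(3a)", "vnx_array": "05.31.000", "nexus_switch": "6.0(2)"}
print(validate(discovered))  # ['vnx_array']
```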

Software Defined shouldn’t be about Infrastructure

The term “software defined” has taken many forms in recent months, from Software Defined Datacenter (SDDC) and Software Defined Infrastructure (SDI) to component vendors adopting the tagline to advance their own agendas with Software Defined Networking (SDN) and Software Defined Storage (SDS). Yet ironically the majority of the vendors adopting the tagline are also dealing in infrastructure product lines that a “software defined” approach is aiming to make irrelevant.

The emergence of the cloud made clear to the industry that the procurement, design and deployment of the infrastructure components of network, storage and compute were a hindrance to application delivery. The inability of infrastructure components to be quickly and successfully coordinated together, or to respond automatically to application needs, has led many to question why traditional approaches to infrastructure are still being considered. In an attempt to safeguard themselves from this realisation, it’s no surprise that the infrastructure vendors have adopted the software defined terminology and marketed themselves as such, even though at the end of the day they are still selling what is quintessentially hardware.


From the networking and storage perspective, software defined is about abstracting legacy hardware from multiple vendors via virtualization so that management and configuration are done completely in software. Instead of managing individual components vendor by vendor, these now common pools of network and storage can, via APIs, be quickly and easily managed with automation and orchestration tools. Ironically, though, this has already existed for some time, with examples being HDS’ storage virtualization arrays and Nicira’s pre-VMware-takeover initiatives with OpenFlow, Open vSwitch and OpenStack. Even the vAppliance concept that is taking on a “software defined” spin has been around for several years. Having the data and control planes of what was a legacy hardware appliance now run through a virtual version is nothing new when looked at in the context of VMware vShield Edge firewalls or NetApp’s ONTAP Edge VSA. Looking behind the marketing smokescreen of ease of management and simplification, in reality most if not all of these technologies were invested in and created to do one thing only: take market share away from competing vendors. When all your legacy storage arrays or network switches are abstracted and consequently managed and configured by software provided by only one of those vendors, the control and future procurement decisions lie firmly in that vendor’s court. So why should we take the software defined approach seriously at all, and what should our focus be if not the infrastructure products that “software defined” marketing seems inherently linked to?


Software defined is incredibly important and vital to IT and the businesses it supports because it should bring the focus back onto what matters most, namely the applications and not the underlying infrastructure. A true software defined approach that considers the application as its focal point ultimately leads to infrastructure being treated as code, where the underlying hardware becomes irrelevant. Configuring all the infrastructure interdependencies as code, with an understanding that they need to support the application and the various environmental transitions it will go through, leads to a completely different mindset in the subsequent configuration and management of infrastructure. In this case a converged infrastructure approach, whereby infrastructure is pre-integrated, pre-tested and pre-validated from inception as a product-ready platform for applications, is most suited. Understanding what software defined really offers, beyond the hyperbole of infrastructure vendors, leads to practices where concepts such as Continuous Delivery, Continuous Deployment and Continuous Integration can take place, and consequently to a radical transformation in the way IT delivers value to the business.
The focus of a Software Defined strategy should be the applications, not the underlying infrastructure
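Treating infrastructure as code, as described above, can be reduced to a toy illustration: a declarative description of the desired state, with a plan computed as the diff against actual state. All resource names and attributes here are invented; real tooling is vastly richer, but the mindset shift is the same.

```python
# Toy "infrastructure as code" sketch: the environment is a declarative
# description, and convergence is a diff between desired and actual state.
# Resource names and attributes are invented for illustration.

desired = {
    "web_vm": {"cpus": 4, "memory_gb": 16, "network": "dmz"},
    "db_vm":  {"cpus": 8, "memory_gb": 64, "network": "backend"},
}

actual = {
    "web_vm": {"cpus": 4, "memory_gb": 8, "network": "dmz"},
}

def plan(desired, actual):
    """Compute what must change to converge actual state onto the description."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("reconfigure", name))
    return actions

print(plan(desired, actual))  # [('reconfigure', 'web_vm'), ('create', 'db_vm')]
```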

So if and when you do face a sales pitch, a new product line or an infrastructure-savvy consultant extolling how great and wonderful “software defined” is, there are several things to note and question. Beyond the workings of the infrastructure components, how much application awareness and intelligence is there? How will this enable a DevOps approach and a quicker, more reliable and repeatable code deployment that meets the changing demands of your business? How will it mitigate risk and ensure that applications not only have their infrastructure resources automated and met, but also keep their code consistent from development to QA to the eventual live environment?


It is these questions and challenges that a “software defined” approach addresses and solves, enabling significant benefits to a business. Once application code changes become reliable, automated and consequently frequent, based on an infrastructure that meets the changing demands of its applications, a business can quickly gain a competitive edge over its rivals. Being able to respond quickly to market trends, such as ensuring your website can cater for a sudden upsurge of transactions from its mobile version, or countering a sudden commodity price change, is key to gaining a competitive advantage, and consequently requires an application delivery process that responds to those needs. A “software defined” approach can help businesses reach that goal by automating the time-consuming, error-prone processes linked with IT, as long as they don’t lose focus that it’s about the applications and not just the infrastructure that supports them.


The Converged Infrastructure Team: The Silo to end all Silos?

Back in the noughties I made the conscious decision as a Storage guy to immerse myself into what was being termed server virtualization and understanding the product offerings of a relatively new company named VMware. To this day I remember the incredulous looks and responses I received from my Storage counterparts, who were convinced that the VMware fad was nothing more than a system admin tool. Indeed the organizations that were early adopters of VMware ended up assigning the virtualization responsibility to the system admins team; at no point was there ever a thought that a dedicated virtualization team could or should be established. Fast forward to 2014 and virtualization teams are the norm and Storage administrators are constantly being hard pressed to have a better understanding of VMware as they provision and manage virtualized environments. Such a culture change was unthinkable 10 years ago, yet here we are again with the emergence of another silo, the Converged Infrastructure team.
 
If you've got an IT problem, if no one else can help and if you can find them, maybe you need to hire a converged infrastructure team...

This culture change became incredibly apparent at last year's VMworld. Having attended every VMworld for the last four years, there was no escaping the significant change in the vendors, the products and the folks now in attendance. Gone were the numerous stalls of third-party VMware management and monitoring tools and their vendors, almost resigned to the fact that vCenter Operations Manager had now monopolized that market. Gone too were all the VM labs that focused on server virtualization features, and in their place numerous labs that highlighted how VMware was now a cloud product. Even the third-party vendors on display were now focused on orchestration, automation, self-service portals, service providers and everything else Cloud orientated. As for the traditional storage vendors, each was now presenting its storage arrays as part of either a reference architecture or a converged infrastructure. If you belonged to either the Storage or virtualization silo there was little if anything on offer compared to previous years; if anything it was a wake-up call that times were changing, and changing fast.

As the converged infrastructure market continues to experience unprecedented growth, there is now an inevitable evolution in how IT infrastructure is procured, manufactured, managed and monitored. Akin to how desktop PCs evolved into slim and powerful laptops and portable tablets, IT infrastructure is experiencing a similar revolution. It wasn’t that long ago that when you wanted to buy a PC you’d have to choose and order all the various components as single items, i.e. the CPU, the RAM, the motherboard, the CD-ROM drive, the monitor etc., and then wait several weeks as the PC store built and integrated all of those components together. In terms of support, if anything went wrong with that PC you’d have to go back to the PC store, who’d then spend several weeks diagnosing the problem as they went back to all the component manufacturers for replacements or fixes. Such an approach is now almost non-existent; instead an end user can simply purchase a preconfigured, customized laptop online as a single product that’s manufactured and supported by a single company. Converged infrastructure offers the same simplicity to what was once a complex approach to setting up infrastructure whose ultimate aim is to support applications. Consequently the simplicity that converged infrastructure has brought to organizations in terms of time to deliver, speed of deployment and risk mitigation has led those same organisations to question their traditional silos of management and monitoring.


Indeed such a change was exemplified to me in recent weeks by two clients that I have been working with. The first is a large organisation with a multitude of silos and consequent processes and stumbling blocks whenever it requires new infrastructure for a project. With their current project they decided to bypass their internal silos and instead create a new team solely responsible for a Vblock, i.e. their converged infrastructure team. In 45 days this team were up and running with their system delivering a service to the business, something their traditional silos were taking nine months to achieve. The other client was a completely different case, where a small team of seven was responsible for IT operations. Separated into silos of storage, network, compute, databases etc., this client made the step towards a mid-range 300 series Vblock converged infrastructure just over a year ago and immediately saw the benefits of accelerated deployment, performance optimization and risk mitigation. Recognising the potential their internal IT now had to open up new avenues for business, the same client went on to procure seven more Vblocks only a few months later. As for the team of seven, there was no requirement for them to grow or change; in fact the only change required was their name. They’re now the converged infrastructure team.


DevOps: Mission Impossible? Live at VMworld 2013

Virtualisation needs to move beyond the business value of simply consolidating workloads and saving on power and cooling. Virtualisation now offers the ability for businesses to transform their operational models and more importantly their application lifecycle release management practices.

At the recent VMworld held in Barcelona I was lucky enough to present at the vBrownbag sessions to explain how DevOps is now a fundamental methodology for attaining a successful software lifecycle development plan. 

In this ten minute presentation, we address the current challenges of existing methodologies and traditional set-ups of segregated operations and development teams, and how DevOps can enable true agility in the application lifecycle process. We also suggest a Private Cloud model that could be utilised to transform current practices to a DevOps methodology and consequently generate significant value for businesses that are desperate to release new features and offerings in minimum time and at minimum cost.

Video is below:


An autonomous self service portal for Development, that's managed and automated by Operations





DevOps: Making Application Lifecycle Management Painless


For most organizations application releases are synonymous with extremely tense and pressurized situations where risk mitigation and tight deadlines are key. This is made worse by the complication of internal silos and the consequent lack of cohesion that exists not just within the microcosm of IT infrastructure teams but also amongst the broader departments of development, QA and operations. Now, with the increasing demand on IT from application and business unit stakeholders for new releases to be deployed quickly and successfully, the interdependence of software development and IT operations is being seen as integral to the successful delivery of IT services. Consequently businesses are recognizing that this can't be achieved unless the traditional methodologies and silos are readdressed or changed. Cue the emergence of a new methodology that's simply called DevOps.
The advancement and agility of web and mobile applications has been one of the key factors leading many to question the validity or even practicality of the traditional waterfall methodology of software development. The waterfall's rigorous sequence of conception, initiation, analysis, design, construction, testing, production/implementation and maintenance can seem almost archaic in an age when the industry demands "agility". No one can dispute the waterfall methodology's relevance (certainly not companies such as Sony, which suffered the embarrassment of the rootkit bug), but with web and mobile app releases needing to be rapidly and regularly deployed, can companies really continue down such a long and drawn-out integration process?
Much of the problem stems from legacy IT culture, as opposed to the methodology itself, where each individual is responsible solely for their own role, within their specific field, within their particular department. Consequently within the same company the development team is often seen as the antithesis of operations, with their constant drive for change in needing to meet user needs for frequent delivery of new features. In stark contrast operations are focused on predictability, availability and stability, factors that are nearly always put at risk whenever development request a "change" to be introduced.
This disengagement is further exacerbated by development teams delivering code with little or no involvement from their operations teams. Additionally, to support their rapid deployment requirements, development teams will use tools that emphasize flexibility and consequently bear little or no resemblance to the rigid performance and availability-based toolsets of operations. In fact it would be rare to find either operations or development teams being aware of their counterparts' toolsets, let alone taking any interest in potentially sharing or integrating them.
Alternatively you have the operations team that will do everything they can to stall any changes and new features that are being proposed to the production environment in an attempt to mitigate any unwanted risk. Eventually when development teams are allowed to get their software release picked up by operations it's usually after operations have gone through a laborious process of script creation and config file editing to accommodate the deployment on a production runtime environment that is significantly different to the one used by development.
Indeed it's commonplace to see inconsistencies between the runtime environment the development teams have used to run their code (typically low-resourced desktops) and the high-resource server OS based environments utilized by operations. With development having tested and successfully run everything on a Windows 7 desktop, it's no surprise that once operations deploy it on a Unix-based server with different Java versions, software load balancers and completely different properties files, failure and chaos ensue during a "Go Live". What follows is the internal blame game, where operations will point to an application that isn't secure, needs restarting and isn't easy to deploy, while development will claim that it worked perfectly fine on their workstations and hence operations should be capable of seamlessly scaling it and making it work on production server systems.
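The dev-to-prod drift described above is exactly the kind of thing a simple automated check can surface before a "Go Live". A hedged sketch, with invented environment manifests standing in for whatever inventory a real toolchain would collect:

```python
# Sketch of catching dev/prod environment drift before deployment by
# diffing two environment manifests. All values are invented for
# illustration; a real check would pull these from inventory tooling.

dev  = {"os": "Windows 7", "java": "1.6", "load_balancer": None}
prod = {"os": "Solaris 10", "java": "1.7", "load_balancer": "software"}

# Every setting where the two environments disagree, with both values.
drift = {key: (dev[key], prod[key]) for key in dev if dev[key] != prod[key]}

print(sorted(drift))  # ['java', 'load_balancer', 'os']
```

Here every single setting differs, which is the chaotic scenario the paragraph describes; a passing check would produce an empty diff.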
This is what the panacea termed DevOps was established to address. DevOps from its outset works to push for collaboration and communication between the development, operations and quality assurance teams. Based on the core concept of unifying processes into a comprehensive "development to operations" lifecycle, the aim is to inculcate an end-to-end sense of ownership and responsibility across all departments. While the QA, development and operations teams have unique methods and aims in the process, they are all part of a single goal and overarching methodology. This entails giving the development team more environmental control while concurrently ensuring operations have a better understanding of the application and its infrastructure requirements. It involves operations even taking part in (and consequently having co-ownership of) the development of applications that they can in turn monitor throughout the development to deployment lifecycle.
VCE Vblock: a standardised Application Lifecycle Platform for DevOps

The result is an elimination of the blame culture, especially in the case of application issues, as both software development and operational maintenance are a co-owned process. Instead of operations blaming development for flaky code and development blaming operations for an unstable infrastructure, the trivial and time-consuming internal finger pointing is replaced with traceable root cause analysis between all departments as a single team. Consequently application deployment becomes more reliable, predictable and scalable to the business' demands.
Additionally DevOps calls for a unified and automated tooling process. The evolution of web applications and Big Data has led to infrastructure needing to scale and grow considerably quicker. This means the traditional model of firefighting and reactive patching and scripting is no longer a viable option. The need for automation and unified tools, whether for deployment, workflows, monitoring, configuration etc., is a must, not just to meet time constraints but also to safeguard against configuration discrepancies and errors. Hence the growing awareness of DevOps has aided the emergence of open source software that deals with this very challenge, ranging from configuration management to monitoring tools such as Rundeck, Vagrant, Puppet and Chef. While these tools are familiar to development teams, the aim is to also make them the concern and interest of operations.
Automation software such as Puppet Labs manages infrastructure throughout its lifecycle
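Configuration tools like those named above share an idempotent, declarative model: describe the end state, and converge only when reality differs, so re-running the same manifest is always safe. A minimal Python sketch of that idea (not real Puppet or Chef syntax; the setting names are invented):

```python
# Not real Puppet/Chef DSL: a Python sketch of the idempotent
# "declare the end state, converge only when needed" model those tools share.

def ensure_setting(config, key, value):
    """Converge one setting; report whether a change was actually needed."""
    if config.get(key) == value:
        return config, False          # already compliant: no-op
    config = {**config, key: value}   # apply the change
    return config, True

cfg = {"ntp_server": "10.0.0.1"}
cfg, changed = ensure_setting(cfg, "ntp_server", "10.0.0.2")
print(changed)  # True  (first run converges the drifted value)
cfg, changed = ensure_setting(cfg, "ntp_server", "10.0.0.2")
print(changed)  # False (second run is a no-op: idempotency)
```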

The DevOps methodology is a straightforward and obvious initiative to cater for the changing face of application development and deployment. Despite this, its greatest challenge lies with people and their willingness to change. Both development and operations teams need to move from their short-term, silo-focused objectives to the broader long-term goals of the business. That necessitates a concerted and unified effort from both teams to have applications deployed in minimum time with minimum risk. I've often worked with operations staff who have little or no idea of how the applications they're supporting relate to the products and services their companies deliver, how those applications generate revenue and how they provide value to the end user. Additionally I've worked with development teams that were outsourced to another country, where communication was non-existent, and not just because of the language barrier. As the demands on IT from the business rapidly increase and change, so too must the silo mindset. DevOps is aiming at initiating an inevitable change; those that resist may find that they themselves get changed. As for those that embrace it, they may just find application releases a lot less painful.

Vblock 200: Infrastructure Delivered Fresh to your Doorstep




When you haven’t got time to cook a meal there really is nothing like the convenience of picking up the phone and dialing for a home delivery kebab. You go online, choose the meal deal you want, and 30 minutes later there’s a ring at your doorbell. So when one of our customers asked if VCE could provide them with a segregated infrastructure to their Vblock 300 for an immediate service roll out, the suggested meal deal was obvious: the Vblock 200. Literally delivered to their doorstep in 30 days, the emergency project was successfully rolled out and installed on a standardised infrastructure that was pre-configured and pre-validated with minimal risk.


Delivered and packaged as a single product, the Vblock 200 is made up of Cisco UCS rack servers and switches and an EMC VNX unified storage system. As well as vSphere 5.1 enterprise plus and software such as PowerPath VE, the Vblock 200 comes with the unique ‘single throat to choke’ VCE Support offering.

As well as extremely tight time constraints, space was also a particular challenge for this customer’s project as this infrastructure was not to be housed at the customer’s usual enterprise datacenter. With only a single door for access to the infrastructure’s proposed location, the usual scenario would have been the rather painful process of having all of the components delivered separately and then built and integrated on site much like an IKEA flat pack wardrobe.


Delivered ready assembled - No allen keys required

Not so with the Vblock 200: this single rack system, while housing critical applications, was also able to fit through the front door, leaving only the power and connections to the core network to be dealt with.


So next time you haven’t got time to build your own infrastructure for your remote office deployment, why not try ordering a Vblock 200? Not only can it fit through your front door, it’ll be delivered to your door, standardised, pre-integrated, pre-validated, pre-tested, fresh and ready within 30 days.





Infrastructure Pour Le Cloud


Original article can be found in the ICT Journal: www.ictjournal.ch

VCE : We Save U Money - ViB

Well we were warned that "Le Vblock" was only part one and was to be continued...and sure enough here is the sequel and concluding part to the CIO story. Of course the ViB were courteous enough to send me the lyrics:


Yeah VCE, Vblock...Saving U Money
Go Tweet, Go Blog!

We got 1000 strong, Vblocks are Kong -- King of the infrastructure, perfect little architecture, sexy black box, where an application rocks, &#$%, ^#$%@ can ya -- virtualise &^#$? ignore their #$%@, you can save a bundle, from their licenses, enticing this, exciting this, would you like some tips, on how we optimise, well we standardise, on a single product, that's called the Vblock, We got 100s, 200s, 320s, 720's, any workload baby, from SMB to the enterprise, we got 'em mesmerized, with our saving size, lower OPEX costs, firefighting stops -- you know what... yeah - we're really hot.

We Save U Money

Verse 2, Now here's what we do, 4 our customers, and why they loving us, it's a single support, a single throat 2 choke, resolutions done before u even know, we take away the patches, take away the pain, take away the risk and let you breathe again, with a single matrix, a pretested fix, done inside our labs, so u upgrade fast,

Customers say " we're saving money" -- Cuz there's no more P1s, and there's no more reasons, to be petrified , at firmware time, because we've verified all the things inside, now concentrate, on applications, and gravitate 2 biz solutions, infrastructure's boring, now u can ignore it, while u might own it, VCE's honed it

We Save U Money

Press: "Is a Reference architecture like VCE?" -- er no
Analyst: "Is a @#$#^& company like VCE?" -- er no

There's nothing #$%@, when u market manure, if u wanna know for sure, then open up your doors, cuz the VCE crew will tell u what we do, we 'll deliver in days, what others only claim, production ready systems, pumping like pistons, VMs, Apps, Clouds u list em, everything u need with accelerated speed, with a service portfolio that'll make u dream, enabling your business, u can't resist this, accelerating projects, de-risked quickness, plug in datacenter, IT innovator, SLAs on, incidents gone, now your IT's hot, with all the savings u got....

So how does a 60% OPEX saving sound to you?
Yeah that's what I thought....

We Save U Money




Le Vblock : ViB - The Movie

So this link to the below video was waiting in my inbox this morning with the following lyrics and a note that this was Part 1.

Welcome to the ViB:

It's time for Change....

Are you ready to get your Vblock.....it's 30 days baby and it's on - it's VCE (x2)

Have you heard about the C.I. craze?
Transformation that will amaze
OPEX savings that's second to none
CIOs we can show you how it's done

Infrastructure that's rolled out in days
As a product that simplifies the ways
IT delivers to your business
Private Cloud? We can get you to this!


Ahhhhh VBlock - it's VCE- VBlock


All that pressure got you down
Trouble tickets spinning u round and round
Late night calls when patches don't go right
Finger pointing with no end in sight

With VCE it's a single number to call
A single product that's preinstalled,
Pre-integrated, pre-tested and what's more
A single matrix to upgrade it all


Ahhhhh VBlock- it's VCE- VBlock

Now feel the emergence of convergence

Now shake your body right down to your datacenter

All that pressure that made you cry
Is now replaced with an ROI
That can't be beat with an "always on" design
Technology that makes your business thrive

Accelerated & standardized
Consolidated & optimised
New Applications rolled out on time
A De-risked DC that's virtualised




Ahhhhh VBlock - it's VCE- VBlock

Shake it for me, shake it for me a reference architecture never did it 4 me

Gotta plug in 2 your core of your network baby & power it up now someone help me

Now shake your body right down to your datacenter



New Subliminal Message from the ViB

A cryptic email arrived containing the number 425-3355-89109 and a link to the short video below. Who are the ViB? With a voice like that I'm not arguing....




Will the ViB be at EMC World?




With EMC World 2013 now upon us, and with myself unfortunately unable to go, I found it strange to receive yet another mysterious email from the ViB, this time containing the picture posted above. Is it a coincidence that this arrived just as EMC World begins? I'm not sure, but I do know that if you are at EMC World you should make sure you get the chance to visit VCE's Booth #425.

VCE have the following planned for your technical, intellectual and business-minded delight:
- VCE Booth #425 features product demos and over 40 in-booth theater presentations, with a prominent exhibit floor location.
- The VCE booth will feature live Vblock™ Systems, representing three product lines:
    • Vblock System 300
    • Vblock System 100
    • Vblock System 200
- Live Vblock Systems will also be present in investor company spaces:
    • Cisco Booth # 401
      • Vblock System 300
    • EMC Booth
      • Vblock System 300 in USD Booth
    • VMware Booth #201
      • Vblock System 100
- VCE maintains a presence within EMC programs running at this event:
    • SE Conference
      • Attended by 3,000+ EMC and Partner SEs
    • Global Partner Summit
      • 3,000 partners, with a high percentage of channel partners
      • D. Martin, VCE VP of Global Channels, will present during the keynote
    • CIO Connect
      • This executive event hosts 75 Mid-Market CIOs
      • Frank Hauck, VCE CEO, will serve as a panel member
- A formal meeting program on-site at the Venetian will facilitate customer and partner meetings with VCE leadership. Account managers will submit meeting requests.
Program availability:
    • Monday, May 6 from 12 - 5 pm
    • Tuesday, May 7 from 8 am – 5 pm
    • Wednesday, May 8 from 8 am – 5 pm
- VCE Booth #425 includes an In-Booth Theater. VCE will host 15-minute theater presentations twice per hour, including:
    • VCE corporate overview
    • Customer presentations
    • Partner presentations
      • In-theater VCE presentations:
      • Workload Mobility Automation, a 3-way joint demo with Cisco, EMC and VCE
      • Desktop Virtualization (VDI), showcasing VMware Horizon, the next generation of VMware View
      • VCE Vision™ Intelligent Operations, which will feature VMware vCenter Operations as well
      • Data Protection and EMC RecoverPoint, presented by John Comeau, VMware SRM
      • SAP on Vblock Systems
      • IT Transformation
      • TCO Tool – Joint TCO Tool with EMC
      • Training
      • Services, including professional services and customer support
- VCE Breakout Session presentations:
      • Going Live with SAP on 100% Virtual Infrastructure – A Case Study of EMC’s SAP Deployment on VCE Vblock Systems, presented by Mike LaFauci
      • EMC Backup: Avamar and Data Domain – Backup and Recovery in VCE Converged Infrastructure
      • EMC Unified Infrastructure Manager – Ease the Transition to a Cloud Infrastructure on VCE Vblock Systems
      • EMC VNX Family – Leveraging Vblock Systems for Converged Infrastructure

So that leaves just one question.....who are the ViB and will they be at EMC World? 
Keep a look out for neuralisers.

CIOs ask: "Who are the ViB?"



I've recently been inundated with a number of sporadic emails from a certain account named ViB. While I initially thought it was some kind of hoax I started to realise that this was a serious account as I was being sent exclusive information and industry tidbits specifically around how VCE's Vblock was enabling significant OPEX savings for customers as well as empowering CIOs to rapidly enable business transformation.

Amongst these interesting insights I was also sent a number of photos of cars with rather interesting number plates - see below:

ViB - vehicle 1
ViB - vehicle 2
ViB - vehicle 3

While none of this was really making any sense, it was this morning when I received a link to a new 40 second clip (see below) from the ViB where I was assured things would become clearer.

If anything, what does seem clearer is that ViB is most likely an acronym for "vArchitect in Black". Apart from this, the 40-second teaser trailer seems to throw up more questions than answers!

- Are the ViB a new specialist team within VCE?
- Is this teaser trailer a precursor to a new documentary or feature film from VCE?
- Is this just a marketing stunt or hoax?
- Is this the precursor to another new product launch from VCE?
- Does this in fact have anything to do with VCE?
- Why are CIOs looking to VCE's Vblock as a solution to their problems and more specifically the ViB?

I'll leave you to decide by watching the clip yourself below and of course update you if and when I receive further information and clarification.






IT Infrastructure Debate in the Sunday Telegraph Newspaper

I was recently asked to take part in a debate for the Sunday Telegraph newspaper on the subject of "How will IT infrastructure evolve?". In case you missed it, it's now available online at:

...and for your convenience below. 
N.B. I have no idea who that picture is of, he doesn't even have my white hair or glasses (-;

The debate: How will IT infrastructure evolve?


I expect to see continued exponential growth in the use and emulation of internet data centres; the likes of Amazon, Rackspace, Memset or Google that have tightly packed racks, operating at very high efficiencies, running cloud computing services. The key difference in approach will be the widespread adoption of commodity hardware to deliver enterprise quality services by moving intelligence into software. Examples include distributed applications running on virtual machines, many cheap nodes crunching big data and more object storage and non-relational databases.

Storage is especially exciting. I believe the days of “big iron” vendors, RAID5/6 and tape are numbered. Our enormously resilient, distributed storage system using commodity tin costs us less than £20 a terabyte per month. By layering media types, such as SATA disks, SSDs and DRAM, and mobilising tools including Automated Intelligence’s Datapoint, you can have your cake – cheap storage with low latency for critical data – and eat it.
Kate Craig-Wood
Managing director, Memset








A CIO’s infrastructure decisions will focus more on leading the business rather than simply aligning with it. Technologies such as unified communications, virtualisation and cloud computing will be further adopted to gain a competitive advantage, while security and risk concerns have to be mitigated. This requires an agile and flexible IT model – something traditional infrastructure has shackled, leaving IT departments struggling with daily firefighting exercises. To ensure success, IT administrators will need a new breed of infrastructure that enables them to focus on delivering, optimising and managing applications without needing to worry about the infrastructure that supports them. Consequently the benefits offered by standardised, pre-integrated and pre-validated converged infrastructures will gain even more traction in the industry. This will present a dramatic paradigm shift not only in IT infrastructure but also in the way IT is approached, managed, deployed and viewed by the application owners and the business it supports.
Archie Hendryx
vArchitect EMEA, VCE

There can be no questioning that big data is a disruptive market force. The massive influx of data that’s impacting upon organisations of all shapes and sizes means that traditional IT infrastructure is becoming increasingly obsolete. Big data is the intensive analysis of large, complex, disparate or unstructured data sets to get actionable results in real-time. For many, to do this with an on-premise infrastructure will almost certainly lead to failure as few boast the necessary servers or computer clusters. Simply put, to execute big data analytics you need the suitable infrastructure to underpin it. Organisations need a massive amount of computing power to take all their data, wherever it’s stored, and analyse it for valuable insights. For most people at least, this leaves cloud computing as the most attractive option.  Being able to gain access to potentially limitless scalability through Infrastructure-as-a-Service via the cloud makes big data a possibility for one and all. As such, the evolution of IT infrastructure seems likely to be moving towards outsourcing.
Dominic Pollard
Editor, Nimbus Ninety