It Isn't Over Yet For Fibre Channel Over Ethernet

Advocates of FCoE will immediately mention the benefits of simplified network management, the elimination of redundant cabling and switches, reduced power and heat requirements, and enhanced data network performance; yet despite all this, the FCoE revolution has still failed to take place. But as Lossless Ethernet begins to really take shape, with its promise of removing the TCP/IP overhead, CEE / DCE / EEDC are now looking like viable transports for storage traffic and storage fabrics.

From the moment 10 Gigabit Ethernet became a mainstream reality, the emergence of FCoE was a mere formality. By encapsulating Fibre Channel frames over Ethernet networks, FCoE offers the possibility of allowing Fibre Channel to use Ethernet networks while preserving the Fibre Channel protocol. With speeds close to native Fibre Channel, FCoE now offers companies a tempting, cost-effective alternative that supports both FC and Ethernet traffic over a single physical infrastructure: storage and network traffic over a single set of cables, switches and adapters, saving the complexity of managing two physical networks and consequently reducing energy consumption and heat generation.
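A minimal sketch of the encapsulation idea, assuming a deliberately simplified frame layout (the real FCoE header defined in FC-BB-5 also carries version bits, reserved fields and SOF/EOF delimiters, all omitted here). The Ethertype 0x8906 is the actual value assigned to FCoE:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned Ethertype identifying FCoE traffic

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame.

    Simplified: the genuine FCoE encapsulation inserts an FCoE header
    (version, reserved bits, SOF) before the FC frame and an EOF after it.
    """
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

# Hypothetical MAC addresses and a dummy 36-byte FC frame, for illustration only
frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",
                             b"\x00\x1b\x21\xaa\xbb\xcc",
                             b"\x22" * 36)
assert frame[12:14] == b"\x89\x06"  # the Ethertype marks this as FCoE
```

The key point the sketch shows is that the FC frame travels intact as the Ethernet payload, which is why existing FC constructs survive the trip.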

Furthermore, by replacing the FC0 and FC1 layers of the Fibre Channel stack with Ethernet, FCoE allows seamless integration with existing Fibre Channel networks and management software, since the FCoE protocol specification retains the native Fibre Channel constructs. It therefore comes as no surprise that many vendors are now seriously developing marketing strategies and products incorporating the latest, and supposedly improved, versions of the FCoE standards.

As well as the SAN switch boys Brocade, Cisco and QLogic, vendors such as Emulex, Intel, PMC-Sierra, NetApp and EMC are all looking to develop and market FCoE with new FC and FCoE switches as well as CNAs. Indeed it is the CNAs (Converged Network Adapters) that are the magic behind connecting the host to FCoE, combining the functionality of an FC Host Bus Adapter (HBA) and an Ethernet NIC on the same adapter.

But FCoE does come with certain snags, beyond the obvious one of not being as secure as FC. Firstly, load balancing, and thus optimal resource utilisation, is still an issue: because Ethernet is a Layer 2 protocol, FCoE is unroutable, and multipathing is currently still not an approved option. Ironically, the problem arises from the very advantage FCoE presents, namely abandoning the split between Ethernet for TCP/IP networks and Fibre Channel for storage area networks in favour of one unified network. With Fibre Channel running on Ethernet alongside traditional Internet Protocol (IP) traffic, becoming just another network protocol, FCoE operates directly above Ethernet in the network protocol stack, in contrast to iSCSI, which runs on top of TCP and IP. As a result, FCoE will fail to function across routed IP networks, as it cannot be routed at the IP layer.
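The routing limitation comes down to where each protocol sits in the stack, which a toy comparison makes plain (the stack lists are illustrative, not exhaustive):

```python
# Protocol stacks, bottom to top (illustrative layering only)
STACKS = {
    "iSCSI": ["Ethernet", "IP", "TCP", "iSCSI"],
    "FCoE":  ["Ethernet", "FCoE"],
}

def ip_routable(protocol: str) -> bool:
    """Traffic can cross an IP router only if IP appears in its stack."""
    return "IP" in STACKS[protocol]

assert ip_routable("iSCSI") is True   # iSCSI rides TCP/IP, so it routes
assert ip_routable("FCoE") is False   # FCoE sits straight on Ethernet: L2 only
```

Which is exactly why an FCoE fabric is confined to a single Layer 2 domain while iSCSI can span routed networks.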

Another concern is that once the marketing hyperbole of 10-Gbit is brushed away, the truth that remains is that any storage traffic initiated at 10-Gbit will still get dropped onto an 8Gb native FC SAN, or 4Gb in the case of most Cisco and QLogic switches. This becomes an even stronger point when put in the context of the Fibre Channel Industry Association (FCIA) showcasing roadmaps for FC which designate that FC will advance from 4 to 8 to 16 to 32 gigabit.
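Back of the envelope, end-to-end throughput is capped by the slowest hop in the path, so a 10-Gbit initiator gains nothing once its traffic lands on a slower FC SAN. A trivial sketch:

```python
def bottleneck_gbit(path_gbit):
    """End-to-end storage throughput is capped by the slowest link in the path."""
    return min(path_gbit)

# 10GbE host -> FCoE switch -> 8Gb native FC SAN
assert bottleneck_gbit([10, 8]) == 8

# Same host, but landing on a 4Gb FC switch (per the article, common
# on many Cisco and QLogic platforms at the time)
assert bottleneck_gbit([10, 4, 8]) == 4
```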

Despite this, in an economic climate in which consolidation and cost effectiveness have become keywords, FCoE may be the option that, once developed, tested and proven in the mainstream, most customers will be looking to scale out to. With the ability to reduce not only the NICs needed to connect disparate storage and IP networks but also the number of cables and switches, along with the power and cooling costs, FCoE's benefits could well prove hard for most companies to ignore.

How To Pass The VSphere4 Exam

Having installed VSphere4 on my laptop, crammed the white papers consistently for two months and attended the 'What's New' course, I thankfully passed the VSphere4 exam at the first attempt and hence upgraded my VCP3 status to VCP4. With the relief of passing the exam, I felt it might be useful to pass on some tips and advice to fellow virtual addicts who may be thinking of attaining the certification.

Firstly, and most obviously, you have to maintain your hands-on experience, not just with VSphere4 but also ESX 3.5. I was surprised at how little has actually changed between ESX 3.5 and VSphere4, and it is easy to get caught up in the notion that this exam will only focus on the new features of VSphere4, leaving you to neglect the core skills and concepts you use in your ESX 3.5 environment. While you definitely need to know about the new features such as Data Recovery, the Distributed vSwitch, vApps, thin provisioning etc., the exam will still pull out some tricky questions on core topics such as HA, VMotion and vStorage.

So my first tip in preparing for this certification is to give yourself at least two to three months of hands-on time with VSphere4, even if you are well versed in ESX 3.5. There are many ways to stick VSphere4 on your laptop (just google ESX on a USB stick and you will get some great results), and the Distributed vSwitch is available on a 60-day trial basis, as are other features.

Secondly, I think it's essential to go on one of the courses, although I must admit that the two-day 'What's New' course did pack a lot in and could easily be extended to a three-day session. The course material and labs give you the chance to link-mode datacenters, migrate virtual switches, use DPM on hosts etc., something a bit tricky to achieve if you have limited facilities at your work site. I always find you learn more by making mistakes and troubleshooting, something you can afford to do during the course labs.

Thirdly, as well as the course material and configuration manuals, you must download all the white papers and go through them. You will inevitably face questions that refer to information specifically mentioned in the white papers and nowhere else, but the good news is that all of these are easily available for download from the VMware site. In addition, a great book to guide you through, and one which will remain a sound reference, is Scott Lowe's Mastering VMware vSphere 4.

Lastly and what I found to be the most useful of all resources are the blogs and sites of Simon Long and Scott Vessey. There you will find direct links to just about every document you need as well as great tips on how to get the best out of your VSphere4 platform.

In conclusion, I preferred the VSphere4 exam to the previous VCP3 exam for several reasons: very few questions, if any, on silly memory games such as configuration maximums, and more questions focused on knowledge of what you actually use and access on a daily basis in your virtual environments. My only gripe is that it's still a multiple-choice exam, and being a techy freak I long for the day when VMware will go the route of Red Hat and offer a fully lab-intensive exam - but then again, that is probably why they also offer the VCDX!

1.6 Million Reasons To Look Closely At The F5100 Array

It's not often that I get excited over new hardware, well at least not on a daily basis, but the new Sun Storage F5100 Flash Array was something I felt compelled to write about.

So what exactly excites me about what is in essence just a JBOD of SSDs? Could it possibly be the delivery of over 1.6 million read and 1.2 million write IOPS? Could it also be that an approximate 100K price tag for 2TB of SSDs works out as a much cheaper alternative to flash drives that would ordinarily have resided at the back end of an Enterprise array? The answer is "yes", and then some. Not only is it a cheaper option, but being based on Single Level Cell (SLC) non-volatile solid-state technology with 64 SAS lanes (16 x 4-wide ports), 4 domains and SAS zoning, the F5100 provides flexible, easy deployment as well as scalability.
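Taking the figures above at face value (an assumption: real-world pricing and usable capacity will vary), the back-of-envelope economics look like this:

```python
# Figures quoted above for the F5100; treat them as approximate
price_usd = 100_000      # approximate price tag
capacity_gb = 2_000      # roughly 2TB of SLC flash
read_iops = 1_600_000    # peak read IOPS claimed for the array

usd_per_gb = price_usd / capacity_gb                      # cost per gigabyte
usd_per_thousand_read_iops = price_usd / (read_iops / 1_000)

assert usd_per_gb == 50.0                   # $50/GB of SLC flash
assert usd_per_thousand_read_iops == 62.5   # ~$62.50 per 1,000 read IOPS
```

It is the second number, cost per IOPS, where a flash JBOD like this undercuts spinning disk by orders of magnitude, since a 15K FC drive delivers only a few hundred IOPS.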

In order to really take advantage of the high performance of this array, the obvious step is to attach it directly to a read-intensive database such as a data warehouse. The first step would therefore be to work out which data within your read-intensive database would benefit most from the Flash array's performance. To help with this, Sun conveniently provides a free Flash Analyzer, which delivers exactly those results by identifying which LUNs are facing the most read IO activity.
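The LUN names and statistics below are entirely hypothetical, but the selection logic that Flash Analyzer automates boils down to ranking LUNs by read activity and migrating the hottest ones first:

```python
# Hypothetical per-LUN read statistics; in practice these numbers would
# come from Sun's Flash Analyzer (or iostat-style monitoring)
lun_read_iops = {"lun0": 12_500, "lun1": 840, "lun2": 31_200, "lun3": 4_700}

# Candidates for migration onto the F5100, hottest readers first
candidates = sorted(lun_read_iops, key=lun_read_iops.get, reverse=True)

assert candidates[:2] == ["lun2", "lun0"]  # these two gain the most from flash
```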

So once the appropriate data has been pinpointed, ZFS can then be utilised to automate the data management and protection. Using the data-integrity features in Solaris ZFS, which verifies block-level checksums and repairs bad blocks from redundant copies, corrupt blocks can easily be identified and fixed. Another option is to take the major overhead of the ZIL in a read-intensive database's zpool, isolate it onto the Flash array's disks, and watch your performance shoot up.
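A simplified sketch of that self-healing idea follows; this is not ZFS's actual implementation (which stores fletcher or SHA-256 checksums in block pointers and repairs from mirrors or RAID-Z), just the principle in miniature:

```python
import hashlib

def checksum(block: bytes) -> bytes:
    """Content checksum; ZFS offers fletcher and SHA-256, we use SHA-256 here."""
    return hashlib.sha256(block).digest()

def read_block(block: bytes, stored_sum: bytes, mirror: bytes) -> bytes:
    """ZFS-style self-healing read, heavily simplified.

    If the primary copy no longer matches its checksum, fall back to
    the mirror copy rather than returning silently corrupted data.
    """
    if checksum(block) == stored_sum:
        return block
    if checksum(mirror) == stored_sum:
        return mirror  # corruption detected and repaired from redundancy
    raise IOError("both copies corrupt")

data = b"database page 42"
ckpt = checksum(data)  # recorded at write time
assert read_block(b"bit-rotted junk!", ckpt, data) == data
```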

Another nice addition is that the F5100 comes with the user-friendly Common Array Manager GUI (far simpler and quicker to grasp than, say, HDS' Storage Navigator or EMC's Navisphere). Managing the disks as LUNs, checking their health status and viewing alarms is therefore no different from the normal environment Storage Engineers face on a daily basis.

This is where I see flash drives really taking off and being adopted in the mainstream, i.e. directly attached SSDs which are a fraction of the price and offer the same if not better performance than their back-end Flash SSD counterparts. A nice product to coincide with the Oracle-Sun partnership, and an interesting challenge to the storage vendors who have already sold their soul to the idea that Flash must exist on the back end...