
Month: March 2013

VCAP-DCA5 KB Articles

I have been studying for my VCAP-DCA5 and I must say the amount of material referenced for the exam can seem overwhelming. While going through the official documentation and whitepapers is fairly straightforward, I find locating the relevant KB articles to be quite the task. As such, I have put together a list of KB articles I think are worth going over. Instead of breaking down the KB articles by objective, I have decided to list them by component.

Cisco MDS CPU utilization of 100 percent seen in certain situations

For those of you who have read the release notes for the MDS 9000 NX-OS 5.0(7) update, you may have noticed that the title of this post appears as one of the resolved caveats. I figured I would elaborate a bit more on this issue, as I was directly involved in discovering it and I am directly involved in ensuring it gets fully resolved.

EMC FLARE <32 RAID 6 Implementation

Over the last few weeks I was engaged in an issue where every VM backed by an NFS datastore in an environment experienced anywhere from several seconds to over two minutes of latency (i.e., an inability to write to the filesystem) at approximately the same time. The write delay was significant enough that it resulted in VMs crashing.
The configuration at a high level consisted of ESXi hosts with NFS datastores connected to a pair of VNXs. The VNXs were populated with SATA drives and, per EMC best practices, were configured as RAID 6. In addition, FAST VP was enabled, which requires the use of storage pools instead of RAID groups. It should be noted that storage pools are currently the stated best-practice configuration, as they ease administration and configuration as well as allow for more advanced options like FAST VP. The network between these devices converged on a pair of core switches.
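When scoping an issue like this, it helps to know exactly which NFS datastores are in play and which array export backs each one. Below is a minimal pyVmomi sketch of one way to pull that from vCenter; the hostname and credentials are placeholders, not details from this environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    # A container view walks the entire inventory for datastore objects.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # NFS datastores carry NasDatastoreInfo, which records the backing export.
        if isinstance(ds.info, vim.host.NasDatastoreInfo):
            nas = ds.info.nas
            print(f"{ds.name}: {nas.remoteHost}:{nas.remotePath} "
                  f"(mounted by {len(ds.host)} host(s))")
    view.DestroyView()
finally:
    Disconnect(si)
```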
Based on the symptoms, it seemed logical to rule out ESXi, as the hosts were spread over different switches, managed by different vCenter Servers, and running different versions of ESXi. Since both storage arrays were impacted at approximately the same time, it also seemed logical to rule out the storage arrays. This left the network, and specifically the core switches. The problem was that the core switches had been running stably, with the same software version and configuration, for some time.
So what was going on?

Plug-ins are disabled in the vSphere Client

I recently connected to a vSphere 5.1 environment from an old Windows VM running in Fusion. I was attempting to check the hardware status and noticed that the Hardware Status tab was not available. I checked the Plug-ins menu and saw the following error for the hardware status plug-in:

The client could not send a complete request to the server

What was going on and how can you fix it?
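The hardware status plug-in shows up in vCenter's extension list, so one quick sanity check is to see what vCenter has registered and at what version. Here is a minimal pyVmomi sketch (hostname and credentials are placeholders) that prints every registered extension:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details -- substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    # The ExtensionManager tracks every plug-in registered with vCenter,
    # including the hardware status plug-in behind the Hardware Status tab.
    for ext in si.RetrieveContent().extensionManager.extensionList:
        label = ext.description.label if ext.description else ext.key
        print(f"{ext.key}  {ext.version}  {label}")
finally:
    Disconnect(si)
```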

BUG ALERT: UCS FNIC Driver 1.5.0.8 + ESXi 5.x = PSODs

If you are running UCS hardware with ESXi, then you should be using the custom ENIC/FNIC drivers specified in the Hardware and Software Interoperability Matrix. If you are running ESXi 5.x and leveraging the FNIC driver, then I would highly suggest you check which version of UCS and which version of the FNIC driver you are running.
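The loaded fnic driver version can be read on an ESXi host with vmkload_mod -s fnic. Below is a minimal sketch that runs that check over SSH against a list of hosts; the host names are placeholders, and it assumes SSH is enabled on each host.

```python
import subprocess

# Placeholder host list -- replace with your own ESXi hosts (SSH must be enabled).
HOSTS = ["esxi01.example.com", "esxi02.example.com"]

# 'vmkload_mod -s fnic' prints details of the loaded fnic module, including its version.
CHECK_CMD = "vmkload_mod -s fnic | grep -i version"

for host in HOSTS:
    result = subprocess.run(["ssh", f"root@{host}", CHECK_CMD],
                            capture_output=True, text=True)
    output = result.stdout.strip() or result.stderr.strip()
    print(f"{host}: {output}")
```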

Likewise / PowerBroker: DNS_ERROR_BAD_PACKET

If you use PowerBroker, formerly known as Likewise, then you may have experienced the following error message when running the domainjoin-cli command: DNS_ERROR_BAD_PACKET. If you look online, many people have come across this issue and have posted their version of a resolution. I have experienced this issue in the past as well; however, my online searching did not readily turn up the solution to my problem.
So how can you resolve this error message?