For those of you who read the release notes for the MDS 9000 NX-OS 5.0(7) update, you may have noticed that the title of this post is one of the caveats that was resolved. I figured I would elaborate a bit more on this issue, as I was directly involved in discovering it and am directly involved in ensuring it gets fully resolved.
Over the last few weeks I was engaged in an issue where all VMs backed by an NFS datastore in an environment experienced several seconds to over two minutes of latency (i.e., an inability to write to the filesystem) at approximately the same time. The write delay was so significant that it resulted in VMs crashing.
The configuration at a high level consisted of ESXi hosts with NFS datastores connected to a pair of VNXs. The VNXs consisted of SATA drives and, per EMC best practices, were configured as RAID 6. In addition, FAST VP was enabled, which requires the use of storage pools instead of RAID groups. It should be noted that storage pools are currently stated as the best-practice configuration, as they ease administration and configuration and allow for more advanced features like FAST VP. The network between the devices converged on a pair of core switches.
Based on the symptoms it seemed logical to rule out ESXi, as the hosts were spread over different switches, were managed by different vCenter Servers, and were running different versions of ESXi. Since both storage arrays were impacted at approximately the same time, it also seemed logical to rule out the storage arrays. This left the network, and specifically the core switches. The problem with that theory was that the core switches had been running stably, with the same software version and configuration, for some time.
So what was going on?