Adding Virtual Disks and “Operating System not found”

I ran into a frustrating issue the other day when attempting to add multiple new virtual disks to an existing VM. Upon doing so, I received the dreaded “Operating System not found” message when the VM was started. While the underlying issue was easy to discover, fixing it proved tedious due to issues with the vSphere Web Client. This post contains some tips and tricks to make the process easier.

[Screenshot: “Operating System not found” error at VM boot]
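A common cause of this error after adding disks is the BIOS attempting to boot from one of the new, empty disks instead of the original boot disk. As a hedged illustration only (not the exact fix from this post), below is a minimal pyVmomi sketch that pins the BIOS boot order back to the original boot disk after new disks have been added. The vCenter hostname, credentials, and VM name are placeholders, and it assumes the boot disk is the virtual disk at unit number 0.

```python
# Minimal sketch: explicitly set the VM's BIOS boot order to the original
# boot disk so it does not try to boot from a newly added, empty disk.
# Hostname, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-vm")

# Find the original boot disk; this assumes it sits at unit number 0
# (SCSI 0:0 by convention), which may not hold in every environment.
boot_disk = next(d for d in vm.config.hardware.device
                 if isinstance(d, vim.vm.device.VirtualDisk)
                 and d.unitNumber == 0)

# Reconfigure the VM so the BIOS tries the original disk first.
spec = vim.vm.ConfigSpec()
spec.bootOptions = vim.vm.BootOptions(
    bootOrder=[vim.vm.BootOptions.BootableDiskDevice(deviceKey=boot_disk.key)])
task = vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```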

Continue reading

Switching to the PVSCSI Controller

I was recently doing some performance tests and wanted to try out the VMware Paravirtual SCSI (PVSCSI) controller. The VM I was testing was running SUSE with the LSI controller configured and VMware Tools installed, in a vSphere 5.5 environment. Every time I tried to switch to the PVSCSI controller, the VM stopped working. For those of you thinking about using the PVSCSI controller, below are some tips that should assist.
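The usual pitfall is that the guest does not have the pvscsi driver loaded (on Linux, the vmw_pvscsi module needs to be present in the initrd) before the boot disk is moved to the new controller. One common approach is to first add a PVSCSI controller as a second controller so the guest can pick up the driver ahead of the switch. As a sketch only, reusing the hypothetical pyVmomi connection and vm object from the earlier example, adding such a controller looks roughly like this:

```python
# Sketch: add a second, PVSCSI controller to the VM so the guest can load
# the vmw_pvscsi driver before the boot disk is moved over. Assumes an
# existing pyVmomi connection and a "vm" object as in the prior example.
from pyVmomi import vim

pvscsi = vim.vm.device.ParaVirtualSCSIController()
pvscsi.busNumber = 1  # leave bus 0 (the existing LSI controller) intact for now
pvscsi.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

dev_spec = vim.vm.device.VirtualDeviceSpec()
dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
dev_spec.device = pvscsi

spec = vim.vm.ConfigSpec(deviceChange=[dev_spec])
task = vm.ReconfigVM_Task(spec=spec)
```

Once the guest has loaded the driver and it is baked into the initrd, the boot disk can be moved to the PVSCSI controller with the VM powered off.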

Continue reading

Why VMware VSAN Matters

After NSX, VSAN was the most talked-about item at VMworld US 2013, and for good reason. VSAN introduces the opportunity to change the way storage is used in the datacenter. It also represents a paradigm shift for VMware. Prior to VSAN, VMware encouraged the use of shared storage to leverage features like DRS and HA, and the importance of locally attached disks was minimized even further with support for boot from SAN and Auto Deploy. Now, with VSAN and vFlash, local disks are important again, but is the change coming too late?

Continue reading

Cisco MDS CPU utilization of 100 percent seen in certain situations

For those of you who read the release notes for MDS 9000 NX-OS update 5.0(7), you may have noticed that the title of this post is one of the resolved caveats. I figured I would elaborate a bit more on this issue, since I was directly involved in discovering it and am directly involved in ensuring it gets fully resolved.

Continue reading

EMC FLARE <32 RAID 6 Implementation

Over the last few weeks I was engaged in an issue where all VMs backed by an NFS datastore in an environment experienced anywhere from several seconds to over two minutes of latency (i.e., an inability to write to the filesystem) at approximately the same time. The write delay was so significant that it resulted in VMs crashing.

The configuration at a high level consisted of ESXi hosts with NFS datastores connected to a pair of VNXs. The VNXs were populated with SATA drives and, per EMC best practices, configured with RAID 6. In addition, FAST VP was enabled, which requires the use of storage pools instead of RAID groups. It should be noted that storage pools are currently the stated best-practice configuration, as they ease administration and configuration and allow for more advanced options like FAST VP. The network between the devices converged at a pair of core switches.

Based on the symptoms, it seemed logical to rule out ESXi: the hosts were spread over different switches and different vCenter Servers, and were running different versions of ESXi. Since both storage arrays were impacted at approximately the same time, it also seemed logical to rule out the arrays themselves; two independent VNXs are unlikely to hit the same problem simultaneously on their own. That left the network, and specifically the core switches, as the common component. The catch was that the core switches had been running stable, with the same software version and configuration, for some time.
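This kind of elimination depends on having timestamps for the stalls from multiple independent VMs. The snippet below is a hypothetical probe, not tooling from the actual investigation: a small synchronous write loop that logs any write taking longer than a threshold, so stall windows can be compared across guests, hosts, and arrays. The file path, interval, and threshold are all placeholders.

```python
# Hypothetical write-latency probe, run inside a guest whose disk sits on
# the NFS datastore: timestamps synchronous writes so stalls can be
# correlated across VMs. Path, interval, and threshold are placeholders.
import os
import time

PROBE_FILE = "/var/tmp/write-probe.dat"  # placeholder path on the NFS-backed disk
INTERVAL = 1.0                           # seconds between probes
THRESHOLD = 5.0                          # flag writes slower than this

while True:
    start = time.time()
    fd = os.open(PROBE_FILE, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
    try:
        os.write(fd, b"x" * 4096)        # small synchronous write
    finally:
        os.close(fd)
    elapsed = time.time() - start
    if elapsed > THRESHOLD:
        print(f"{time.ctime(start)}: write stalled for {elapsed:.1f}s")
    time.sleep(INTERVAL)
```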

So what was going on?

Continue reading