Month: April 2010

ESX + NFS Datastores

Over the last week, I have been applying the latest patches to one of the VI3 environments I manage. While checking for potential problems, I noticed that a single ESX server had lost access to all of its NFS datastores. All of the other hosts in the cluster, which mounted the same NFS datastores, appeared to be connected properly. I restarted the management services on the node, hoping to fix the issue and continue with the upgrade. Unfortunately, restarting the management services had no effect (remember, while restarting the management services should be one of the first troubleshooting steps and does resolve a lot of VMware issues, it is not the only step). I verified that the host was configured properly and that no configuration changes had taken place recently. I also had the networking team verify that the switch ports were configured properly.
All checks came back normal, so what was going on?
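For reference, these are the sort of service-console checks I run in this situation on an ESX 3.x host; the NFS server address below is a placeholder, not one from this environment:

```shell
# List the NFS datastores the host believes it has mounted
esxcfg-nas -l

# Verify VMkernel connectivity to the NFS server
# (192.0.2.50 is a placeholder address)
vmkping 192.0.2.50

# Confirm the VMkernel routing table is sane
esxcfg-route -l
```

Note that `vmkping` tests the VMkernel interface specifically; a successful ordinary `ping` from the service console does not prove the VMkernel can reach the NFS server.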

Forcing VMware Update Manager to download patches

While many people may already know how to do this, I always seem to forget, so I thought it was worth blogging about. VMware Update Manager is a Windows service and vCenter plug-in used to download and install the latest host and guest OS patches. Installing the server component registers a Windows service, but does not expose any executables that are meant to be run directly. If you navigate to the install directory, C:\Program Files\VMware\Update Manager, you will notice several executable files. It is my understanding that these executables are called by the Update Manager client to perform download and installation operations. Unfortunately, these commands are not documented, and the help output they provide is difficult to follow at best. In addition, many people in the communities say not to run these executables for any reason and to rely instead on PowerShell commands to perform operations.
While I am a big fan of PowerShell and the VMware cmdlets available, I was just looking for a quick, one-time download of the latest patches. Because of this, I decided to turn to the VMware Update Manager client to see if it provides any way to force a download of the latest patches.

Using VMware Update Manager without DNS

I had an interesting challenge this past week. I had a development environment with a vCenter Server instance hosted on a private network. This private network had a connection to the corporate network, but no direct connection to the Internet. My task was to configure VMware Update Manager so that it could download patches from the Internet. Sounds simple, no?
I had one of my networking colleagues configure a NAT such that the private IP address of the vCenter Server would be granted access to the corporate network. Upon doing so, I could access both corporate and Internet websites by IP address. As it turns out, however, VMware Update Manager requires DNS resolution in order to function properly. If you attempt to download patches without DNS resolution configured properly, VMware Update Manager fails with the following error message: “Metadata download failed.” I find this amusing, as several VMware services, including vpxa, function via IP address alone.
The problem was that the connection to the corporate network was not allowed to communicate over port 53. As such, I could not get DNS to function through either internal or Internet resolvers. This is typically not that big of a problem, as DNS can be faked by editing the Windows hosts file located at C:\Windows\system32\drivers\etc\hosts. The issue with this approach was that I did not know which URLs VMware Update Manager needed in order to function.
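For anyone unfamiliar with the technique, the hosts file maps names to addresses one entry per line; the addresses and hostnames below are placeholders, not the actual Update Manager URLs:

```
# C:\Windows\system32\drivers\etc\hosts
# Format: <IP address>  <hostname>
# Placeholder entries for illustration only
192.0.2.10    patches.example.com
192.0.2.11    metadata.example.com
```

Windows consults this file before querying any DNS server, so entries here work even when port 53 is blocked entirely.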

DCOM Error

Today, while working on a Windows jump box, I received a Windows error message stating that DCOM had shut down unexpectedly and that, as a result, the computer would reboot in 60 seconds. I was not the only person using this jump box, so I thought nothing of it. Once the 60 seconds had elapsed, my laptop, and not the jump box, restarted.
While this was weird, I dismissed it, since I had been experiencing Windows issues with the device for some time. The computer rebooted and I logged back in. I decided to download the latest updates for my computer, but while the Windows Update site was loading, the same error message appeared, and after 60 seconds my computer rebooted again.
What was happening?
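As an aside, when an RPC/DCOM failure triggers this countdown, the pending restart can usually be cancelled from a command prompt before the timer expires, buying time to investigate:

```shell
REM Abort the pending shutdown on Windows XP/2003
REM (must be run before the 60-second timer expires)
shutdown -a
```

This only cancels the symptom, of course; it does nothing about whatever is crashing the RPC/DCOM service in the first place.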

Unable to apply DRS resource settings on host

While looking for a VM on a production vCenter Server instance the other day, I noticed that an ESX host had a red exclamation point on it. I suspected that VMware HA would need to be reconfigured on this host as I have seen this happen from time to time. Upon selecting the ESX host I received the following error message:

Unable to apply DRS resource settings on host (Reason: The name ‘<resource_pool>’ already exists.). This can significantly reduce the effectiveness of DRS.

I have to admit this was the first time I had ever seen this error message. Moreover, thinking about what the error actually meant, I did not understand how the problem could occur in the first place. At first, I tried renaming the resource pool in question, but the error message simply changed to reflect the new name I had assigned. I did not want to disable and re-enable DRS, as doing so would remove all the resource pools I had configured.
Any idea how to fix the issue?

EDA: No COS NICs have been added by the user

I use the ESX Deployment Appliance (EDA) for all of my automated ESX(i) installation needs. The other day, I was attempting to configure an automated installation of ESX 4 via the EDA, but the installation kept hanging after displaying the message: “No COS NICs have been added by the user.” I switched to a console screen (Alt+F2) and began checking weasel.log in /var/log. I quickly noticed that the server was attempting to PXE off of vmnic0. In ESX 3.5, I had configured the ksdevice value on the EDA as ethX. Based on this information, I updated the ksdevice field on the EDA. Upon attempting the installation a second time, the same message appeared and again the installation hung. This time weasel.log was of little help, as it still claimed to be booting off of vmnic0. Why was this not working?
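For context, the ksdevice value ends up on the installer's boot line, which in a PXELINUX setup looks something like the sketch below; the kickstart URL is a placeholder, and note that the ESX 4 installer (weasel) expects vmnic-style names where ESX 3.5 used ethX names:

```
label esx4
  kernel vmlinuz
  append initrd=initrd.img mem=512M ksdevice=vmnic0 ks=http://eda.example.com/ks.cfg
```

If ksdevice names a NIC that is not actually cabled to the kickstart network, the installer cannot fetch its configuration and waits at exactly the kind of message shown above.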