Hooray, I made it through the class! Now it is time to study and take the VCP test.
What I learned:
- If you have a VM with its vNIC disconnected and you then enable FT on the VM, the only way to reconnect the vNIC is to mark it connected, power off the VM, and power it back on. Simply connecting the vNIC or just rebooting the VM is not sufficient. This appears to be true for any settings change on an FT-protected VM.
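The power-cycle workflow above can be sketched from the ESXi Tech Support Mode shell. This is a hedged sketch: the `vim-cmd` sub-commands are assumed to be available on your build, and `<vmid>` and `<key>` are placeholders you must look up first, not literal values.

```shell
# Hedged sketch (ESXi shell): reconnect the vNIC, then fully
# power-cycle the FT-protected VM -- a guest reboot is not enough.
vim-cmd vmsvc/getallvms                    # note the VM's Vmid
vim-cmd vmsvc/device.getdevices <vmid>     # note the vNIC's device key
vim-cmd vmsvc/device.connection <vmid> <key> true   # mark the vNIC connected
vim-cmd vmsvc/power.off <vmid>             # full power off, not a reboot
vim-cmd vmsvc/power.on <vmid>
```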
- vCenter Server Heartbeat can work over a WAN even if the primary and secondary vCenter Server instances have different public IP addresses. This is a very cool feature and hopefully can be spun off to allow for a supported way to easily change the IP address of a vCenter Server instance.
- vDR only supports file-level restores from the command line. In addition, vDR only allows backups from the Hosts and Clusters view, which means you cannot back up based on the folders under the VMs and Templates view. One other note: if a vDR VM were to die, another vDR instance could be brought up and attached to the backup disk of the previous vDR instance, allowing restores. vDR is a great alternative to VCB, which will no longer be supported after vSphere 4.1.
- By default, ESXi partitioning creates a 4GB VFAT scratch partition for system swap. If the scratch partition is not enabled, the host instead backs scratch with a ramdisk, consuming an additional 512MB of physical memory on the host. This is different from ESX, where the recommendation is to create a swap file that is twice the size of the memory allocated to the service console.
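As a hedged sketch, the scratch location can be pointed at persistent storage from the ESXi shell so the host does not fall back to the ramdisk. The datastore path and directory name below are examples, not defaults:

```shell
# Hedged sketch (ESXi Tech Support Mode shell): create a scratch
# directory on a persistent VMFS datastore, then point the host at it.
mkdir /vmfs/volumes/datastore1/.locker-esx01
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation \
    string /vmfs/volumes/datastore1/.locker-esx01
# A host reboot is required for the new scratch location to take effect.
```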
Clarifications I made:
- Instructors: If you forget the root password on ESX(i) then you need to reinstall ESX(i).
Comments: While this is one option, it is not the only option. In the case of ESX, you can boot the server into single-user mode and change the root password there. In the case of ESXi, this is not an option. As such, for ESXi the VMware-recommended solution is to perform a repair install, which reinstalls ESXi but preserves any existing VMFS datastores. The downside to this approach is that all VMs will need to be re-registered with the ESXi host. Alternatively, a Linux Live CD can be used to mount the ESXi partitions, whether running the embedded or installable version. It is important to note that making the root password blank (e.g. when resetting the root password) will result in all NFS datastores becoming disconnected. This is because NFS requires root authentication to work with ESX(i).
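For classic ESX, the single-user-mode reset mentioned above looks roughly like this. This is a sketch; the exact GRUB menu steps vary by build:

```shell
# At the GRUB menu: press 'a' to append to the kernel line, add the word
# "single", and boot. ESX drops to a single-user root shell.
passwd root     # set a new root password; avoid leaving it blank,
                # since NFS datastores require root authentication
reboot          # boot normally with the new password
```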
- Instructors: VUM requires a completely separate database from vCenter Server.
Comments: This is not true. vCenter Server and VUM can share the same database, though VMware best practice is for each to have a separate database. This is explained in the ESX(i) installation guides (e.g. http://www.vmware.com/pdf/vsphere4/r40/vsp_40_esx_vc_installation_guide.pdf and http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esx_vc_installation_guide.pdf):
> vCenter Update Manager can use the same database as vCenter Server, but VMware recommends that you have separate databases for vCenter Server and vCenter Update Manager.
In my opinion, just because you can does not mean you should. My recommendation is to always separate the two. Also, just as an FYI, separate databases do not mean separate SQL instances.
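To illustrate that last point, here is a hedged `sqlcmd` sketch showing two separate databases created on a single SQL Server instance. The server, instance, and database names are assumptions for illustration only, not VMware defaults:

```shell
# Hedged sketch (sqlcmd against SQL Server): two databases, one instance.
sqlcmd -S "vcenter01\SQLEXPRESS" -Q "CREATE DATABASE VCDB"    # vCenter Server DB
sqlcmd -S "vcenter01\SQLEXPRESS" -Q "CREATE DATABASE VUMDB"   # Update Manager DB
```

Both databases live on the same instance and the same license; "separate databases" only means separate schemas and data files.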
Questions I raised:
- With vCenter Server Heartbeat, can the primary vCenter Server instance use full SQL Server while the secondary uses SQL Express? They did not know. I believe the answer is no, but even if you could, this would not be a best practice. I can understand it from a cost perspective, but if you need vCenter Server Heartbeat and can afford it, then you can likely afford another SQL license if required. Also, I believe two Windows licenses are needed, but a single SQL license is sufficient.
- VUM allows you to control the schedule of patch deployment as well as force powered-off and suspended VMs to power on for remediation. What happens if a cluster has DPM enabled and an ESX host is in standby mode (i.e. will the host be powered on for remediation)? They said VUM will not force a host out of standby mode, and as such, hosts in DPM-enabled clusters are not guaranteed to remediate. With that said, a host may still come out of standby mode during cluster remediation: each host being remediated must enter maintenance mode, and evacuating its VMs may require additional cluster capacity because of current load or HA requirements.
© 2010, Steve Flanders. All rights reserved.