In my home lab, I use Synology for my shared storage solution. In this post, I would like to talk about what Synology devices I use and how I have them configured.
From time to time, I find myself needing to hot-add a virtual disk to a VM to expand capacity. While the process has improved over time, I still occasionally run into issues on Linux. In this post, I will cover potential issues and provide a workaround.
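As a sketch of the kind of workaround I mean: on many Linux distributions a hot-added virtual disk is not visible to the guest until the SCSI bus is rescanned. The loop below is a minimal example, assuming the standard sysfs interface; your host adapter numbers will vary.

```shell
# Ask every SCSI host adapter to rescan its bus so the
# newly hot-added virtual disk shows up without a reboot
for host in /sys/class/scsi_host/host*; do
    echo "- - -" | sudo tee "$host/scan" > /dev/null
done

# The new disk (e.g. /dev/sdb) should now appear in the device list
lsblk
```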
I recently attempted to extend a filesystem on my Ubuntu system and ran into an interesting problem I figured would be worth sharing.
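For context, a typical extension on Ubuntu looks something like the sketch below. The device and partition names (`sdb`, partition 1) are assumptions for illustration, and `growpart` comes from the `cloud-guest-utils` package.

```shell
# Rescan the existing disk so the guest sees the larger size
# (/dev/sdb is an assumed device name)
echo 1 | sudo tee /sys/class/block/sdb/device/rescan > /dev/null

# Grow partition 1 to fill the disk, then grow the ext4 filesystem
sudo growpart /dev/sdb 1
sudo resize2fs /dev/sdb1
```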
I was recently asked to explain a discrepancy between the live storage size reported by Log Insight and the actual disk size listed on the Log Insight virtual appliance:
As you can see in the above screenshot, the delta is rather significant (almost 40 GB!). So what is going on?
While I do not frequently need to create or extend logical volumes, I find that every time I do I need to look up the commands. To make it easier for myself and others, I have decided to put together the commands I frequently run.
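A minimal sketch of the usual sequence for adding capacity with LVM; the device name (`/dev/sdc`) and the volume group and logical volume names (`vg0`, `lv_data`) are hypothetical placeholders.

```shell
# Initialize the new disk as an LVM physical volume
# (/dev/sdc is an assumed device name)
sudo pvcreate /dev/sdc

# Add the new physical volume to an existing volume group
sudo vgextend vg0 /dev/sdc

# Grow the logical volume by all free space; -r resizes the
# filesystem in the same step (via resize2fs/xfs_growfs)
sudo lvextend -r -l +100%FREE /dev/vg0/lv_data
```

The `-r` flag avoids the classic mistake of growing the logical volume but forgetting to grow the filesystem on top of it.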
I ran into a frustrating issue the other day when attempting to add multiple new virtual disks to an existing VM. Upon doing so, I received the dreaded “Operating System not found” message when the VM was started. While the underlying issue was easy to discover, the way to fix the problem became tedious due to issues with the vSphere web client. This post contains some tips and tricks to make the process easier.
I was recently doing some performance tests and wanted to try out the VMware paravirtual SCSI controller (PVSCSI). The VM I was testing was running SUSE, had the LSI controller configured, and was running VMware Tools in a vSphere 5.5 environment. Every time I tried to change to the PVSCSI controller, the VM stopped working. For those of you thinking about using PVSCSI, below are some tips that should assist.
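One common cause of a VM failing to boot after the controller swap is that the guest's initrd does not include the PVSCSI driver. A sketch for SUSE, assuming the traditional `/etc/sysconfig/kernel` mechanism (the exact path and tooling vary by release):

```shell
# Append vmw_pvscsi to INITRD_MODULES in /etc/sysconfig/kernel,
# e.g. INITRD_MODULES="... vmw_pvscsi"
# (this sed one-liner is an illustrative assumption; editing the
# file by hand works just as well)
sudo sed -i 's/^INITRD_MODULES="\(.*\)"/INITRD_MODULES="\1 vmw_pvscsi"/' /etc/sysconfig/kernel

# Rebuild the initrd so the driver is available at boot
sudo mkinitrd
```

Doing this while the VM still boots on the LSI controller means the PVSCSI driver is already in place when you make the switch.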
Behind NSX, VSAN was the next most talked about item at VMworld US 2013 and for a good reason. VSAN introduces the opportunity to change the way storage is used in the datacenter. VSAN also introduces a paradigm shift for VMware. Prior to VSAN, VMware encouraged the use of shared storage to leverage features like DRS and HA. The importance of locally attached disks was minimized even more with the support of boot from SAN and auto deploy. Now with VSAN and vFlash, local disks are important again, but is the change coming too late?
For those of you who read the MDS 9000 NX-OS 5.0(7) release notes, you may have noticed that the title of this post is one of the resolved caveats. I figured I would elaborate a bit more on this issue, as I was directly involved in discovering it and am directly involved in ensuring it gets fully resolved.
Over the last few weeks, I was engaged in an issue where all VMs backed by an NFS datastore in an environment experienced several seconds to over two minutes of latency (i.e. an inability to write to the filesystem) at approximately the same time. The write delay was so significant that it resulted in VMs crashing.
The configuration at a high level consisted of ESXi hosts with NFS datastores connected to a pair of VNXs. The VNXs were populated with SATA drives and, per EMC best practices, were configured as RAID 6. In addition, FAST VP was enabled, which requires the use of storage pools instead of RAID groups. It should be noted that storage pools are currently stated as the best-practice configuration, as they ease administration and configuration and allow for more advanced options like FAST VP. The network between the devices converged on a pair of core switches.
Based on the symptoms, it seemed logical to rule out ESXi, as the hosts were spread across different switches and different vCenter Servers and were running different versions of ESXi. Since both storage arrays were impacted at approximately the same time, it also seemed logical to rule out the storage arrays. This left the network, and specifically the core switches. The catch was that the core switches had been running stably, with the same software version and configuration, for some time.
So what was going on?