Why VMware VSAN Matters

After NSX, VSAN was the most talked-about item at VMworld US 2013, and for good reason. VSAN introduces the opportunity to change the way storage is used in the datacenter. It also represents a paradigm shift for VMware: prior to VSAN, VMware encouraged the use of shared storage to leverage features like DRS and HA, and the importance of locally attached disks was minimized even further with support for boot from SAN and Auto Deploy. Now, with VSAN and vFlash, local disks matter again. But is the change coming too late?

Continue reading

Cisco MDS CPU utilization of 100 percent seen in certain situations

For those of you who read the release notes for MDS 9000 NX-OS 5.0(7), you may have noticed that the title of this post is one of the caveats listed as resolved. I figured I would elaborate a bit on this issue, as I was directly involved in discovering it and am directly involved in ensuring it gets fully resolved.

Continue reading

EMC FLARE <32 RAID 6 Implementation

Over the last few weeks I was engaged in an issue where all VMs backed by NFS datastores in an environment experienced anywhere from several seconds to over two minutes of latency (i.e., an inability to write to the filesystem) at approximately the same time. The write delays were so significant that they caused VMs to crash.
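One way to catch stalls like this in the act is a simple write probe run inside a guest whose disks live on the affected datastore. This is a minimal sketch, not what was actually run during the investigation; the file path and threshold are placeholders. It performs a small synchronous write every few seconds and logs any write that stalls:

```python
import os
import time

PROBE_FILE = "/mnt/probe/latency-probe.bin"  # placeholder: any file on the affected storage
STALL_THRESHOLD = 1.0                        # seconds; anything slower gets logged

while True:
    start = time.time()
    with open(PROBE_FILE, "wb") as f:
        f.write(b"x" * 4096)    # small 4 KiB write
        f.flush()
        os.fsync(f.fileno())    # force the write through the page cache to storage
    elapsed = time.time() - start
    if elapsed > STALL_THRESHOLD:
        print(f"{time.ctime()}: write stalled for {elapsed:.1f}s")
    time.sleep(5)
```

Running a probe like this in several VMs at once makes it easy to see whether the stalls line up in time.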

The configuration, at a high level, consisted of ESXi hosts with NFS datastores connected to a pair of VNXs. The VNXs contained SATA drives and, per EMC best practices, were configured with RAID 6. In addition, FAST VP was enabled, which requires the use of storage pools instead of RAID groups. It should be noted that storage pools are currently EMC's stated best-practice configuration, as they ease administration and configuration and allow for more advanced features like FAST VP. The network between the devices converged on a pair of core switches.

Based on the symptoms, it seemed logical to rule out ESXi: the hosts were spread across different switches, were managed by different vCenter Servers, and were running different versions of ESXi. Since both storage arrays were impacted at approximately the same time, it also seemed logical to rule out the arrays. That left the network, and specifically the core switches. The catch was that the core switches had been running stably, on the same software version and configuration, for some time.

So what was going on?

Continue reading

ESXi LUN ID Maximum

The VMware Configuration Maximums document is something I reference quite often. One configuration maximum that became relevant for me this week was under ESXi Host Maximums – Storage Maximums – Fibre Channel: LUN ID. According to the document, the maximum LUN ID is 255, but what does that mean? Does it mean that you can have a maximum of 255 LUN IDs, or that the highest LUN ID allowed is 255?

For those who know the answer, let me explain where my confusion came from:

  1. Two items above LUN ID in the Configuration Maximums document is ‘LUNs per host’, with a maximum of 256. Like most numbering in Linux (e.g., arrays), LUN IDs start at 0. This means LUN IDs 0 through 255 are valid and total 256, the maximum number of ‘LUNs per host’.
  2. Looking at the storage side, a very important piece of information is the maximum number of LUNs per storage system. For an EMC VNX7500, the maximum number of LUNs (including private LUNs) is 8192. Since every LUN must have a unique LUN ID, at a minimum LUN IDs 0 through 8191 must be valid on a VNX7500 (see the sketch after this list for one way to check which LUN IDs a host actually sees).
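
If you want to see which LUN IDs a host actually reports, here is a rough sketch using pyVmomi; the hostname and credentials are placeholders, and it assumes a standalone ESXi host with a single compute resource in its inventory. It walks the host's SCSI topology and prints the LUN ID for every path target:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details for a standalone ESXi host.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret", sslContext=ctx)
try:
    # On a standalone host the inventory holds one datacenter and one compute resource.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    for adapter in host.config.storageDevice.scsiTopology.adapter:
        for target in adapter.target:
            for lun in target.lun:
                # lun.lun is the LUN ID the host sees behind this adapter/target
                print(f"{adapter.adapter} target {target.target}: LUN ID {lun.lun}")
finally:
    Disconnect(si)
```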

So why was I looking at this maximum in the first place?

Continue reading

EMC storage, DAE failures, and vertical striping

EMC’s best practice for creating storage pools and RAID groups on mid- to low-end storage arrays (e.g., CLARiiON or VNX) has always been to create them using disks from a single Disk Array Enclosure (DAE). This configuration is sometimes referred to as horizontal striping. You may be wondering why, since a complete DAE failure would result in data unavailability for all pools/groups on that DAE. As such, you may be tempted to create pools/groups across multiple DAEs, which is sometimes referred to as vertical striping. To understand the reasons, you first need to understand the physical configuration of EMC's mid- to low-end storage arrays and then some DAE failure scenarios.
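
Before getting into the details, a toy model helps frame the trade-off. The disk counts and layouts below are made up for illustration and are not EMC's actual placement rules; the model just counts how many disks each RAID group loses when a single DAE fails under each layout:

```python
# Toy model: 4 DAEs with 8 disk slots each, and 4 RAID-6 groups of 8 disks.
# Disks are identified as (dae, slot) pairs.
DAES, SLOTS = 4, 8

# Horizontal striping: each group lives entirely in one DAE.
horizontal = {g: [(g, s) for s in range(SLOTS)] for g in range(4)}

# Vertical striping: each group takes two slots from every DAE.
vertical = {g: [(d, g * 2 + s) for d in range(DAES) for s in range(2)]
            for g in range(4)}

def disks_lost(layout, failed_dae):
    """Disks each RAID group loses when one DAE goes dark."""
    return {g: sum(1 for dae, _ in disks if dae == failed_dae)
            for g, disks in layout.items()}

print("horizontal:", disks_lost(horizontal, failed_dae=0))  # {0: 8, 1: 0, 2: 0, 3: 0}
print("vertical:  ", disks_lost(vertical, failed_dae=0))    # {0: 2, 1: 2, 2: 2, 3: 2}
```

In the horizontal layout, a DAE failure takes one group completely offline and leaves the rest untouched; in the vertical layout, every group loses two disks, which is exactly RAID 6's failure tolerance. Why EMC nonetheless recommends the horizontal layout comes down to the physical details covered in the full post.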

Continue reading

Creating an LVM filesystem

This post is mostly for my own reference, but hopefully someone else finds it useful as well. There are several posts online about LVM, but the directions I follow are:
http://www.debuntu.org/how-to-install-ubuntu-over-lvm-filesystem
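
For quick reference, the standard LVM creation sequence is short. This is a rough sketch driven from Python purely for illustration; the device (/dev/sdb1), volume group name, and size are placeholders, and the commands are destructive, so double-check the target disk first:

```python
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate", "/dev/sdb1"])                       # tag the partition as an LVM physical volume
run(["vgcreate", "vg0", "/dev/sdb1"])                # create a volume group from it
run(["lvcreate", "-L", "10G", "-n", "data", "vg0"])  # carve out a 10 GiB logical volume
run(["mkfs.ext4", "/dev/vg0/data"])                  # create a filesystem on the new LV
```

After that, the new filesystem can be mounted like any other block device.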