It was a sad day when I realized my MacBook Pro was more powerful than my home lab server. However, it was the fact that the RAID card in my home lab server could not perform on ESXi 5.x that finally put me in the market for some new gear. So the question is: what did I get and why?
In my experience, resource requirements for virtualization projects can be prioritized as:
- Memory – you would be amazed how much memory some applications require. In addition, home labs tend to require more and more VMs for testing purposes. I find that memory is the number one system constraint. The good news is that 8GB DIMMs are cheap. When targeting memory capacity, I would not look for anything under 16GB, and I personally prefer at least 32GB.
- Storage and more specifically IOPs – SATA drives are cheap and come in large capacities, but they do not perform well by themselves. The real key to performant storage is IOPs. I find that once you have enough memory, storage becomes your next bottleneck. A good RAID controller with on-board cache is the way to go. While a battery backup unit (BBU) is even better, you need to weigh cost against performance.
- CPU – it is rare to find that CPU is your bottleneck. With that said, I would typically recommend a dual-processor system for optimal results in any environment.
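To make the IOPs point concrete, here is a back-of-envelope sketch of how RAID write penalties eat into raw spindle performance. The per-drive IOPs figure and the workload mix are illustrative assumptions, not measurements from any particular hardware:

```python
# Rough estimate of usable IOPs for a RAID set.
# Assumption: ~80 IOPs per 7200 RPM SATA drive (typical rule of thumb).

# Classic write penalties per RAID level (writes per logical write).
RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def effective_iops(drive_iops, drive_count, raid_level, read_pct):
    """Approximate usable IOPs under a mixed read/write workload."""
    raw = drive_iops * drive_count
    penalty = RAID_WRITE_PENALTY[raid_level]
    write_pct = 1.0 - read_pct
    # Each logical write costs `penalty` back-end I/Os.
    return raw / (read_pct + write_pct * penalty)

# 4 x SATA drives in RAID 5 with a 70% read workload:
print(round(effective_iops(80, 4, raid_level=5, read_pct=0.7)))  # ~168
```

Even with four drives, a RAID 5 set lands well under 200 usable IOPs for a mixed workload, which is why controller cache (and eventually SSDs) matters so much more than raw capacity.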
In my experience, trouble spots for virtualization projects can be prioritized as:
- Network – network people do a great job, but they do not have an easy job. Almost everything relies on the network, so it is the last thing you want to have problems with. Especially in home lab environments, I would advise following the KISS method when it comes to networks.
- Firewall / Load Balancer – while these might technically fit in the network section, I think it is important to highlight them separately. Misconfigurations, configuration maximums, and complex, multi-layer issues can make firewalls and load balancers a nightmare. With that said, they play an important role and are inevitable in most environments. Again, I would advise following the KISS method here.
- Storage – this is important for obvious reasons. In terms of virtualization, shared storage introduces some interesting issues as a large number of systems start to rely on a small number of devices. As mentioned earlier, IOPs plays a crucial role with storage. In a budget or resource constrained environment some kind of QoS is typically necessary to help guarantee an acceptable level of performance.
- Systems – believe it or not, systems such as infrastructure services and guest operating systems are typically not the issue. I find that if a system becomes an issue, it is typically because it is a singleton (e.g. has no redundancy) or has a complex redundancy model (e.g. VMware Fault Tolerance). In either case, it is uncommon for multiple systems to have the same problem at the same time, due in part to the sheer number of systems. With that said, when something goes wrong with systems the end result is usually not good.
Based on the above factors, I specced out some new hardware for my home lab. I was trying to be budget conscious while also ensuring a relatively high amount of performance potential. In addition, the ability to grow the hardware was important to me. After doing some research, I decided on the following hardware:
- Dell 12G PowerEdge T620 – dual 6-core CPUs, 64GB of memory, RAID card w/512MB cache, 4 x 500GB SATA drives
- Synology 713+ (has VAAI support) with two 3TB SATA drives
Now the server cost can be cut significantly with a couple of tweaks (e.g. a single 6-core CPU with 32GB of memory); however, the place where you can get real cost savings is in the vendor support contract, in this case Dell's. I understand this is a slippery slope and people have good reasons for and against support. In my experience, hardware nowadays is pretty stable. The most common hardware failure I see today is hard drives, and even those failures are relatively rare if the drives/server are treated well (e.g. kept temperature controlled). As such, I typically opt for the most basic support and hope the hardware does not die after the first year.
All in, the above configuration cost me less than $4,500, and if I had opted for cost savings (i.e. the tweaks mentioned above) I could easily have gotten it under $3,000. Both scenarios are an investment for sure. I have a couple of notes for those interested in purchasing a home lab or upgrading an existing one:
- Dell seems to offer some of the cheapest desktops on the market today and you can often find promotional codes to reduce the price even further
- When it comes to purchasing a server, CPU speed really does not matter. What I look for in a processor are the features the chipset supports and hyper-threading. As for storage, if you go with a RAID controller then I highly recommend avoiding the low-end controllers and opting for something with at least 512MB of cache.
- Synology makes some of the best NAS devices on the market today. They are not cheap by any means, but they offer a great set of features and options and have a great community following.
- Western Digital seems to offer some of the cheapest hard drives on the market today. That, combined with frequent online deals from places like Amazon, can land you really affordable capacity.
I look forward to the new hardware coming in and finding the time to bring it online!
© 2013, Steve Flanders. All rights reserved.