Quite a few cool posts on home labs lately have triggered this post about vSphere homelabs in 2014 – designs and evolution. The first one I spotted was on Erik Bussink's blog yesterday, which talks about the shift in homelab designs. Erik is right when he says that homelab requirements are exploding. He has also just published a new post on his homelab upgrade for 2014 – check it out.
As VSAN does not support the AHCI SATA standard, the only way to go is via an external IO card, or to get a motherboard with a supported controller (as is the case with some of the Supermicro server boards). But if you don't usually build your homelab with server hardware, you might want to check my post My VSAN journey – part 3, where you'll see how to search the VMware HCL page for a compatible IO storage controller.
Note that you don't have to ruin yourself buying an IO card – I picked up used ones on eBay for $79 each, a controller which is on the VSAN HCL. It might not be the most performant one, but it has been on the list of VSAN HCL IO cards since the early beta. Check out Wade Holmes's blog post on the vSphere Blog.
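Before digging into the HCL, it helps to know exactly which controller and driver ESXi actually sees in your box. From an SSH session on the host (standard esxcli/vmkchdev commands, nothing lab-specific):
# List the storage adapters ESXi has detected and the driver each one is using
esxcli storage core adapter list
# Show the PCI IDs (VID:DID SVID:SSID) of the controllers – these are what you plug into the HCL search
vmkchdev -l | grep vmhba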
As for now, concerning my lab, I tried to upgrade instead of replace. I think it'll be more difficult in the future, as two of my Nehalem i7 hosts built in 2011 max out at 24 GB of RAM, and you can be pretty sure that 24 GB of RAM per host won't give you much to play with as RAM requirements keep growing. I can still scale out, right… add another VSAN node if I need more RAM and compute in the future.
The second point Erik covers in his article is network driver support. With vSphere 5.5 many drivers are no longer included, as VMware is moving to the new native driver architecture. So it's the same story: you must search the HCL for supported NICs to use as add-on PCI or PCIe cards inside your whitebox. That's what I'm doing as well – I got gigabit NICs which are on the HCL. The trick of installing a VIB for the built-in Realtek NICs has worked until now, but what about vSphere 6.0?
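For reference, the Realtek trick is nothing more than a VIB install from the ESXi shell – the VIB file name below is only an example, use whichever community package matches your chip:
# Install the community Realtek driver VIB (example file name) and reboot afterwards
esxcli software vib install -v /tmp/net-r8168-8.013.00-x86_64.vib --no-sig-check
# After the reboot, check that the NIC shows up with its driver
esxcli network nic list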
Now I want to share further thoughts on which future design to pick. You might or might not agree, but you must agree on one (painful) point: the homelab is getting pricey…
First I had a single host, then two, then three, and now I'm running four whiteboxes. Not everything is up and running yet. When all the parts I ordered show up, I'll be able to run a two-cluster design: the first cluster with a single host as a management server (hosting vCenter, a DC, vCOPs, a backup server, etc.) and the second cluster with three hosts for VSAN.
This way I can "mess" with VSAN and break/reinstall whatever I like. The goal is to finally have my VSAN back-end traffic (with speedy 10 + 10 Gig connections) hooked to my Topspin 120 InfiniBand switch. (Note that I also ordered quieter fans on eBay – the device is just "bloody" noisy.) For now I have only tested 10 Gig speed between two nodes with a direct connection, as you can see in this video…
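Just for reference, once the links are up, pointing VSAN at a dedicated VMkernel interface only takes a few esxcli commands. The portgroup name, vmk number and IP addresses below are example values, not my actual config:
# Create a VMkernel interface on the portgroup carrying the VSAN back-end traffic
esxcli network ip interface add -i vmk2 -p "VSAN-PG"
esxcli network ip interface ipv4 set -i vmk2 -I 10.10.99.11 -N 255.255.255.0 -t static
# Tag it for VSAN traffic and verify
esxcli vsan network ipv4 add -i vmk2
esxcli vsan network list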
A quick note concerning the compatibility of the InfiniBand PCIe 4X DDR dual-port host channel adapter cards: the cards showed up out of the box with the latest vSphere 5.5 U1 release. No tweaks were necessary.
The environment is (still) hooked to an older Drobo Elite iSCSI SAN device via a small gigabit switch on a separate gigabit network, mostly for backup purposes and also for testing the PernixData solution. I'll also be getting a larger gigabit switch, as the SG300-10 I've been using until now is great but is lacking ports… :-). So I think I'll just go for its bigger brother, the SG300-28, which is fanless (the PoE model, the SG300-28P, is not fanless).
So is the future scale up or scale out?
In their posts, both Erik Bussink and Frank Denneman are going for builds based on the Supermicro X9SRH-7TF motherboard. It's a nice socket 2011 server board that can scale up to 128 GB of RAM (8×16 GB). Also, the built-in LSI 2308 IO controller is on the VSAN HCL, which makes the choice almost a no-brainer, but be ready to pay some cash for it, as the board itself is priced at about €482. Frank has listed the node (without spinning storage, with 64 GB of RAM) at about €1800. Oh, and by the way, if IPMI is something that would change your mind, the board has it.
In my series of articles with the fancy title My VSAN journey, I try to rethink the design of a low-cost VSAN cluster with (if possible) consumer parts. The vast majority of home enthusiasts might not be able to invest €1800–€1900 per node… and many folks already have some kind of whitebox in their lab anyway, so why not just transform those for VSAN?
Scale out? My criteria were low power and no noise when I originally went for a consumer-based Haswell build, which has enough power with the quad-core i7-4770S – VT-d, 8 MB of cache and hyper-threading (see Intel's ARK site). Now I'll add the IO card and another SSD, which will bring the cost from the original $700 to something like $1000–$1100 per node (note that for now I'm still using my old Nehalem i7 boxes). Yes, the 32 GB RAM barrier on Haswell builds is there, there's no powerful Xeon CPU and no IPMI. Can you live without them?
I know that no hardware is future-proof, but until now I have stayed with consumer-based whiteboxes, as server hardware is pricier. I haven't had an SSD failure even though the lab runs 7 days a week and I do all sorts of things with it. Is scale out the way to go? If you're on a budget, yes. You can build a $1000 node quite easily without the wife going through the ceiling…
Scale up? It means a bigger investment in RAM, CPU and motherboard. As you could see, the entry ticket is roughly €1900 per node. The minimum of three nodes for a VSAN homelab design might deter some from going this way, even if it's relatively future-proof. But who knows – 12 months from now the prices might be cut in half.
One could still go and buy a consumer socket 2011 board with a Core i7-4820K (4 cores) or Core i7-4930K (6 cores), and then 64 GB of RAM per node. I was thinking at one point that this might be the way to go… but for now I'm trying to use what I've got. We'll see when vSphere 6.0 is released…
Thoughts
I can't give a straight answer – there are many parameters (as usual) and many factors influencing homelab whitebox builds. The choice of a supported NIC and IO storage card definitely needs a much closer look now than was the case two years ago. Low-power builds are still my priority, so the starting point is definitely the CPU, which plays a big role in overall consumption if the box is up 7 days a week. I know that the best solution for a really low-power VSAN lab would be to go the nested way… :-)
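If you do go nested, one trick worth knowing is the usual SATP claim-rule approach for making VSAN accept one of the nested ESXi VM's virtual disks as the SSD tier – the device name below is just an example, yours will differ:
# Claim rule marking an example device as SSD so VSAN accepts it for the cache tier
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=mpx.vmhba1:C0:T1:L0 --option="enable_ssd"
# Reclaim the device so the rule takes effect, then verify it reports "Is SSD: true"
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0
esxcli storage core device list -d mpx.vmhba1:C0:T1:L0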
Links:
- Erik Bussink
- Frank Denneman
- Wade Holmes (IO cards for VSAN)
Nick Morgowicz says
Hi Vladan,
You know, an option for storage and high bandwidth could also be 4 Gb Fibre Channel. As businesses move up to 16 Gb Fibre Channel, 4 Gb hardware has gotten dirt cheap.
To give you an idea of how much my purchase last month cost, I got a Cisco MDS 9134 Fibre Channel switch, with 24 ports licensed for active connections, for $250 off eBay. That price included 24x 4 Gb Cisco SFP modules, but even if it hadn't, you can get the SFPs for $4–12 each.
I will be writing up some instructions soon on how to build and configure everything using OmniOS (or any recent Solaris) + the napp-it software as a Fibre Channel target, but you need to use QLogic HBAs to be able to easily change the driver's functionality from initiator to target. These cards were $15 for single-port up to $25–35 for dual-port; the models are QLE2460 for single-port and QLE2462 for dual.
The last thing you need is LC-LC cable, and the cheapest is the 62.5-micron stuff, which can still handle the full 4 Gb for up to 50 m. If you want to spend more money on 50-micron cable, you can go higher bandwidth over longer distances, but it was ~$10–15 for 1–3 m lengths and $20 for 10 m.
Other than that, you need to know how to create a Cisco Fibre Channel fabric, but as I learned, it's not very difficult. Then you use ZFS to create zvols and the COMSTAR iSCSI/FC components of Solaris to tie it all together. All disks show up as "Sun Fibre Channel Disk", and setting up ALUA + round robin across multiple ports is as simple as zoning an additional HBA to the target.
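Just to give a rough sketch of the zvol + COMSTAR part (commands from memory – the pool and zvol names are examples and your LU GUID will differ; the full details will be in the write-up):
# Create a sparse zvol to export as a LUN (pool/zvol name is just an example)
zfs create -s -V 200G tank/esx-lun0
# Enable the COMSTAR framework and create a logical unit backed by the zvol
svcadm enable stmf
stmfadm create-lu /dev/zvol/rdsk/tank/esx-lun0
# Make the LU visible (add host/target groups later if you want to restrict it)
stmfadm add-view 600144f0...        # use the GUID printed by create-lu
# Check that the FC target ports are online
stmfadm list-target -v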
I’m really impressed with FC, especially knowing that no matter how i build my homelab out in the future, i can go tall or wide and still have plenty of port density on my fabric!
Vladan SEGET says
Looks like another nice (cheap) alternative for a high-speed storage network for homelabs. InfiniBand is even easier, as the new vSphere 5.5 U1 has the IB drivers baked in and the cards show up right away in the UI – no driver install necessary.
Whenever you want to share your guide with the public, and if you don't have a blog, contact me for a guest post! I think our readers will be delighted :-).
Nick Morgowicz says
Sorry, I didn't make the driver part of it clear. Much like IB, FC has a lot of in-the-box support from OS vendors too. When you install the cards, whether you are using Microsoft Server 2012 R2, ESXi 5.5 or 5.5 U1, or even OmniOS, the QLogic QLE2460 or QLE2462 drivers that the vendor provides will have your card functioning and recognized without making any changes at all.
What you have to change is on the Solaris side, where you rebind the driver attached to your HBA from qlc (the QLogic initiator driver) to qlt (the QLogic target-mode driver). FC works similarly to iSCSI, in that you have one target system that serves your storage out, and all receiving systems are initiators.
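Roughly, the rebind looks like this (again from memory – check /etc/driver_aliases for the exact PCI ID your card uses; the one below is what the 4 Gb ISP2432-based QLE246x cards usually report):
# See which PCI alias the HBA is currently bound to
grep qlc /etc/driver_aliases
# Unbind the initiator driver and bind the COMSTAR target-mode driver instead
update_drv -d -i '"pciex1077,2432"' qlc
update_drv -a -i '"pciex1077,2432"' qlt
# Reboot, then confirm the ports show up in target mode
stmfadm list-target -v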
I believe that even when I went to download newer drivers to install in ESXi, VMware had already put the latest build of the driver in the 5.5 U1 image. If you want additional monitoring, there is a CIM software pack for the QLogic card that you can install, but I don't know if it's not working right in 5.5 U1 or if I didn't do something right, because I don't see anything more in my hardware provider section after installing that CIM VIB.
The QLogic (or any vendor's) FC HBA will show up under the Storage Adapters section.
I'll probably take you up on your offer of a place to publish the instructions, but it'll probably be a week or two before I'm ready. I've created a high-level outline already, but between US Tax Day coming up and my being on call at work, the next week and a half will be a little hectic.
Thank you again for the support!
Jason Shiplett says
For those of us who switched over to VSAN for our storage, IB is still the best option for price/performance.
Even if you weren't using distributed storage, I'd argue IB is the better value proposition, with affordable 10 Gb versus 4 Gb FC.
Tyson Then says
You don't even need a Fibre Channel switch. I initially had a Brocade 200E with 16 ports activated, but it was too noisy for a 1RU device. You can do a direct connection between host and SAN (like a crossover). My whitebox Nexenta CE all-flash SAN has two QLogic 2464 cards (8 ports total). My three hosts have a QLogic 2464 HBA each. Two hosts have 3 paths and one host has 2 paths. I get some really good speeds.
The spare ports on the hosts are connected to my tier 2 storage, which is just a 4-drive RAID-Z with SSD caching on another Nexenta CE SAN with a QLogic 2464.