I finally have a new home lab server!
Getting a new lab server was so long overdue. I’m so happy it is here so I can start BLOGGING again. I’ve missed it over the past few months since joining VMware last May. I’ve been so busy with onboarding and getting acclimated to my customers that all of this took a back seat for a little while. Everything at VMware is going better than expected, and now I finally have my hands on this!
We all know there are a lot of “home labbers” out there in the tech world. Every home labber goes through the same thing at some point. Where do I start? What should I use? What’s recommended? What should I avoid? That list of questions can go on forever but at some point you need to just pull the trigger. Pick something and roll with it.
I’m the type who likes to keep things simple. There are a lot of home lab builds that I have seen online that would rival some nuclear missile silos or something HAHA…I don’t need that for what I want to do. But if that’s the type of lab you want I say go for it, go crazy and most importantly have fun with it. I enjoy seeing the labs and all the creative ways people do things. I’m a huge fan!
My Bill of Materials
Let’s cover something really quickly before we get into the bill of materials. Why the SuperMicro E300-9D SuperServer? I’ve been in the market for a new lab server for a few months now (after moving in September). There were two main things I wanted it to have…it had to be cost efficient and dense in terms of resources (CPU, memory, disk, etc.). I want to be able to do a lot with it without breaking the bank! Fast forward a few months and I came across William Lam’s blog article on the Supermicro E300-9D. Awesome write up, and after reading it a few times I was sold on the E300-9D. This was the lab server I had to get my hands on.
- It was cost efficient for labbing and small form factor.
- It can be very dense…up to 512GB of memory can fit in this server!
- I can load up on some very fast flash storage…NVMe!
- Comes with a lot of onboard networking, including 10GbE if you buy the SFP+ modules.
- It’s on the VMware HCL!!!
It’s very dense, and I can expand my resources in the future, adding things that I don’t necessarily want or need out of the gate. Scaling resources is always nice, isn’t it?
So what did I settle on and how much did it set me back? Here is my final Bill of Materials! The prices listed below are what I acquired them for. There’s a good chance these prices are different on the links provided. I also used the PCI-e NVMe adapter card in my server to accommodate the additional storage.
- SuperMicro SuperServer E300-9D-8CN8TP – $2,485 USD
- Skylake-D 8-core CPU (D-2146NT @ 2.30GHz)
- 4 × 32GB 2Rx4 PC4-19200 ECC-R = 128GB total memory
- Samsung 970 EVO 1TB – NVMe PCIe M.2 2280 SSD (MZ-V7E1T0BW) – $247.99 USD
- Patriot SCORCH M.2 2280 256GB PCI-e 3.0 x2 (PS256GPM280SSDR) – $43.99 USD
- SanDisk Ultra 3D 2.5″ 1TB SATA III 3D NAND SSD (SDSSDH3-1T00-G25) – $139.99 USD
- PNY Elite-X Fit 32GB USB 3.0 Flash Drive (P-FDI32GEXFIT-GE) – $8.99 USD
- M.2 NVMe SSD NGFF to PCIE 3.0 X16 /X4 Adapter M Key – $13.99 USD
GRAND TOTAL = $2,939.95 USD
Very happy with my purchase for just under $3k. A lot of bang for my buck! The 128GB of memory is on the low side for what this server can support, but I stayed within my budget and didn’t break the bank. I can always sell these modules and put those funds toward 256GB or 512GB if I want. I didn’t need to purchase any networking gear since I bought and installed that when I moved this past September. Just have to say one thing…Ubiquiti Networks is awesome for home networking!
Single Node vSAN
If you follow me on Twitter (@vcdx245), you may have noticed something changed from my original BOM which I shared a picture of. I originally purchased a Crucial 1TB NVMe SSD drive because I found a great deal on it for only $169.99. Unfortunately it would not work in this specific server. I’m sure Micron and VMware engineering are working on getting the NVMe specs resolved for these drives in the future. But as of right now they do not work. Thanks to the awesomeness of Amazon Prime I returned it and received my new Samsung 1TB NVMe drive quickly. It worked immediately with no issues whatsoever.
After brainstorming a few ideas on how I wanted to set things up I narrowed it down to two ways that I could set this server up. The obvious one is a single-node VMware vSAN system so let’s cover that setup first.
The two (2) NVMe drives are what I want to use for vSAN: the smaller 256GB NVMe drive as the cache device and the 1TB NVMe as my capacity device. The SanDisk 2.5″ SSD drive I set up as a local 1TB VMFS datastore for storing my ISO images, OVAs and whatever else I may want on there. I like to call this my “flex space” on my system.
I installed ESXi 6.7 U1 onto the host, configured some base settings including static IP, NTP and applied updates from ‘esxcli’ locally (command below). I used the USB 3.0 PNY 32GB low-profile drive for my ESXi installation. Nice cheap option and easy to use. Could I have purchased a SATADOM drive for the SATADOM port? Certainly. I didn’t simply to keep cost down. Maybe I’ll use one in the future!
esxcli software vib update -d "https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"
It may take a little time for the updates to download and install depending on your Internet speed. My system update completed in less than 10 minutes (including the reboot).
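One note in case the update command can’t reach the depot: on a default ESXi install the outbound httpClient firewall ruleset is disabled, so you may need to enable it before pulling from the online depot. A quick sketch of the full sequence (same depot URL as above):

```shell
# The httpClient ruleset is disabled by default on ESXi;
# enable it so the host can reach VMware's online depot
esxcli network firewall ruleset set -e true -r httpClient

# Apply the latest updates from the public depot
esxcli software vib update -d "https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"

# Optionally lock the ruleset back down once patched
esxcli network firewall ruleset set -e false -r httpClient
```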
Once ESXi was installed and updated I checked my storage devices. There you can see my USB boot drive, my two local NVMe drives and my SATA drive. Three SSDs in one system!
After I was patched up and ready to go I loaded up my VCSA 6.7 U1 installer and deployed vCenter Server using the “Install on a new vSAN cluster containing the target host” option during the installation (below).
I then selected my NVMe drives for vSAN. The 256GB NVMe drive is set for ‘Cache tier’ and the 1TB NVMe drive is set for my ‘Capacity tier’ (below). I selected ‘Enable Thin Disk Mode’ despite my graphic missing that tick in the check box.
That’s pretty much all there is to it. I completed the deployment of my vCenter Server 6.7 appliance w/ embedded PSC and I am up and running! I have my vSAN Datastore along with a 1TB VMFS “flex space” for storing all of my VMs. Next step for me will be deploying a handful of nested ESXi hosts, NSX, vROps, vRA and so on.
How am I going to run all of that with only 128GB of memory? The virtual appliances for vCenter, NSX Manager, vROps and so on all have some large memory requirements when installed, but this is a lab so I will be setting the memory much lower for each. For obvious reasons you wouldn’t do this in a production environment. This is a lab and I want to squeeeeeeeze as much as I can onto this system, which means using a few tweaks.
Alternative Lab Setup
I always like exploring other options and ideas. It’s the architect mindset that I have. That being said, what if I didn’t want to set this server up as a single-node vSAN? How would I do this if my plan is to deploy a nested ESXi cluster that would run vSAN, NSX among many other things in this very same system?
I start with the three (3) SSD drives installed. How do I want to consume this storage?
- NVMe #1: 256 GB
- NVMe #2: 1 TB
- SATA Drive: 1 TB
Set each of the drives up as a local VMFS datastore (see below). Let’s say my plan is to set up nested ESXi and run vSAN on those virtualized hosts. I’m going to use the SATA datastore as my ESXi boot drive and then use the vSAN cache and capacity datastores for those exact purposes, as I have labeled them.
Next let’s take a look at how I would configure the VMDK’s for my nested ESXi hosts. I create a new VM for my first nested ESXi guest and use the following settings for my three (3) VMDKs.
- ESXi Boot Drive = 15 GB on the SATA VMFS datastore
- vSAN Cache Drive = 20 GB on the vSAN Cache VMFS datastore
- vSAN Capacity Drive = 100 GB on the vSAN Capacity VMFS datastore
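If you prefer doing this from the ESXi shell instead of the wizard, the same layout could be sketched with `vmkfstools` (the datastore and VM folder names here are placeholders, not my actual labels):

```shell
# Boot disk: thick provisioned, lazily zeroed
vmkfstools -c 15G -d zeroedthick \
  /vmfs/volumes/SATA-Datastore/nested-esxi-01/boot.vmdk

# vSAN cache and capacity disks: thick, eagerly zeroed
vmkfstools -c 20G -d eagerzeroedthick \
  /vmfs/volumes/vSAN-Cache/nested-esxi-01/cache.vmdk
vmkfstools -c 100G -d eagerzeroedthick \
  /vmfs/volumes/vSAN-Capacity/nested-esxi-01/capacity.vmdk
```

The provisioning types match what the wizard produces: `zeroedthick` for the boot disk and `eagerzeroedthick` for the two vSAN disks.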
When I complete the wizard for creating my VM on my standalone host, the summary page will look something like this. Notice the boot VMDK is thick, lazily zeroed, whereas the two vSAN VMDKs are both thick, eagerly zeroed.
I install ESXi on my nested VM and complete the setup; connect directly to my nested ESXi host and take a look at the virtual hardware, storage adapters and the devices. You will see the Paravirtual SCSI controller (PVSCSI) and then the 3 VMDKs.
Simply proceed with building out the remaining nested ESXi VMs for your vSAN cluster (3 minimum, the more the merrier) and just like that you’ll have a nested all-flash vSAN cluster. The only thing you will need to do on the hosts prior to setting up vSAN is SSH to each nested ESXi guest and set the capacity flag for vSAN. You can read how to do that in one of my previous blog articles…All-Flash vSAN 6.5 on Nested ESXi. Even though the blog outlines vSAN 6.5, the procedure is still the same on 6.7.
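For reference, tagging the capacity device comes down to one `esxcli vsan` command per host (the device identifier below is just an example; yours will differ):

```shell
# List the local devices to find the disk intended for capacity
esxcli storage core device list

# Tag that disk as capacityFlash so vSAN claims it as an
# all-flash capacity device (device name is an example)
esxcli vsan storage tag add -d mpx.vmhba0:C0:T2:L0 -t capacityFlash
```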
Have to say I am very pleased with my new lab server. I have an old lab server that I am going to continue to use as well. Most likely I’ll set up some out-of-band management VMs or something else on there so I can really load up the VMware SDDC stack on this new system. We’ll see where things go. I really enjoy setting things up, kicking the tires, then tearing it down and doing something else. So I highly doubt whatever I have on this system will be static by any means. The good news is I’m back to blogging. It’s been impossible for me to blog without a lab server because I love the hands-on experience. I had a dedicated system to use at my previous employer, and my old home lab didn’t have enough resources to do what I want, so that’s why I needed to re-invest in something new.
When I joined VMware in May 2018 I also planned on relocating. I didn’t have to relocate very far, but the move was not only beneficial for supporting my customers but, more importantly, better for my family. It was a good decision that has benefited us in the short term and will be better over the long term. I have missed blogging and labbing over the past 6+ months (geez, I can’t believe it’s been that long) but I’m happy to be back!
Remember home labs are great for so many reasons. I encourage you to invest in one because that investment will enhance your skill-set and your career. In some cases, depending on where you live, it’s a tax write off for career development. So save those receipts!
UPDATE – July 20, 2020
I updated my lab! I replaced the Patriot SCORCH 256GB NVMe M.2 device with a Western Digital Blue SN550 NVMe 1TB drive. I found a deal online that I could not pass up. I now have 3TB of NVMe capacity in my lab server!
My next goal is to upgrade from 128GB to 256GB of memory!