My New Homelab!

I finally have a new home lab server!

Getting a new lab server was so long overdue. I’m so happy it is here so I can start BLOGGING again. I’ve missed it over the past few months since joining VMware last May. I’ve been so busy with onboarding and getting acclimated with my customers that all of this took a back seat for a little while. Everything at VMware is going better than expected, and now I finally have my hands on this!

We all know there are a lot of “home labbers” out there in the tech world. Every home labber goes through the same thing at some point. Where do I start? What should I use? What’s recommended? What should I avoid? That list of questions can go on forever but at some point you need to just pull the trigger. Pick something and roll with it.

I’m the type who likes to keep things simple. There are a lot of home lab builds that I have seen online that would rival some nuclear missile silos or something HAHA…I don’t need that for what I want to do. But if that’s the type of lab you want I say go for it, go crazy and most importantly have fun with it. I enjoy seeing the labs and all the creative ways people do things. I’m a huge fan!

My Bill of Materials

Let’s cover something really quickly before we get into the bill of materials. Why the Supermicro E300-9D SuperServer? I’ve been in the market for a new lab server for a few months now (after moving in September). There were two main things I wanted it to have…it had to be cost-efficient and dense in terms of resources (CPU, memory, disk, etc.). I want to be able to do a lot with it without breaking the bank! Fast forward a few months and I came across William Lam’s blog article on the Supermicro E300-9D. Awesome write-up, and after reading it a few times I was sold on the E300-9D. This was the lab server I had to get my hands on.

  • It’s cost-efficient for labbing and has a small form factor.
  • It can be very dense…up to 512GB of memory can fit in this server!
  • I can load up on some very fast flash storage…NVMe!
  • Comes with a lot of onboard networking, including 10GbE if you buy the SFP+ modules.
  • It’s on the VMware HCL!!!

Very dense, and I can expand my resources in the future and add things that I don’t necessarily want or need out of the gate. Scaling resources is always nice, isn’t it?

So what did I settle on and how much did it set me back? Here is my final Bill of Materials! The prices listed below are what I acquired them for; there’s a good chance the prices at the links provided have changed. I also used the PCI-e NVMe adapter card in my server to accommodate the additional storage.

GRAND TOTAL = $2,939.95 USD

Very happy with my purchase for just under $3k. A lot of bang for my buck! The 128GB of memory is on the low side for what it can support, but I stayed within my budget and didn’t break the bank. I can always sell these modules and put those funds toward getting to 256GB or 512GB if I want. I didn’t need to purchase any networking gear as I purchased and installed that when I moved this past September. Just have to say one thing…Ubiquiti Networks is awesome for home networking!


Single Node vSAN

If you follow me on Twitter (@vcdx245), you may have noticed something changed from my original BOM which I shared a picture of. I originally purchased a Crucial 1TB NVMe SSD drive because I found a great deal on it for only $169.99. Unfortunately it would not work in this specific server. I’m sure Micron and VMware engineering are working on getting the NVMe specs resolved for these drives in the future. But as of right now they do not work. Thanks to the awesomeness of Amazon Prime I returned it and received my new Samsung 1TB NVMe drive quickly. It worked immediately with no issues whatsoever.

After brainstorming a few ideas on how I wanted to set things up I narrowed it down to two ways that I could set this server up. The obvious one is a single-node VMware vSAN system so let’s cover that setup first.

The two (2) NVMe drives are what I want to use for vSAN: the smaller 256GB NVMe drive as the cache device and the 1TB NVMe as my capacity device. The SanDisk 2.5″ SSD drive I set up as a local 1TB VMFS datastore for storing my ISO images, OVAs and whatever else I may want on there. I like to call this my “flex space” on my system.

I installed ESXi 6.7 U1 onto the host, configured some base settings (static IP, NTP) and applied updates locally with esxcli (command below). I used the USB 3.0 PNY 32GB low-profile drive for my ESXi installation; it’s a nice, cheap option and easy to use. Could I have purchased a SATADOM drive for the SATADOM port? Certainly. I didn’t, simply to keep cost down. Maybe I’ll use one in the future!

esxcli software vib update -d "https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml"

It may take a little time for the updates to download and install depending on your Internet speed; my system update completed in less than 10 minutes, including the reboot.
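As for those base settings I mentioned (static IP, NTP and so on), most of the network side can also be done from the shell. Here’s a minimal sketch of the kind of esxcli commands I mean; the vmkernel interface name, IP addresses and DNS server below are just placeholders, so adjust them for your own environment:

# Set a static IPv4 address on the management vmkernel interface (vmk0 here)
esxcli network ip interface ipv4 set -i vmk0 -I 192.168.1.50 -N 255.255.255.0 -t static
# Add the default gateway
esxcli network ip route ipv4 add -n default -g 192.168.1.1
# Add a DNS server
esxcli network ip dns server add -s 192.168.1.1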

Once ESXi was installed and updated I checked my storage devices. There you can see my USB boot drive, my two local NVMe drives and my SATA drive. Three SSDs in one system!
[Screenshot: local storage devices on the host]
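If you’d rather check from the shell than the Host Client, this standard esxcli command lists every storage device the host sees (nothing here is specific to my build):

# List all storage devices ESXi has detected (the USB, NVMe and SATA devices should all show up)
esxcli storage core device list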

After I was patched up and ready to go I loaded up my VCSA 6.7 U1 installer and deployed vCenter Server using the “Install on a new vSAN cluster containing the target host” option during the installation (below).

[Screenshot: the vSAN bootstrap option in the VCSA installer]

I then selected my NVMe drives for vSAN. The 256GB NVMe drive is set as the ‘Cache tier’ and the 1TB NVMe drive is set as my ‘Capacity tier’ (below). I also selected ‘Enable Thin Disk Mode’, even though the screenshot doesn’t show that checkbox ticked.

[Screenshot: vSAN cache and capacity disk selection]

That’s pretty much all there is to it. I completed the deployment of my vCenter Server 6.7 appliance w/ embedded PSC and I am up and running! I have my vSAN Datastore along with a 1TB VMFS “flex space” for storing all of my VMs. Next step for me will be deploying a handful of nested ESXi hosts, NSX, vROps, vRA and so on.
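If you want to double-check the result from the host itself, these standard esxcli vSAN commands confirm the cluster membership and the disks that were claimed. This is just a quick sanity check on my part, not something the installer requires:

# Confirm the host joined the (single-node) vSAN cluster
esxcli vsan cluster get
# List the devices vSAN claimed for the cache and capacity tiers
esxcli vsan storage list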

How am I going to run all of that with only 128GB of memory? The virtual appliances for vCenter, NSX Manager, vROps and so on all have some large memory requirements when installed, but this is a lab, so I will be setting the memory much lower for each. For obvious reasons you wouldn’t do this in a production environment. This is a lab and I want to squeeeeeeeze as much as I can onto this system, which means using a few tweaks.

Alternative Lab Setup

I always like exploring other options and ideas. It’s the architect mindset that I have. That being said, what if I didn’t want to set this server up as a single-node vSAN? How would I do this if my plan is to deploy a nested ESXi cluster running vSAN, NSX and many other things on this very same system?

I start with the three (3) SSD drives installed. How do I want to consume this storage?

  1. NVMe #1: 256 GB
  2. NVMe #2: 1 TB
  3. SATA Drive: 1 TB

I set each of the drives up as a local VMFS datastore (see below). Let’s say my plan is to set up nested ESXi and run vSAN on those virtualized hosts. I’m going to use the SATA datastore for the nested ESXi boot drives and then use the vSAN cache and capacity datastores for exactly the purposes I have labeled them.

[Screenshot: the three local VMFS datastores]

Next let’s take a look at how I would configure the VMDKs for my nested ESXi hosts. I create a new VM for my first nested ESXi guest and use the following settings for my three (3) VMDKs.

  • ESXi Boot Drive = 15 GB on the SATA VMFS datastore
  • vSAN Cache Drive = 20 GB on the vSAN Cache VMFS datastore
  • vSAN Capacity Drive = 100 GB on the vSAN Capacity VMFS datastore

When I complete the wizard for creating my VM on my standalone host, the summary page will look something like this. Notice the boot VMDK is thick provisioned, lazily zeroed, whereas the two vSAN VMDKs are both thick provisioned, eagerly zeroed.

[Screenshot: nested ESXi VM VMDK configuration]

I install ESXi on my nested VM and complete the setup, then connect directly to the nested ESXi host and take a look at the virtual hardware, storage adapters and devices. You will see the Paravirtual SCSI controller (PVSCSI) and then the three VMDKs.


Simply proceed with building out the remaining nested ESXi VMs for your vSAN cluster (three minimum, the more the merrier) and just like that you’ll have a nested all-flash vSAN cluster. The only thing you will need to do on the hosts prior to setting up vSAN is SSH to each nested ESXi guest and set the capacity flag for vSAN. You can read how to do that in one of my previous blog articles…All-Flash vSAN 6.5 on Nested ESXi. Even though that blog outlines vSAN 6.5, the procedure is still the same on 6.7.
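For convenience, here’s the shape of that capacity-tagging command from the older article. The device identifier below is just a placeholder, so substitute whatever vdq -q or esxcli storage core device list reports on your nested host:

# Tag the nested capacity disk as a capacity-tier flash device for all-flash vSAN
esxcli vsan storage tag add -d mpx.vmhba0:C0:T2:L0 -t capacityFlash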

Summary

Have to say I am very pleased with my new lab server. I have an old lab server that I am going to continue to use as well; most likely I’ll set up some out-of-band management VMs or something else on there so I can really load up the VMware SDDC stack on this new system. We’ll see where things go. I really enjoy setting things up, kicking the tires, then tearing it all down and doing something else, so I highly doubt whatever I have on this system will stay static by any means. The good news is I’m back to blogging. It’s been impossible for me to blog without a lab server because I love the hands-on experience. I had a dedicated system to use at my previous employer, and my old home lab didn’t have enough resources to do what I wanted, so that’s why I needed to re-invest in something new.

When I joined VMware in May 2018 I also planned on relocating. I didn’t have to relocate very far, but the move was not only beneficial for supporting my customers but, more importantly, better for my family. It was a good decision that has benefited us in the short term and will be even better over the long term. I have missed blogging and labbing over the past 6+ months (geez, I can’t believe it’s been that long) but I’m happy to be back!

Remember, home labs are great for so many reasons. I encourage you to invest in one because that investment will enhance your skill set and your career. In some cases, depending on where you live, it’s even a tax write-off for career development. So save those receipts!

UPDATE – July 20, 2020

I updated my lab! I replaced the Patriot SCORCH 256GB NVMe M.2 device with a Western Digital Blue SN550 NVMe 1TB drive. I found a deal online that I could not pass up. I now have 2TB of NVMe capacity (3TB of flash in total) in my lab server!

[Image: Western Digital Blue SN550 NVMe drive]

My next goal is to upgrade from 128GB to 256GB of memory!


18 thoughts on “My New Homelab!”

  1. That Supermicro server is exactly what I’ve had my eyes on too! I was thinking about purchasing three for a true vSAN setup, but maybe going single-node is the way to go for a home lab. If vCenter is running on that single-node setup, how do you update ESXi (patches) and vSAN in the future, since you’ll obviously need to shut down all the VMs?
    On another note, how much power does your setup use in kilowatt-hours, and is it costly?


  2. Hi – Awesome lab! I’m considering buying a similar single server. One question: what physical network setup do you have in your lab environment?


      1. Thank you. Regarding the server, how is the noise if you’re working in the same room? I have read that some replace the fans to make it quieter.


      2. They’re loud when the system first fires up and then they’re pretty quiet. Low hum noise. It can certainly ramp up as you load more workloads. I had close to 30 running at one time and the fans started to ramp up a little. I have it in my basement where it’s very cool and dry so it’s not an issue for me.


  3. I’m looking to build a home lab myself and have considered going with a single E300-9D server and running a powerful nested lab like yours. Is there anything else to be aware of or needed in a nested lab like this that isn’t mentioned in this post? My other alternative was to buy two E200-8Ds or E300-8Ds and run a 2-node vSAN cluster, but I figure that would end up costing more with all the extra gear needed.


    1. This specific Supermicro server is quite powerful and very flexible in terms of usage. Even with the 128GB of memory I can load it up and get great performance from it, mostly due to the amount of NVMe storage I have. I’ve stood up a nested 6-node All Flash vSAN cluster on this single server without any problems and it worked great! I’m constantly building different configurations and breaking them down; rarely is anything very static on my system. That’s just how I operate. Just make sure it is stored in a cool place with ample airflow. It got pretty warm on me once when I really loaded it up. Not hot, but certainly warm.


      1. Ok, thank you. How is your physical network set up in terms of this server only? Since you run nested, what is actually needed on the physical network side? Thanks.



  4. I would have liked a post on how you have set up your physical host and configured networking in pfSense and in the nested hosts… That would have been awesome! 🙂 Cheers! Bjørn-Tore


    1. Will make note of that and work on a future blog article. I actually abandoned pfSense recently due to some strange networking behavior and replaced it with VyOS. As soon as I started using VyOS my issues disappeared.

