Out with the old
My previous homelab, although functional, was starting to hit the limits of its 32GB of RAM, particularly when running vCenter, vSAN and NSX concurrently.
A family member had a use for my old lab, so I decided to sell it and get a replacement whitebox. My requirements were:
- Quiet – As this would live in my office and be powered on pretty much 24/7, it needed to run silently
- Power efficient – I’d rather not rack up the electric bill.
- 64GB RAM support
Nice to have:
- IPMI / Remote Access
I’ve had an interest in the Xeon-D boards for quite some time; the low power footprint, SR-IOV support, integrated 10GbE, IPMI and 128GB RAM support make them an attractive offering. I spotted a good deal and decided to take the plunge on a Supermicro X10SDV-4C+-TLN4F.
The complete parts list:
Motherboard – Supermicro X10SDV-4C+-TLN4F
RAM – 64GB (4x16GB) ADATA DDR4
Case – TBC; undecided between a Supermicro 1U case and a standard desktop ITX case
Network – Existing gigabit switch. 10GbE switches are still quite expensive, but it’s nice to have future compatibility built into the motherboard.
I’ve yet to take delivery of all the components, part 2 will include assembly.
With the ever-growing popularity of SDDC solutions, I’ve decided to invest some time in learning VMware NSX and sit the VCP6-NV exam. For this I’ve re-purposed my existing homelab and configured it for NSX. I have a fairly simple setup consisting of a single whitebox “server” that accommodates nested ESXi hypervisors, and an HP Microserver acting as an iSCSI target.
Motherboard: MSI B85M-E45 Socket 1150
CPU: Intel Core i7-4785T (35W TDP)
RAM: 32GB Corsair DDR3 Vengeance
PSU: 300W be quiet! 80plus bronze
Case: Thermaltake Core v21 Micro ATX
Switch: 8-port Netgear GS108T smart switch
Cooler: Akasa AK-CC7108EP01
NAS/SAN: HP Microserver N54L, 12GB RAM, 480GB SSD, 500GB mechanical.
ESXi is installed on the physical host, with additional ESXi VMs created on top so I can play around with DRS/HA features too. The end result looks like this:
From a networking perspective I have separate port groups on my physical host for Management, VM, iSCSI, vMotion, etc., and my nested ESXi hosts have vNICs in these port groups. Due to the nature of nesting ESXi, promiscuous mode has to be enabled on those port groups on the physical host for this to work (Management doubles as the VXLAN transport network).
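As a rough sketch, the security-policy change can be applied per port group with esxcli on the physical host. The port-group names below are examples standing in for my own; note that alongside promiscuous mode, forged transmits are also commonly required for nested ESXi so that frames sourced from the nested VMs’ MAC addresses aren’t dropped:

```shell
# Enable promiscuous mode (and forged transmits) on each port group
# used by the nested ESXi hosts. Port-group names are placeholders --
# substitute the ones defined on your physical host's vSwitch.
for pg in "Management" "VM Network" "iSCSI" "vMotion"; do
  esxcli network vswitch standard portgroup policy security set \
    --portgroup-name "$pg" \
    --allow-promiscuous true \
    --allow-forged-transmits true
done
```

The same settings can of course be flipped per port group in the vSphere client under the security policy tab; the CLI just makes it repeatable.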
The actual installation of NSX is already well covered elsewhere, but this covers the basics of what I needed to do.
SDDC solutions are becoming increasingly popular, and although I’m probably a little biased, I would say that NSX is leading the software-defined networking front. Following on from my previous post about my nested NSX homelab, I sat (and thankfully passed) the VCP6-NV exam. The resources I used:
The official cert guide – https://www.amazon.co.uk/VCP6-NV-Official-2V0-641-Vmware-Certification/dp/0789754800/ref=sr_1_1?ie=UTF8&qid=1493819639&sr=8-1&keywords=vcp6-NV
Practice (non exam) questions – http://www.elasticsky.co.uk/practice-questions/
Blueprint – https://mylearn.vmware.com/mgrReg/plan.cfm?plan=95141&ui=www_cert
Pluralsight NSX videos
Lots of time
This was a particularly tough exam; as I would describe myself as somewhat inexperienced in NSX compared to vSphere, I found it challenging. I’ve noticed that VMware have recently revised the exam, reducing the number of questions from 85 to 77, which is nice.