VHost Build

I spent a few months looking at servers for a new VHost, trying to determine what I would need and what I could utilize. My NAS was running out of RAM for virtual machines, and I kept hitting awkward moments when I wanted to reboot the NAS for updates but didn't want to take down my entire lab. So I decided to build a new server to host the virtual machines that were less reliant on the NAS (IRC bots, websites, wiki pages, etc.) as well as other more RAM-intensive systems. The new server was to support a large amount of RAM (a maximum capacity above 32GB), which would give me plenty of room to play. I plan on leaving some services running on the NAS, which has plenty of resources to handle a few extra duties like Plex and backups. The new VHost was also to be all flash; the NAS uses a WD Black drive as its virtual machine host drive, which slows down when running updates on multiple machines at once. In the future I may also upgrade the NAS to an SSD for its cache drive and VM drive.

Here is what I purchased:

System: Dell T5500
CPU: 2x Intel Xeon E5-2620
PSU: Dell 875W
SSD (OS): Silicon Power 60GB
SSD (VM): Samsung 850 EVO 500GB
NIC: Mellanox ConnectX-2 10G SFP+

The tower workstation was chosen because it is a quiet system that supports ECC RAM and offers a fair bit of expandability. The 10G NIC is hooked up as a point-to-point link to the NAS, while the motherboard's 1G NIC provides a simple connection to the rest of my LAN. The VM SSD is formatted as EXT4 and added to Proxmox as a directory. All VMs will use qcow2 disk images, as they support thin provisioning and snapshots (see the sketch below). I considered LVM-thin storage as well, but it seemed to add complexity without any feature or performance gains.
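Here is a minimal sketch of that setup on the Proxmox host. The device name, mount point, storage ID, and VM ID below are placeholders for illustration, not necessarily what I used:

```
# Format the VM SSD with EXT4 and mount it persistently
# (/dev/sdb and /mnt/vm-ssd are assumed names for this example).
mkfs.ext4 /dev/sdb
mkdir -p /mnt/vm-ssd
echo '/dev/sdb /mnt/vm-ssd ext4 defaults 0 2' >> /etc/fstab
mount /mnt/vm-ssd

# Register the mount point with Proxmox as directory storage for VM images.
pvesm add dir vm-ssd --path /mnt/vm-ssd --content images

# Create a VM and give it a thin-provisioned qcow2 disk on that storage;
# the 32GB image only consumes space as data is written.
qm create 100 --name test-vm --memory 2048 --net0 virtio,bridge=vmbr0
qm set 100 --scsi0 vm-ssd:32,format=qcow2
```

Snapshots can then be taken with `qm snapshot 100 <snapname>` and rolled back later.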

NAS Build

I started my original NAS build with inexpensive but decent consumer components, but by now it's become a strange chimera of enterprise and consumer gear. The main goals: low power, quiet operation, and high storage density.

With that focus, the main decision was the case: eight HDD bays were the minimum, and a few 5.25″ bays would let me use a 5×3 cage to add more. Some research also showed that another stack of HDD cages can be added to the case with relative ease, bringing the total number of disks it can hold to roughly 21.

Case: Fractal Design Define XL R2
CPU: Intel Xeon E3-1245 v2
Motherboard: ASRock Pro4-M
PSU: Antec BP550
RAM: G.Skill Ripjaws X (32GB total)
HBA: LSI 9201-16i
HDD: Various
NIC: Intel Pro/1000 VT, Chelsio dual-port 10G SFP+
Extras: Norco 5 x 3.5″ HDD cage

The HDD list is a bit eclectic; I have used whatever was cheap and on sale.

Brand Size Model
WD 2TB Green
WD 2TB Green
WD 2TB Green
Seagate 3TB Barracuda
WD 3TB Red
WD 3TB Green
Seagate 4TB Barracuda
Toshiba 6TB X300
WD 500GB Black

The NAS hosts a couple of virtual machines and Docker containers, and it runs unRAID as the operating system. The motherboard was picked for its low price and high expandability: the mATX board has 3 PCIe x16 slots, which will easily handle HBAs, 10Gb networking, etc.
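The containers are managed through unRAID's Docker tab, but underneath it comes down to ordinary docker run invocations. As an example of the kind of container the NAS runs, here is roughly what a Plex container looks like with the linuxserver image; the paths and IDs are common unRAID conventions, not necessarily my exact settings:

```
# Approximately what unRAID generates for a Plex container;
# 99/100 are unRAID's default nobody:users IDs, and the /mnt/user
# paths are example share locations.
docker run -d \
  --name=plex \
  --net=host \
  -e PUID=99 -e PGID=100 \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/data \
  lscr.io/linuxserver/plex
```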

The 5×3 cage added a good bit of storage density to the case. I replaced its stock fan with a Noctua to reduce noise.

Here's a shot of the 5×3 from the outside. It is not a hot-swap unit and has no backplane, which removes some complexity and potential failure points; since I don't need either feature, the cage cost very little. I did drill new mounting holes in the case to set it back about half an inch, which lets the fan pull air in through the slots on the side of the case.

A bit dusty, but it keeps running. The HBA is mounted in a PCIe x16 slot.

The case is a giant Fractal Design Define XL R2, lined with sound insulation, making it a nice quiet obelisk.