3D Printed RPI Rack

PI Rack Mounted in the Rack

I printed the Raspberry Pi Blade Center found on Thingiverse on my Ultimaker 2. It was a relatively easy set of parts to print; the majority of the time went into cleaning up the large number of Pi holders. I assembled the units with threaded rods and lock nuts. The threaded rods had to be cut to size, but in the end everything was assembled in about a weekend after printing was done. To fit RPi 3B+ boards with the PoE HAT, I did end up hand modifying some of the trays; I don't have a model of those changes, so they can't be printed in that form.

Before trimming the threaded rods

In the end, I didn't use the wiring clips that are modeled into the blade center, since all I'm running in it right now are PoE-powered Pis; the rest of my lab Pis live elsewhere. I printed the primary components in PLA, while some of the trays are ABS, since I wanted a few different colors as a way to tell which Pis are for which services. Currently the Pis are running an IPv6 tunnel and some Docker service experiments.

Home Built Server Rack

Finished Product

In order to organize my lab a bit better, I decided to custom build a server rack. It was to be the same height as my desk so it could double as extra work surface. I determined that a 15U rack would be the best size, and it would give me some room to grow, since I currently only have a few things that can fit in the rack (the Fractal Design Define XL R2 is far too big to fit, so it's just the vhost, networking equipment, and some RPis). First off, the design, which ended up being slightly incorrect on the sizing of one of the components.

The Schematics used

I didn't take into account the top panel of the rack, so I ended up changing some of the board layouts; everything still worked out since all of the boards are the same thickness. The rack is built out of cabinet board for the sides and top, with 1x4s inside to add reinforcement along the rails.
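
For reference, a quick sizing sanity check, assuming the standard 1.75 in per rack unit and a nominal 3/4 in board thickness (placeholder numbers, not the exact dimensions of this build):

    # Rough 15U sizing check; the board thickness here is an assumption, not a measurement.
    RACK_UNIT_IN = 1.75            # standard height of one rack unit
    UNITS = 15
    BOARD_IN = 0.75                # nominal cabinet-board thickness (assumed)

    rail_opening = UNITS * RACK_UNIT_IN   # 26.25 in of rail space for 15U of gear
    overall = rail_opening + BOARD_IN     # plus the top panel the original drawing left out
    print(f"rail opening: {rail_opening} in, overall height: {overall} in")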

Assembling the sides

The sides were assembled first; then, to make absolutely certain I hadn't sized things wrong, the rest was assembled with rack shelves and a 10/100 switch holding the sides the right distance apart. I used a large number of wood screws to hold the rails onto the sides, and wood glue between the side panels and the 1x4s. This leaves a bit of a gap between the servers and the side panels that wiring can run through, or lighting (I've considered adding some LED lighting for when I have to work on the hardware in there).

Once everything was assembled, it was on to sanding and finishing. I used Danish oil, as it should be a durable finish that lasts a long time and looks pretty good.

Sanding
Oiling
First Layout, UPS in the rack

The first layout had the UPS in the rack, but I decided against this later, as it took up a majority of the space that I would rather fill with hardware. The current hardware includes my vhost, two RPi 3B+ boards with PoE HATs, a UniFi L2 PoE switch, and a USG Pro. The network gear is rear mounted, which makes moving hardware around a massive pain, but it's easy enough to get to for connecting devices to the network.

VHost Build

I had been looking at servers for a new vhost for a few months, trying to determine what I would need and what I could utilize. I was running out of RAM in my NAS to run virtual machines, and I kept hitting times when I wanted to reboot the NAS for updates but didn't want to take down my entire lab. So I decided to build a new server to host the virtual machines that are less reliant on the NAS (IRC bots, websites, wiki pages, etc.) as well as the more RAM-intensive systems. The new server was to have a large amount of RAM (a maximum above 32GB), which would give me plenty of room to play. I plan on leaving some services running on the NAS, which has plenty of resources to handle a few extra duties like Plex and backups. The new vhost was also to be all flash; the NAS uses a WD Black drive as its virtual machine drive, which slows down when running updates on multiple machines. I may also upgrade the NAS to an SSD for its cache drive and VM drive in the future.

Here is what I purchased:

System: Dell T5500
CPU: 2x Intel Xeon E5-2620
PSU: Dell 875W
RAM: 64GB ECC
SSD (OS): Silicon Power 60GB
SSD (VM): Samsung 850 EVO 500GB
NIC: Mellanox ConnectX-2 10G SFP+

The tower workstation was chosen because it is a quiet system that supports ECC RAM and offers a fair bit of expandability. The 10G NIC is hooked up as a point-to-point link to the NAS. The motherboard's 1G NIC is set up as a simple connection to the rest of my LAN. The VM SSD is formatted as ext4 and added to Proxmox as a directory storage. All VMs are going to use qcow2 disk images, as they support thin provisioning and snapshots. I considered LVM-Thin storage, but it seemed to add complexity without any feature or performance gains.
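
For reference, a minimal sketch of how that directory storage can be set up from the Proxmox host; the storage ID, mount point, and disk size below are placeholders, not the exact names used on this box:

    # Sketch: register the ext4 SSD as a Proxmox directory storage and create a
    # thin-provisioned qcow2 image on it. IDs, paths, and sizes are placeholders.
    import os
    import subprocess

    STORAGE_ID = "vm-ssd"        # hypothetical storage name
    MOUNT_POINT = "/mnt/vm-ssd"  # where the ext4-formatted SSD is mounted

    # Add the directory as storage that can hold VM disk images.
    subprocess.run(
        ["pvesm", "add", "dir", STORAGE_ID,
         "--path", MOUNT_POINT, "--content", "images"],
        check=True,
    )

    # qcow2 images are sparse: a "100G" disk only consumes space as the guest
    # writes to it, and the format supports snapshots.
    os.makedirs(f"{MOUNT_POINT}/images/100", exist_ok=True)
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         f"{MOUNT_POINT}/images/100/vm-100-disk-0.qcow2", "100G"],
        check=True,
    )

In practice the disk itself would normally be created through the Proxmox web UI or qm when the VM is built; the commands above just show what ends up on the ext4 volume.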

NAS Build

I started my original NAS build with inexpensive, quality consumer components, but by now it has become a strange chimera of enterprise and consumer gear. The main goals: low power, quiet, and high storage density.

With that focus, the main decision was the case: eight HDD bays were the minimum, and having a few 5.25″ bays allowed me to use a 5-in-3 cage to add more. From some research, another stack of HDD cages can also be added to the case with relative ease, bringing the total number of disks held to roughly 21.
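
A quick breakdown of where that ~21 figure comes from (the size of the extra cage stack is an assumption that happens to line up with the total):

    # Drive-bay math for the Define XL R2; the extra-cage count is an assumption.
    internal_bays = 8      # stock 3.5" bays in the case
    five_in_three = 5      # 5-in-3 cage in the 5.25" bays
    extra_stack = 8        # added cage stack (assumed)

    print(internal_bays + five_in_three + extra_stack)  # ~21 drives total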

Case: Fractal Design Define XL R2
CPU: Intel Xeon E3-1245 v2
Motherboard: ASRock Pro4-M
PSU: Antec BP550
RAM: G.Skill Ripjaws X (32GB total)
HBA: LSI 9201-16i
HDD: Various
NIC: Intel Pro/1000 VT, Chelsio dual-port 10G SFP+
Extras: Norco 5 x 3.5″ HDD cage

The HDD list is a bit eclectic; I have used whatever was on sale and cheap.

Brand Size Model
WD 2TB Green
WD 2TB Green
WD 2TB Green
Seagate 3TB Barracuda
WD 3TB Red
WD 3TB Green
Seagate 4TB Barracuda
HGST 6TB NAS
Toshiba 6TB x300
WD 500GB Black

The NAS is used to host a couple of virtual machines and Docker containers, and it runs unRAID as the operating system. The motherboard was picked for its low price and high expandability; the mATX board has three PCIe x16 slots, which easily handle HBAs, 10Gb networking, etc.

The 5-in-3 cage added a good bit of storage density to the case. I replaced its original fan with a Noctua to reduce noise.

Here's a shot of the 5-in-3 from the outside. It is not hot-swap capable and has no backplane, which reduces complexity and potential failure points; I don't need those features anyway, so the cage cost very little. I did drill new mounting holes in the case to set it back about half an inch, which allows the fan to pull air in through the slots on the side of the case.

A bit dusty, but it keeps running. The HBA is mounted in a PCIe x16 slot.

The case is a giant Fractal Design Define XL R2, lined with sound insulation, making it a nice quiet obelisk.