I printed the Raspberry Pi Blade Center found on Thingiverse on my Ultimaker 2. This was a relatively easy set of units to print; the majority of the time went into cleaning up the large number of Pi holders. I assembled the units with threaded rods and lock nuts. The threaded rods had to be cut to size, but in the end everything was assembled in about a weekend after printing was done. To fit Raspberry Pi 3B+ boards with PoE HATs, I did end up hand-modifying some trays, and I don't have a model of those changes, so they can't be printed in that form.
In the end, I didn't use the wiring clips modeled into the blade center, since all I am running in it right now are PoE-powered Pis; the rest of my lab Pis are elsewhere. I printed the primary components in PLA, while some of the trays are ABS, since I wanted a few different colors for those as a way to tell which Pis run which services. Currently the Pis are running an IPv6 tunnel and some Docker service experiments.
In order to organize my lab a bit better, I decided to custom build a server rack. It was to be the same height as my desk so it could double as extra work surface. I determined that a 15U rack would be the best size, and it would give me some room to grow, since I currently only have a few things that can fit in the rack (the Fractal Design Define XL R2 is far too big to fit, so it's just the vhost, networking equipment, and some RPis). First off, the design, which ended up being slightly off on the sizing of one of the components.
I didn't take the rack's top panel into account, so I ended up changing some of the board layouts; everything still worked, since all the boards are the same thickness. The rack is built out of cabinet board for the sides and top, with some 1×4s inside to reinforce the rails.
The sides were assembled first; then, to make absolutely certain I didn't size things wrong, the rest was assembled with rack shelves and a 10/100 switch holding the sides the right distance apart. I used a large number of wood screws to hold the rails onto the sides, and wood glue between the side panels and the 1×4s. This leaves a bit of a gap between the servers and the side panels that wiring can run through, or lighting (I've considered adding some LED lighting for when I have to work on the hardware in there).
Once everything was assembled, it was on to sanding and finishing. I used Danish oil, as it should be a solid finish that lasts a long time and looks pretty good.
The first layout had the UPS in the rack, but I decided against this later, as it took up a majority of the space, which I would rather use for hardware. The current hardware includes my vhost, two RPi 3B+ units with PoE HATs, a UniFi L2 PoE switch, and a USG Pro. The network gear is rear-mounted, which is a massive pain when moving hardware around, but it's easy enough to get to for connecting devices to the network.
The goal of my lab's DNS structure was primarily to create a very stable foundation. Second to that was the addition of two services: a local DNS server to avoid loopback issues with my ISP, and Pi-hole ad blocking.
To set this up, UCS was chosen as the domain controller/DNS server over FreeIPA. Linux was chosen as the platform since that is what the majority of my systems run, and I don't have any Windows Server licenses. UCS was installed in a VM, and a second Ubuntu VM was configured with Pi-hole. These were configured to handle local queries first, then everything else. If one of my local DNS servers is down, clients won't notice a change, since everything uses Google DNS as a backup.
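The fallback behavior described above can be sketched in a few lines of Python: build a minimal DNS query by hand and try each resolver in order, dropping to the next one on timeout. The server addresses here are hypothetical placeholders, not my actual lab addresses.

```python
import socket
import struct

def build_query(hostname, query_id=0x1234):
    """Build a minimal DNS A-record query packet (per RFC 1035)."""
    # Header: ID, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: hostname as length-prefixed labels, then QTYPE=A, QCLASS=IN
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def resolve_with_fallback(hostname, servers, timeout=2.0):
    """Try each DNS server in order; return the first raw response."""
    query = build_query(hostname)
    for server in servers:
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)
                s.sendto(query, (server, 53))
                data, _ = s.recvfrom(512)
                return server, data
        except OSError:
            continue  # server down or unreachable; fall through to the next

# Local servers first (placeholder addresses), Google DNS as the backup
SERVERS = ["192.168.1.10", "192.168.1.11", "8.8.8.8", "8.8.4.4"]
```

In practice, clients do roughly this on their own when given an ordered resolver list, which is why a local DNS outage goes unnoticed.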
3D printers have come a long way in the past few years. Prices have plummeted for basic units, putting them within anyone's reach. A Raspberry Pi can be set up alongside a basic 3D printer to enable some amazing functionality: remote control, management, and monitoring of the printer. When combined with a Pi camera, you can even create time lapses of your prints. To do this, we will be running OctoPi on the Raspberry Pi.
OctoPi is a Raspberry Pi distribution for 3d printers. Out of the box it includes:
the OctoPrint host software, including all its dependencies, preconfigured with webcam and slicing support,
mjpg-streamer for live viewing of prints and timelapse video creation, with support for USB webcams and the Raspberry Pi camera.
Hook the Pi up to a monitor and a network connection and power it up.
Once the Pi is done booting, it will show a login prompt. Above this prompt is network information about the Pi, including its IP address (in this case, 192.168.1.26). This will be needed to access the web UI of OctoPrint.
Navigate your web browser to the IP address shown.
Access control is the first thing to set up in OctoPrint. This can be done by filling out your username and password in the dialog that appears. If access control is not desired, hit "Disable Access Control"; otherwise, click "Keep Access Control Enabled". For this example, access control will remain enabled, in order to prevent unauthorized users from running the 3D printer.
After that dialog, the primary control screen for OctoPrint will display. You will not be logged in yet, however.
Clicking Login and entering the username and password you previously set up will give you the full range of configuration settings for OctoPrint.
After logging in, you will see a success message, and if updates are available, a message will state that as well. We will install updates before continuing, in order to have the most up-to-date software available.
Click through the update dialogs.
After proceeding, a flag in the corner will state that an update is in progress while the Pi runs the update.
The web browser will attempt to reconnect periodically as the server reboots. Once it has completed rebooting, you will get a prompt to reload the page.
Clicking Reload Now will bring up any changes that should be addressed. For this version of OctoPrint, there are new settings for Cura. These aren't covered in this tutorial; clicking "Finish" will close the dialog.
We are now at the point to start working with the printer and its connection to OctoPrint.
The first step is to determine the connection properties needed for your printer. For this example, we will use an Ultimaker 2. There should only be one serial port available. The baud rate should be supplied by the manufacturer; if not, it can be looked up online or found by trial and error. The printer profile will be the default profile. If the connection is successful, the settings can be saved and set to auto-connect on OctoPi's startup.
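The same connect/save/auto-connect step can also be driven through OctoPrint's REST API (`POST /api/connection`). This is a minimal sketch using only the standard library; the host address, API key, serial port, and baud rate are assumptions you would replace with your own values.

```python
import json
from urllib import request

OCTOPRINT_URL = "http://192.168.1.26"   # the IP shown on the boot screen
API_KEY = "YOUR_API_KEY"                # found under OctoPrint's settings

def connection_payload(port="/dev/ttyACM0", baudrate=250000,
                       save=True, autoconnect=True):
    """Build the JSON body for OctoPrint's POST /api/connection endpoint."""
    return {
        "command": "connect",
        "port": port,                # serial port; typically only one appears
        "baudrate": baudrate,        # 250000 is a common Ultimaker value
        "save": save,                # persist these connection settings
        "autoconnect": autoconnect,  # reconnect automatically on startup
    }

def connect_printer(**kwargs):
    body = json.dumps(connection_payload(**kwargs)).encode()
    req = request.Request(
        OCTOPRINT_URL + "/api/connection",
        data=body,
        headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
    )
    # OctoPrint answers 204 No Content when the connect command is accepted
    return request.urlopen(req).status
```

This is equivalent to clicking Connect in the web UI with "Save connection settings" and "Auto-connect" checked.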
Once successfully connected, you should start getting data back from the printer. The temperature graph will update with the hot end temperature and the bed temperature (if available).
The control tab will allow you to manually control the printer, moving the head in all directions, along with extruding the filament.
Further settings can be found in the settings menu in OctoPrint, which has the wrench icon in the top bar of the GUI.
G-code files for printing can be uploaded through the Files panel in OctoPrint.
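Uploads can likewise be scripted against OctoPrint's REST API (`POST /api/files/local` with a multipart form). A standard-library sketch follows; the host and API key are assumptions, and the multipart encoding is done by hand to avoid third-party dependencies.

```python
import uuid
from urllib import request

OCTOPRINT_URL = "http://192.168.1.26"   # the IP shown on the boot screen
API_KEY = "YOUR_API_KEY"                # found under OctoPrint's settings

def multipart_body(filename, gcode_bytes, boundary):
    """Encode a G-code file as multipart/form-data for /api/files/local."""
    return (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + gcode_bytes + f"\r\n--{boundary}--\r\n".encode()

def upload_gcode(path):
    boundary = uuid.uuid4().hex
    with open(path, "rb") as f:
        body = multipart_body(path.rsplit("/", 1)[-1], f.read(), boundary)
    req = request.Request(
        OCTOPRINT_URL + "/api/files/local",
        data=body,
        headers={
            "X-Api-Key": API_KEY,
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )
    # OctoPrint answers 201 Created when the upload succeeds
    return request.urlopen(req).status
```

Handy for sending sliced files straight from a desktop without touching the web UI.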
Raspberry Pis are neat little computers that can be placed just about anywhere, assuming there is power and network connectivity nearby. They became even more convenient with the addition of built-in Wi-Fi on the Raspberry Pi 3. One application for these small devices is home security: a small motion-sensing webcam that can record 24/7, or only when motion is detected.
There are ways to build your own system using the basic Raspbian distribution and various software packages, or you can use an operating system purpose-built for the job: MotionEyeOS.
MotionEyeOS has everything needed to run a security camera system, or simply a remote webcam monitoring system. This tutorial will cover setting up a camera for basic recording and monitoring. This will let us spy on our dog while away at work.
You will need:
A Raspberry Pi
A compatible webcam or Pi camera
An SD or microSD card for the Pi (larger is better if you plan on storing the images/videos on it)
Click the folder icon and navigate to your MotionEyeOS disk image.
Ensure the device listed is the SD card you want to write the image to; in this case, it is the G:\ drive.
Finally, hit the write button, and the image will be written to the SD card, which is all that’s needed to install MotionEyeOS.
Hook the Pi up to a monitor and a network connection and power it up.
Once the Pi is done booting, it will show a login prompt. Above this prompt is network information about the Pi, including its IP address (in this case, 192.168.1.26). This will be needed to access the web UI of MotionEyeOS.
After finding the IP address, navigate to it in your web browser of choice.
To log in, enter the username "admin" and hit Login. There is no password by default.
Click the menu button in the top left corner to pull up the options.
Here you will be able to change the admin username and password, as well as the surveillance username and password. The surveillance user is a basic user who can view the webcams but not change settings.
To add a camera, plug in a USB webcam or Pi camera and hit the dropdown next to the menu button. From here you can select cameras that have already been set up, or add a new camera.
The webcam should show up as a camera, if it does not, you may need to install drivers for it.
After adding the webcam, you can navigate to it and its settings through the same dropdown.
Once on the camera page, you can access a vast amount of settings. These let you configure working hours, motion sensing, save data locations for any videos or images, and more. If more advanced settings are needed, there is an option under General Settings to enable the advanced settings. Once settings have been modified, click the apply button in the top right corner of the settings pane to enable them.
Below are screenshots of the system in action.
By default, SSH and FTP are enabled on MotionEyeOS, so if you want to add services or install drivers, you can do so remotely. This also allows you to move any security footage off the Pi if it is being used to store the footage.
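Pulling footage off over FTP is easy to script. Here is a minimal sketch using Python's `ftplib`; the host address, credentials, and remote directory are assumptions, so adjust them to match your MotionEyeOS configuration.

```python
import os
from ftplib import FTP

# Placeholder values: set these to your Pi's address and storage path
PI_HOST = "192.168.1.26"
REMOTE_DIR = "/data/output/Camera1"
LOCAL_DIR = "footage"

def is_footage(name):
    """Match the common movie and still-image extensions."""
    return name.lower().endswith((".mp4", ".avi", ".jpg"))

def pull_footage(user, password):
    """Download all recordings from the Pi into a local folder."""
    os.makedirs(LOCAL_DIR, exist_ok=True)
    with FTP(PI_HOST) as ftp:
        ftp.login(user, password)
        ftp.cwd(REMOTE_DIR)
        for name in filter(is_footage, ftp.nlst()):
            with open(os.path.join(LOCAL_DIR, name), "wb") as f:
                ftp.retrbinary("RETR " + name, f.write)
```

Run on a schedule (cron, Task Scheduler), this keeps an off-device copy of recordings, so footage survives even if the Pi itself is taken.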
The network in the house has never really sat well with me, from the pre-installed phone splitter to the gaping hole the contractors left in the wall for the cables. It all came to a head when I realized there were NO outlets near the Ethernet runs for the entire house.
The first revision of my network was a single 8-port switch on a shelf with an extension cord running under the door. This was sub-optimal for a few reasons: first, there are 10 ports to connect and this was an 8-port switch, which is easy to fix with a bigger switch; second, and more importantly, power came from an extension cable running out of the closet. It was an ugly setup and needed to be updated.
The power cord went to the switch's wall wart, which was plugged into an extension cord that ran out of the closet and into a bathroom in order to actually get power to the network.
I removed the need for a power cable by swapping to a PoE setup. My office runs a PoE switch, and the switch in the closet was swapped for one that can be powered by PoE. The hardware is UniFi switches: the USW8-150W to supply power, and two USW8s to be powered.
The PoE switch sits in my office next to the rest of the core network gear (modem, router, AP).
The first revision of the PoE-powered switches came with a bit of… extra cable. Once the proof of concept was working, I purchased some new Ethernet cords and set it up for good.
I set up link aggregation between the switches to improve bandwidth between the two segments of the network. This was all simple using the UniFi controller.
I spent a few months looking at servers for a new vhost, trying to determine what I would need and what I could utilize. I was running out of RAM on my NAS for virtual machines, and I kept running into times when I wanted to reboot the NAS for updates but didn't want to take down my entire lab. So I decided to build a new server to host the virtual machines that are less reliant on the NAS (IRC bots, websites, wiki pages, etc.) as well as other more RAM-intensive systems. The new server was to support a large amount of RAM (a maximum well above 32GB), which would give me plenty of room to play. I plan on leaving some services running on the NAS, which has plenty of resources to handle a few extra duties like Plex and backups. The new vhost was also to be all-flash; the NAS uses a WD Black drive as its virtual machine host drive, which slows down when running updates on multiple machines. In the future I may also upgrade the NAS to an SSD for its cache drive and VM drive.
Here is what I purchased:
2x Intel Xeon E5-2620
Silicon Power 60GB
Samsung 850 EVO 500GB
Mellanox ConnectX-2 10G SFP+
The tower workstation was chosen because it is a quiet system that supports ECC RAM and offers a fair bit of expandability. The 10G NIC is hooked up as a point-to-point link to the NAS, and the motherboard's 1G NIC is set up as a simple connection to the rest of my LAN. The VM SSD is formatted as ext4 and added to Proxmox as a directory. All VMs will use qcow2 disk files, as they support thin provisioning and snapshots. I considered LVM-Thin storage, but it seemed to add complexity without any feature or performance gains.
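Thin provisioning means a qcow2 file only consumes space the guest has actually written, which is easy to verify with `qemu-img info --output=json` (the `qemu-img` tool ships on a Proxmox host). A small sketch, assuming a readable image path:

```python
import json
import subprocess

def thin_ratio(info):
    """virtual-size is what the guest sees; actual-size is on-disk usage.
    A ratio well below 1.0 means the image is still thinly provisioned."""
    return info["actual-size"] / info["virtual-size"]

def disk_usage(image_path):
    """Query qemu-img for a qcow2 image and report its allocation ratio."""
    out = subprocess.run(
        ["qemu-img", "info", "--output=json", image_path],
        capture_output=True, check=True, text=True,
    ).stdout
    return thin_ratio(json.loads(out))
```

Worth watching over time: snapshots and guest writes grow the actual size, and a qcow2 file never shrinks on its own without a trim/compact pass.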
I have been fighting failing parity checks for a few months now on my unRAID server. I looked into each disk, checked SMART stats, and even thought I had found the culprit hard drive causing the issues. I kept it in the array, but with no data on it, just in case. This all happened just before another set of problems arose: the VMs on my server started acting up and crashing, and eventually, while logging into one VM, everything crashed due to memory problems. I ran Memtest and discovered that one of my RAM sticks was the issue, and from there determined that it simply wasn't seated properly. After reseating the RAM, everything started working properly again. Parity checks come back clean, there are no more kernel panics, and the VMs are running stably. One partially unseated RAM stick caused all of those issues.
I was originally excited when Docker was announced for the next release of unRAID; the concept behind it was solid and sounded like it would make managing my server easier. That was the case for months, before Docker started acting up. Now I've been working on a way to remove any need for Docker on my NAS, moving it to a VM or another server due to its instabilities. Issues I've run into include Docker being unable to stop running containers, start stopped containers, or create new containers, and preventing Linux from shutting down. I could live with all of the above except the shutdown bug. It doesn't just prevent shutdown from running; it prevents the kernel from shutting down at all, well after the user shells are offline, so there's no way to manually kill Docker to let the system shut down safely. This is exceptionally frustrating and has caused unclean shutdowns when I've lost power, and even during routine maintenance, since the only way to restart when Docker does this is a hard reset. I'm not giving up hope on containers; I'm just going to be a bit more careful around Docker, which seems to be advertised rather better than users' experience with the software would suggest.
I started my original NAS build with inexpensive, quality consumer components, but by now it has become a strange chimera of enterprise and consumer gear. The main goals: low power, quiet operation, and high storage density.
With that focus, the main decision was the case: eight HDD bays were the minimum, and having a few 5.25″ bays allowed me to use a 5×3 cage to add more. Some research also showed that another stack of HDD cages can be added to the case with relative ease, bringing the total number of disks held to roughly 21.
Fractal Design Define XL R2
Intel Xeon E3-1245 v2
Gskill Ripjaws X (32GB total)
Intel Pro/1000 VT, Chelsio dual port 10G SFP+
Norco 5 x 3.5″ HDD Cage
The HDD list is a bit eclectic; I have used whatever was on sale and cheap.
The NAS hosts a couple of virtual machines and Docker containers, and it runs unRAID as the operating system. The motherboard was picked for its low price and high expandability: the mATX board has three PCIe x16 slots, which will easily handle HBAs, 10Gb networking, and so on.
The 5×3 cage added a good deal of storage density to the case. I replaced its original fan with a Noctua to reduce noise.
Here's a shot of the 5×3 cage from the outside. It is not hot-swap capable and has no backplane, which reduces complexity and potential failure points; I don't need either feature, so the cage cost very little. I did drill new mounting holes in the case to set it back about half an inch, which allows the fan to pull air in through the slots on the side of the case.
A bit dusty, but it keeps running. The HBA is mounted in a PCIe x16 slot.
The case is a giant Fractal Design Define XL R2, lined with sound insulation, making it a nice quiet obelisk.