POE Powered Home Network Core

I recently purchased a house with Ethernet wired to every room. This was great: super easy to set up a robust network. Except for two problems. First, there was a Legrand TM1045 at the center of it, a phone line splitter that is useless for an actual network. Second, there is no power where the Ethernet switch would need to go. I would have to drop power into the room, use an extension cord, or try something completely different: power over Ethernet.

I wanted to avoid calling someone to install an outlet at the center of the network, so I built a POE powered core fed from a switch in my office. The POE powers both switches in the core (I couldn't find any sort of 16-port POE-powered option, and I was trying to stick with UniFi gear). Now the network runs on managed switches instead of the Netgear units I was using before, and there is no more extension cord.

DNS Structure

The goal of the DNS structure of my lab was primarily to create a very stable foundation. Second to that was the addition of two services: a local DNS server to avoid loopback issues with my ISP, and Pi-hole ad blocking.

To set this up, UCS (Univention Corporate Server) was chosen as the domain controller/DNS server over FreeIPA. Linux was chosen as the platform since that is what the majority of my systems run, and I don't have any Windows Server licenses. UCS was installed to a VM, and a second Ubuntu VM was configured with Pi-hole. Clients were configured to send queries to the local servers first, with everything else falling through to a public resolver. If one of my local DNS servers is down, the clients won't notice a change, as everything uses Google DNS as backup.
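
As a rough sketch of the resulting resolver order, a client ends up with something like the following (the addresses here are placeholders, not my actual servers), whether set statically or handed out over DHCP:
># /etc/resolv.conf
># local UCS domain controller / DNS
>nameserver 192.168.1.10
># local Pi-hole
>nameserver 192.168.1.11
># Google DNS as backup
>nameserver 8.8.8.8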

Installing OctoPi on an RPi

3D printers have come a long way in the past few years. Prices have plummeted for basic units, putting them within reach of just about anyone. A Raspberry Pi can be set up alongside a basic 3D printer to enable some amazing functionality: remote control, management, and monitoring of the printer. When combined with a Pi Cam, you can even create time lapses of your prints. To do this, we will be running OctoPi on the Raspberry Pi.

OctoPi is a Raspberry Pi distribution for 3D printers. Out of the box it includes:

  • the OctoPrint host software including all its dependencies and preconfigured with webcam and slicing support,
  • mjpg-streamer for live viewing of prints and timelapse video creation with support for USB webcams and the Raspberry Pi camera,
  • CuraEngine 15.04 for direct slicing on your Raspberry Pi and
  • the LCD app OctoPiPanel plus various scripts to configure supported displays

Getting Started

  1. You will need a Raspberry Pi
  2. Compatible Webcam or Pi Cam
  3. SD or Micro SD card for the Pi (larger is better if you plan on storing the images/videos on it)
  4. OctoPi Image for the Pi (https://octopi.octoprint.org/latest)
    1. Make sure to download the correct version for your Pi
  5. Installation of OctoPi can be done in the same way as for MotionEyeOS, covered in the MotionEyeOS guide below

Configuration

  1. Hook the Pi up to a monitor and a network connection and power it up.
  2. Once the Pi is done booting, it will show a login prompt. Above this prompt is network information about the Pi, including the IP address; in this case it is 192.168.1.26. This will be needed to access the WebUI of OctoPrint.
  3. Navigate your web browser to the IP address shown.
  4. Access control will be the first thing to set up in OctoPrint. This can be done by filling out your username and password in the dialog that shows up. If access control is not desired, hit “Disable Access Control”; otherwise click “Keep Access Control Enabled”. For this example, access control will remain enabled in order to prevent unauthorized users from running the 3D printer.
  5. After that dialog, the primary control screen for OctoPrint will display. You will not be logged in yet, however.
  6. Clicking Login and typing in the username and password previously set up will give you the full range of configuration settings for OctoPrint.
  7. After logging in, you will see a success message, and if updates are available, a message will display stating that as well. We will install updates before continuing in order to have the most up to date software available.
  8. Click through the update dialogs.
  9. After proceeding, a flag in the corner will state that the update is in progress while the Pi runs it.
  10. The web browser will attempt to reconnect periodically as the server reboots. Once it has completed rebooting, you will get a prompt to reload the page.
  11. Clicking Reload Now will bring up any changes that should be addressed. For this version of OctoPrint, there are new settings for Cura. These aren’t covered in this tutorial; clicking “Finish” will close the dialog.
  12. We are now at the point where we can start working with the printer and its connection to OctoPrint.
  13. The first step is to determine the connection properties needed for your printer. For this example, we will use an Ultimaker 2. There should only be one serial port available. The baudrate should be supplied by the manufacturer; if not, it can either be looked up online or found by trial and error. The printer profile will be the default profile. If the connection is successful, the settings can be saved and set to auto-connect on OctoPi’s startup.
  14. Once successfully connected, you should start getting data back from the printer. The temperature graph will update with the hot end temperature and bed temperature (if available).
  15. The control tab will allow you to manually control the printer, moving the head in all directions, along with extruding the filament.
  16. Further settings can be found in the settings menu in OctoPrint, which has the wrench icon in the top bar of the GUI.
  17. GCode files for printing can be uploaded via the Files panel in OctoPrint, or scripted over its REST API (see the sketch below)
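
As an aside, OctoPrint exposes a REST API, so uploads can be scripted instead of going through the web UI. A minimal sketch using curl (the API key comes from the OctoPrint settings; the IP address matches the example above and the filename is just a placeholder):
>curl -H "X-Api-Key: YOUR_API_KEY" -F "file=@benchy.gcode" -F "print=false" http://192.168.1.26/api/files/local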

Installing MotionEyeOS on an RPi

Raspberry Pis are neat little computers that can be placed just about anywhere, assuming there is power and network connectivity nearby. They were made even more convenient with the addition of built-in Wi-Fi on the Raspberry Pi 3. One application of these small devices is home security: a small motion sensing webcam that can record 24/7, or only when motion is detected.

There are ways to build up your own system using the basic Raspbian distribution and various software packages, or you can use an operating system custom built for this purpose: MotionEyeOS.

MotionEyeOS has everything needed to run a security camera system, or simply a remote webcam monitoring setup. This tutorial will cover setting up a camera for basic recording and monitoring, which will let us spy on our dog while away at work.

Getting Started

  1. You will need a Raspberry Pi
  2. Compatible Webcam or Pi Cam
  3. SD or Micro SD card for the Pi (larger is better if you plan on storing the images/videos on it)
  4. MotionEyeOS Image for the Pi (https://github.com/ccrisan/motioneyeos/releases)
    1. Make sure to download the correct version for your Pi

Installation

This will cover the process for installing the image to the SD card on Windows. There are also guides for Mac and Linux available from the Raspberry Pi Foundation here: https://www.raspberrypi.org/documentation/installation/installing-images/README.md

  1. Install Win32 Disk Imager
  2. Start the application
  3. Click the folder icon and navigate to your MotionEyeOS disk image
  4. Ensure the Device listed is the SD card you want to format with the image, in this case, it is the G:\ Drive
  5. Finally, hit the write button, and the image will be written to the SD card, which is all that’s needed to install MotionEyeOS.
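
For reference, the same write can be done from a Linux machine with dd instead of Win32 Disk Imager. This is only a sketch: the image filename is a placeholder, and /dev/sdX must be replaced with your actual SD card device (writing to the wrong device will destroy its data).
># write the extracted image to the SD card (device name is a placeholder)
>sudo dd if=motioneyeos.img of=/dev/sdX bs=4M status=progress
>sync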

Configuration

  1. Hook the Pi up to a monitor and a network connection and power it up.
  2. Once the Pi is done booting, it will show a login prompt. Above this prompt is network information about the Pi, including the IP address; in this case it is 192.168.1.26. This will be needed to access the WebUI of MotionEyeOS.
  3. After finding the IP address, navigate to it in your web browser of choice.
  4. To log in, enter the username “admin” and hit Login. There is no password by default.
  5. Click on the menu button in the top left corner to pull up the options
  6. Here you will be able to change the admin username and password, as well as the surveillance username and password. The surveillance user is a basic user who can view the webcams but not change settings.
  7. To add a camera, plug in a USB webcam or pi cam and hit the dropdown next to the menu button. From here you can select cameras that have been already setup, or you can add a camera.
  8. The webcam should show up as a camera; if it does not, you may need to install drivers for it.
  9. After adding the webcam, you can navigate to it and its settings by going through the same dropdown
  10. Once on the camera page, you can access a wide range of settings. These let you configure working hours, motion detection, save locations for any videos or images, and more. If more advanced settings are needed, there is an option under General Settings to enable them. Once settings have been modified, click the Apply button in the top right corner of the settings pane to enable them.
  11. Below are screenshots of the system in action

By default, SSH and FTP are enabled on MotionEyeOS, so if you want to add services or install drivers, you can do so remotely. This also allows you to move any security footage off the Pi if it is being used to store the footage.
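
For example, with SSH enabled, footage can be pulled off over scp. A rough sketch (the username, IP address, and paths are placeholders; the real source path is whatever was set as the file storage location in the camera settings):
>scp -r admin@192.168.1.26:/data/output/Camera1 ~/dog-footage/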

Network Setup Evolution

The network in the house has never really sat well with me, starting with the pre-installed phone splitter and the gaping hole the contractors left in the wall for the cables. It all came to a head when I realized there were NO outlets near the Ethernet runs for the entire house.

Revision 1

The first revision of my network started with a single 8-port switch on a shelf with an extension cord running under the door. This was sub-optimal for a few reasons: first, there are 10 ports to connect and this was an 8-port switch, which is easy to fix with a bigger switch; second, and more importantly, the power came from an extension cable going out of the closet. This was an ugly setup and needed to be updated.

The power ran from the switch’s wall wart to an extension cord that went out of the closet and into a bathroom in order to actually get power for the network.

Revision 2

I removed the need for a power cable by swapping to a POE setup. My office runs a POE switch, and the switch in the closet was swapped for one that can be powered by POE. The hardware for this is UniFi switches: the USW8-150W to provide power, and two USW8s to be powered.

The POE switch sits in my office next to the rest of the core network gear (modem, router, AP).

The first revision of the POE-powered switches came with a bit of… extra cable. Once the proof of concept was working, I purchased some new Ethernet cords and set it up for good.

I set up link aggregation between the switches in order to improve bandwidth between the two segments of the network. This was all simple using the UniFi controller.

VHost Build

I spent a few months looking at servers for a new VHost, trying to determine what I would need and what I could utilize. I was running out of RAM on my NAS to run virtual machines, and I kept running into times when I wanted to reboot the NAS for updates but didn’t want to take down my entire lab. So I decided to build a new server to host the virtual machines that are less reliant on the NAS (IRC bots, websites, wiki pages, etc.) as well as other more RAM-intensive systems.

The new server was to have a large maximum RAM capacity (>32GB), which would give me plenty of room to play. I plan on leaving some services running on the NAS, which has plenty of resources to handle a few extra duties like Plex and backups. The new VHost was also to be all flash; the NAS uses a WD Black drive as the virtual machine host drive, which ends up slowing down when running updates on multiple machines. I may also upgrade the NAS to an SSD for its cache drive and VM drive in the future.

Here is what I purchased:

System Dell T5500
CPU 2x Intel Xeon E5-2620
PSU Dell 875W
RAM 64GB ECC
SSD (OS) Silicon Power 60GB
SSD (VM) Samsung 850 EVO 500GB
NIC Mellanox ConnectX-2 10G SFP+

The tower workstation was chosen because it is a quiet system, supports ECC RAM, and offers a fair bit of expandability. The 10G NIC is hooked up as a point-to-point link to the NAS. The motherboard’s 1G NIC is set up as a simple connection to the rest of my LAN. The VM SSD is formatted as ext4 and added to Proxmox as a directory. All VMs use qcow2 disk files, as they support thin provisioning and snapshots. I considered LVM-Thin storage, however it seemed to add complexity without any feature or performance gains.
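
A rough sketch of what that storage setup looks like from the command line (the device name, mount point, and storage ID are examples, not necessarily what I used):
># format the VM SSD as ext4 and mount it (plus an /etc/fstab entry to make it persistent)
>mkfs.ext4 /dev/sdb
>mkdir -p /mnt/vm-ssd
>mount /dev/sdb /mnt/vm-ssd
># register it with Proxmox as directory storage for disk images
>pvesm add dir vm-ssd --path /mnt/vm-ssd --content images
Proxmox then creates a qcow2 file on that directory storage for each VM disk.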

The side effects of RAM issues

I have been fighting failing parity checks on my unRAID server for a few months now. I looked into each disk, checked SMART stats, and even thought I had found the culprit hard drive causing the issues. I still had it in my array, but with no data on it, just in case. This all happened just before another set of problems arose: the VMs on my server started acting up and crashing, and eventually, when logging into one VM, everything crashed due to memory problems. I ran memtest and discovered that one of my RAM sticks was at fault, and from there determined that it simply wasn’t seated properly. After reseating the RAM, everything started working properly again. Parity checks come back clean, there are no more kernel panics, and the VMs are running stably. One partially unseated RAM stick caused all of those issues.

Docker Struggles

I was originally excited when Docker was going to be included in the next release of unRAID; the concept behind it was solid and sounded like it would make management of my server easier. That was the case for months, before Docker started acting up. Now I’ve been working on a way to remove any need for Docker on my NAS, moving it to a VM or another server due to its instabilities.

Issues I’ve run into include it being unable to stop running containers, start stopped containers, or create new containers, and preventing Linux from shutting down. I could live with all of the above except the shutdown bug. It doesn’t just prevent the shutdown scripts from running; it prevents the kernel from shutting down at all, well after the user shells are offline, so there’s no way to manually kill Docker to allow the system to shut down safely. This is exceptionally frustrating and has caused unclean shutdowns when I’ve lost power and even when I’m just doing maintenance, since the only way to restart when Docker does this is a hard reset. I’m not giving up hope on containers, I’m just going to be a bit more careful around Docker; the marketing seems to outpace the issues people have had with the software.

NAS Build

I started my original NAS build with inexpensive, quality consumer components, but by now it has become a strange chimera of enterprise and consumer gear. The main goals: low power, quiet, and high storage density.

With that focus, the main decision was the case: eight HDD bays were the minimum, and having a few 5.25″ bays allowed me to use a 5×3 cage to add more. From some research, another stack of HDD cages can also be added to the case with relative ease, bringing the total number of disks held to roughly 21.

Case Fractal Design Define XL R2
CPU Intel Xeon E3-1245 v2
Motherboard Asrock Pro-4m
PSU Antec BP550
RAM Gskill Ripjaws X (32GB total)
HBA LSI 9201-16i
HDD Various
NIC Intel Pro/1000 VT, Chelsio dual port 10G SFP+
Extras Norco 5 x 3.5″ HDD Cage

The HDD list is a bit eclectic; I have used whatever was on sale and cheap.

Brand Size Model
WD 2TB Green
WD 2TB Green
WD 2TB Green
Seagate 3TB Barracuda
WD 3TB Red
WD 3TB Green
Seagate 4TB Barracuda
HGST 6TB NAS
Toshiba 6TB x300
WD 500GB Black

The NAS is used to host a couple of virtual machines and Docker containers, and it runs unRAID for the operating system. The motherboard was picked for its low price and high expandability. The mATX board has 3 PCIe x16 slots, which easily handle HBAs, 10Gb networking, etc.

The 5×3 cage added a good bit of storage density to the case. I replaced the original fan with a Noctua to reduce noise.

Here’s a shot of the 5×3 from the outside. It is not a hot-swap capable unit and has no backplane. This reduces some of the complexity and potential failure points in the unit. I don’t need any of that anyway, so this 5×3 cage cost very little. I did drill new mounting holes in the case to set it back about half an inch, which allows the fan to pull in air from the slots on the side of the case.

A bit dusty, but it keeps running. The HBA is mounted in a PCIE x16 slot.

The case is a giant Fractal Design Define XL R2. It is lined with sound insulation, making it a nice quiet obelisk.

Ansible Setup

Having been running short on time to maintain my servers, I decided to look into some automation on that front. I came across Ansible, which allows configuration management and software installation across multiple servers using software that is already pre-installed: Python and SSH.

Setting up Ansible is the easy part. This can be done by simply giving the Ansible host SSH key based access to all the machines it will be managing. I set it up with root access to those machines so that it could do mass updates without prompting for dozens of passwords, and because I don’t have Kerberos or a domain based login system.

Prerequisites on the systems:

  • Software to install before using Ansible on a system (a quick install sketch follows this list)
    • python (2.x)
    • python-apt (on debian based systems)
    • aptitude (on debian based systems)
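
On Debian/Ubuntu machines, that boils down to something like the following (a sketch, assuming a release old enough to still ship Python 2):
>sudo apt-get install python python-apt aptitude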

Installing Ansible on the host machine was as easy as yum install ansible, though for a more recent version, one can install it from the Git repository. This is covered in many other locations, so it won’t be covered here.

Run the following commands on the machines that will be managed by Ansible:
>mkdir -p ~/.ssh
>touch ~/.ssh/authorized_keys
>sudo su
>mkdir -p ~/.ssh
>touch ~/.ssh/authorized_keys
>chmod 700 ~/.ssh
>chmod 700 /home/administrator/.ssh

These commands create the .ssh directory if it doesn’t already exist and put the authorized_keys file in it. They then lock down the .ssh directory, as is needed for the SSH server to trust that the files within haven’t been compromised. If the directory were left with the default read permissions for group and other, the SSH server wouldn’t let us log in using SSH keys.

Run the following commands on the Ansible host:
>ssh-keygen -t rsa
>cat ~/.ssh/id_rsa.pub | ssh administrator@192.168.1.23 "touch ~/.ssh/authorized_keys && cat >> ~/.ssh/authorized_keys"

These commands generate a public/private key pair to use for SSH key based logins to the systems. The second command then copies the public key over to the machine we will be managing.

Run the following command as root on the machines that will be managed by Ansible:
>cat /home/administrator/.ssh/authorized_keys >> ~/.ssh/authorized_keys

This command copies the public key from the local user to the root user, allowing Ansible to log in as root and manage the machines.
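
As a side note, if password logins are still allowed for the account in question, ssh-copy-id can collapse the key distribution into one step per host (the address is the same example as above):
>ssh-copy-id administrator@192.168.1.23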

Once SSH key based logins are enabled on all the machines, they will need to be added to the Ansible hosts file (by default /etc/ansible/hosts). This file tells Ansible the addresses of all the machines it should be managing. It is a basic text file and can easily be modified with nano. Add a group (a header of “[Group-Name]”) to the file with your hosts underneath it; an example is shown below.
>[rapternet]
>shodan ansible_ssh_host=192.168.2.1
>pihole ansible_ssh_host=192.168.2.2
>webserv ansible_ssh_host=192.168.2.3
>matrapter ansible_ssh_host=192.168.2.4
>unifi ansible_ssh_host=192.168.2.5

This adds the rapternet group with shodan, pihole, webserv, matrapter, and unifi servers in it.

One easy way to test the Ansible setup is to ping all the machines:
>ansible -m ping all

This tests their basic setup with Ansible (Python 2.x present on each machine, and the SSH connection functioning).

As can be seen, all servers report that nothing has changed (which is to be expected with a simple ping) and that a pong response was sent back to the host. This is a successful test of the Ansible setup.
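
With that working, the mass updates mentioned earlier can be handled with ad-hoc commands as well. A sketch for Debian/Ubuntu hosts using the apt module (the group name matches the example inventory above; since Ansible logs in as root here, no privilege escalation flags are needed):
>ansible rapternet -m apt -a "update_cache=yes upgrade=dist"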
