Connecting InspIRCd and Anope on Docker Swarm

I encountered some issues when connecting Anope and InspIRCd on Docker Swarm. Running them on a single node with docker-compose, the services connected to InspIRCd just fine; running in a swarm, however, InspIRCd's IP filtering in the default services link block got in the way.

In order for InspIRCd and Anope to talk, I had to comment out part of the services link block. The allowmask and expected IP address didn't quite work as intended inside the swarm cluster, presumably because the overlay network doesn't give the Anope container a predictable source address. Since the containers use a local network available only to them, I'm not worried about a rogue services server getting in. This IRC server is also used primarily for developing and testing the IRC bots I have written over the years, so it's not a core part of my infrastructure. The link block I modified is the default one included in the InspIRCd Docker container setup.

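The sketch below shows the shape of the change rather than my exact config; the server name, port, and passwords are placeholders, and the commented-out allowmask and ipaddr lines are the part that mattered for the swarm:

<link name="services.localdomain"
      # Commented out: the swarm overlay network doesn't present a
      # predictable source IP for the Anope container, so these checks
      # rejected the connection.
      # allowmask="127.0.0.0/8"
      # ipaddr="172.17.0.2"
      port="7000"
      timeout="5m"
      sendpass="changeme"
      recvpass="changeme">
<uline server="services.localdomain" silent="yes">

With those lines commented out, Anope links up as soon as both containers are running on the overlay network.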

Home Assistant Supervised

I was originally running a Home Assistant VM using a script that loaded HASS into Proxmox; however, the HASS.io image didn't have the tooling I like having on a machine for general system management. It took some research and a few wrong turns to find the right installation script, but I did, and the commands are included below. The following shell commands and script install Home Assistant Supervised on a generic x86 machine. This worked fine on Ubuntu 18.04 LTS.

# become root for the rest of the install
sudo su

# dependencies the supervised installer expects
apt-get install -y software-properties-common apparmor-utils apt-transport-https avahi-daemon ca-certificates curl dbus jq network-manager socat

# ModemManager can grab serial/USB devices (Z-Wave and Zigbee sticks), so turn it off
systemctl disable ModemManager
systemctl stop ModemManager

# install Docker
curl -fsSL get.docker.com | sh

# run the supervised installer for a generic x86-64 machine
curl -sL "https://raw.githubusercontent.com/Kanga-Who/home-assistant/master/supervised-installer.sh" | bash -s -- -m qemux86-64

The default port for HASS is 8123, so browse to http://<server IP>:8123 to finish configuration. This gives you all the benefits of the HASS.io images for Raspberry Pi and Intel NUCs, but on your own preferred operating system rather than the very limited build that ships with the official Home Assistant images.
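
If you want to confirm the install finished before opening the browser, check that the Supervisor and core containers came up. The container names here are the ones the supervised installer normally creates; adjust the grep if yours differ:

docker ps | grep -E 'hassio_supervisor|homeassistant'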

SQLite Resolution on Unraid

Like many, I had issues with SQLite database corruption on Unraid. While researching it, I found it had to do with file locking in the FUSE file system Unraid uses to merge disks. The best way I found to circumvent this is to keep those databases on the cache disk and map it directly: my Docker mappings now point to /mnt/cache/share instead of /mnt/user/share. This bypasses the FUSE layer completely and solved the stability issues for me, better than the 6.8.0-rc5 fixes did.
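
As a concrete example (the app and share names here are only illustrative, not my actual containers), the change is just swapping the host side of the volume mapping:

# before: goes through Unraid's FUSE /mnt/user layer
-v /mnt/user/appdata/someapp:/config
# after: hits the cache disk directly, sidestepping the locking problem
-v /mnt/cache/appdata/someapp:/config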

NextCloud Setup: Joplin

Joplin was relatively simple to set up on NextCloud. The only difficulty is setting the folder it should sync out of, which is done via the WebDAV URL that Joplin connects to. An example of the URL format is shown below.

https://nextcloud.rapternet.us/remote.php/dav/files/<username>/<path to Joplin directory>

The WebDAV URL just needs your username and the path Joplin should use; in my case, Joplin syncs to Documents/Joplin. That path goes into the template above, then add your username and it's done. The URL does have to be set up the same way on all Joplin clients, or you will end up with files missing from one client or another, even though they will all be in NextCloud.
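
Filled in with my Documents/Joplin path and a placeholder username ("jdoe" is just an example, not a real account), the full URL looks like this:

https://nextcloud.rapternet.us/remote.php/dav/files/jdoe/Documents/Joplin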

NextCloud Setup: Desktop Client

Setup is easy enough IF HTTP does not redirect to HTTPS. For some reason, the authentication mechanism in the NextCloud desktop client doesn't work when HTTP is redirected to HTTPS (for example, the way Let's Encrypt sets things up by default). With the server configured so that redirect is out of the way, everything is straightforward.
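
A quick way to check whether that redirect is in play before pointing the desktop client at the server (status.php is NextCloud's standard status endpoint; substitute your own hostname):

curl -I http://nextcloud.rapternet.us/status.php
# a 301/302 response here means HTTP is being redirected to HTTPS,
# which is exactly the setup that breaks the desktop client login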

The Side Effects of RAM Issues

I have been fighting failing parity checks on my Unraid server for a few months now. I looked into each disk, checked SMART stats, and even thought I had found the culprit hard drive causing the issues; I kept it in the array, but with no data on it, just in case. This all happened just before another set of problems arose. The VMs on my server started acting up and crashing, and eventually, while I was logged into one VM, everything crashed due to memory problems. I ran Memtest and discovered that one of my RAM sticks was at fault, and from there determined that it simply wasn't seated properly. After reseating the RAM, everything started working properly again: parity checks come back clean, there are no more kernel panics, and the VMs are running stably. One partially unseated RAM stick caused all of those issues.