Connecting Inspircd and Anope on Docker-Swarm

I encountered some issues when connecting Anope and Inspircd on the docker swarm. When running them on a single node using docker-compose, the services connected to Inspircd just fine; however, when running in a docker swarm, Inspircd's default services XML block filtered out connections by IP.

In order for Inspircd and Anope to talk, I had to comment out parts of the services link block. The allowmask and expected IP address didn't work as intended inside the docker swarm cluster. Since the containers use a network available only to them, I'm not worried about a rogue services server getting in. This IRC server is also primarily used for developing and testing the IRC bots I have written over the years, so it's not a core part of my infrastructure. Below is the link block that I modified from the default included in the inspircd docker container setup.
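As a rough approximation (not my exact file; the server name, port, and passwords here are placeholders), the modified block ends up looking something like this:

# allowmask and ipaddr dropped from the default block: the swarm overlay
# network hands out addresses the defaults didn't expect
<link name="services.localdomain"
      port="7000"
      timeout="5m"
      sendpass="changeme"
      recvpass="changeme">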

Continue reading “Connecting Inspircd and Anope on Docker-Swarm”

Apache/PHP Update

Back when I set up my webserver to run WordPress and DokuWiki, all of the guides I found and followed had Apache set up with mod_php. However, after seeing this ArsTechnica WordPress Writeup, I realized that this might not have been the best way to get things going. I decided this would be a good time to upgrade my webserver, so I started working on it. While my server was already set up and the guide was on setting up a new server, the process is nearly identical; the only difference is that I had to disable/enable modules in a certain order for everything to work.
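For reference, the switch from mod_php to PHP-FPM on a Debian/Ubuntu-style Apache install boils down to a module shuffle along these lines (the PHP version and exact ordering here are illustrative, not necessarily what my server needed):

# stop serving PHP through mod_php, which requires the prefork MPM
sudo a2dismod php7.4 mpm_prefork

# switch to the event MPM and proxy PHP requests to PHP-FPM instead
sudo a2enmod mpm_event proxy_fcgi setenvif
sudo a2enconf php7.4-fpm

sudo systemctl restart apache2 php7.4-fpm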

Continue reading “Apache/PHP Update”

Automated Testing in Jenkins

After setting up Jenkins to auto build and deploy my IRC bots, I decided to add the next component of a CI/CD stack: automated testing. The plan was to use ant to run the tests and JUnit as the testing framework for the application. I'll be setting this up in both of my Jenkins projects: one that just builds my applications, and a second that builds the docker containers and pushes them to the repository.

My Jenkins setup includes 2 builds for each of my projects:

  • Java Build
    • Builds the java application
    • This is just a general Jenkins project that uses ant to build
  • Docker Build
    • Builds the docker container, tags it, and pushes it to my local docker repository
    • This uses Jenkins pipelines to perform the build, tag, push

The docker build pushes my production code and is used by my docker-swarm to update my locally built containers. This is the important build. The general java build is just me experimenting with Jenkins builds following the non-pipeline route.
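As a rough sketch of where the testing side is headed, an Ant target along these lines runs the JUnit tests and writes XML reports that Jenkins can publish (the target, property, and path names here are placeholders, not what my build files actually use):

<target name="test" depends="compile-tests">
    <mkdir dir="${reports.dir}"/>
    <junit printsummary="yes" haltonfailure="yes">
        <classpath refid="test.classpath"/>
        <!-- XML output is what the Jenkins JUnit plugin consumes -->
        <formatter type="xml"/>
        <batchtest todir="${reports.dir}">
            <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
        </batchtest>
    </junit>
</target>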

Continue reading “Automated Testing in Jenkins”

Jenkins Docker Revisit

After my initial jenkins setup, I thought my system would be good to go for a long time; however, I encountered a permissions problem after my docker cluster reboot. Once all my nodes were back up and jenkins was running, it could no longer access the docker.sock that it used to handle building and pushing containers. I tried a few things (rebuilding the container, updating it, changing some groups) and found quite a few threads on the topic. Some people had chmod'd the docker.sock to 777 (BAD) or had given jenkins root (ALSO BAD). The solution I ended up with was an entrypoint script that determines which group owns docker.sock, adds the jenkins user to it, and then drops from root to the jenkins user to launch Jenkins.

Most of my additions are from sudo-bmitch’s jenkins-docker repository on GitHub. These include the dockerfile changes and the entrypoint.sh script (as well as the healthcheck mentioned later on).
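The real script lives in that repository; as a sketch of the idea (not the actual file, and assuming a tool like gosu is available in the image to drop privileges), the entrypoint boils down to something like this:

#!/bin/sh
# runs as root so it can adjust groups, then hands off to the jenkins user

SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    DOCKER_GID=$(stat -c '%g' "$SOCK")
    # create a group matching the socket's GID if one doesn't already exist
    getent group "$DOCKER_GID" >/dev/null || groupadd -g "$DOCKER_GID" docker-host
    # add jenkins to whatever group owns the socket
    usermod -aG "$(getent group "$DOCKER_GID" | cut -d: -f1)" jenkins
fi

exec gosu jenkins /usr/local/bin/jenkins.sh "$@"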

Continue reading “Jenkins Docker Revisit”

Docker Cluster Reboot

Due to unforeseen events, I had to shut down all of my servers. This gave my docker VM cluster its first chance to reboot the entire stack across all nodes. Sadly, the reboot exposed some problems in my configuration, as well as a docker issue I had previously encountered in my raspberry pi cluster.

Continue reading “Docker Cluster Reboot”

Raspberry Pi Docker Cluster

I’ve always wanted to experiment with clustering technologies. I tried setting up a kubernetes cluster, but that ended in failure, so for this next experiment I went with something simpler to deal with: docker swarm. Since docker and swarm are supported on the raspberry pi, and since I had a number of raspberry pis not in use, I decided to use them for the cluster.
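Standing up the swarm itself is only a couple of commands; roughly (the IP and token below are placeholders):

# on the first pi, which becomes the manager
docker swarm init --advertise-addr 192.168.1.10

# on each additional pi, using the token printed by the init command
docker swarm join --token <worker-token> 192.168.1.10:2377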

I had printed a 2U rack mount kit for raspberry pis, and I felt like this would be the perfect time to make use of it. I racked up two raspberry pi 3B+ units with PoE hats (more on that later) to use for the docker cluster, and added Samsung 32GB micro SD cards for storage.

Continue reading “Raspberry Pi Docker Cluster”

Home Assistant Supervised

I was originally running a Home Assistant VM using a script that loaded HASS into proxmox; however, the HASS.io image didn’t have the tooling that I like having on a machine for general system management. It took some research and a few wrong turns to find the right installation script, but I did, and I’ve included the commands below. The following shell commands and script install Home Assistant Supervised on a generic x86 machine; this worked fine for me on Ubuntu 18.04 LTS.

# become root for the rest of the setup
sudo su

# dependencies for the supervised installer
apt-get install -y software-properties-common apparmor-utils apt-transport-https avahi-daemon ca-certificates curl dbus jq network-manager socat

# ModemManager interferes with USB serial devices (Z-Wave/Zigbee sticks)
systemctl disable ModemManager
systemctl stop ModemManager

# install docker using the official convenience script
curl -fsSL get.docker.com | sh

# run the supervised installer for a generic x86-64 machine
curl -sL "https://raw.githubusercontent.com/Kanga-Who/home-assistant/master/supervised-installer.sh" | bash -s -- -m qemux86-64

The default port for HASS is 8123, so go to the server’s IP on port 8123 to finish configuration. This gives you all the benefits of the HASS.io images for raspberry pis and intel NUCs, but on your own preferred operating system rather than the very limited build that’s included with the prebuilt home assistant images.

NVR Build

While using MotionEyeOS on a raspberry pi 3 worked for a while, eventually I needed a bit more hardware to handle some of the security camera goals I had. Since I already had my hand-crafted 19in rack, I wanted to find a decent rack mount server for it, preferably a SuperMicro.

I ended up finding a Redwood Director RDIR-1G on ebay for a good price. After doing some research, I found out that it was a SuperMicro box, specifically a 5018D-MF. This is a solid little 1U half-depth box with an E3 server processor in it. The listing claimed it had 32GB of RAM; however, I found out later that it only had 8GB. It was still a pretty decent deal, so I went with it.

Of course, once it arrived, what’s the first thing we do? Open it up.

Initial Look Inside
Continue reading “NVR Build”

Virtualized Docker Swarm

Why

Why not? In reality, I’ve always wanted to play with clustering, originally with proxmox and ceph, but I never had enough hardware to do so. I do, however, have a proxmox node with enough RAM to host multiple lightweight nodes.

Docker swarm is lightweight enough that I can virtualize the entire cluster on my single proxmox host. While this isn’t fault-tolerant like a cluster spread across multiple physical machines, it does mean I can reboot cluster nodes for kernel updates while maintaining uptime. I can also add docker swarm nodes on separate hardware if I acquire more, and the cluster load balances which services run on which node.
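That rolling-reboot workflow is only a couple of commands per node; roughly (the node name here is a placeholder):

# move services off the node before rebooting it
docker node update --availability drain swarm-node-1

# reboot the VM, wait for it to come back, then return it to rotation
docker node update --availability active swarm-node-1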

Benefits

Each node in the cluster is identical and can be replaced by following the exact same process; while I don’t have automated deployment of new nodes, they are still closer to cattle than many of my other virtual machines. With the goal of replicated storage between the nodes, I should also be able to take a single surviving node and rebuild the entire cluster if needed, since it would hold the entire cluster’s configuration.

Continue reading “Virtualized Docker Swarm”

3D CAD Model Organization

I’ve been trying to come up with a way to handle storing and version controlling my 3D models (both from thingiverse and models I create myself). I ran into quite a few dead ends when looking for software built for this purpose (PLM/PDM), so I figured it would be easiest to build a folder structure, with some metadata alongside it, on top of Git and let Git handle the version control. While STL files can be plain text, the majority of my models are in proprietary binary formats, but Git can still at least store the files and provide a history. I figure the repository will have three primary folders: parts, products, and 3rd party (more or less unstructured). Each part/product will also have an export folder storing a copy of the STLs and accompanying metadata for a specific published version of a model (possibly duplicate data; maybe use some type of tag in Git to represent each officially published version). A rough sketch of the layout follows the list below.

  • Parts: not very useful as a single component, but used to create products, or as spare components to some item
  • Products: a grouping of parts that encompass a single object, or a single piece that is the object on its own (think a one-piece phone stand vs. a multi-piece assembly). While these may seem separate in their usage, they will be the final product of whatever is created.
  • 3rd party: these may or may not follow the data policies; things like the ultimaker 2 models/plans and the backblaze storage pod models are neat to have, but we won’t be applying our policies to those large assemblies. These can also include the innumerable thingiverse models we all acquire; eventually the goal is to incorporate all of the thingiverse metadata into the dataset as well, to provide details locally for all models herein.
  • Exports: each part or product can have versioned exports as well; these will be one specific version of the STL, assembly, and tags, allowing a product to reference that version of the export until both are updated to support newer versions or variants.
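As a rough sketch of what that layout might look like (the folder and file names here are illustrative, not final):

parts/
  bracket-clip/
    bracket-clip.sldprt      (proprietary source CAD)
    exports/
      v1/
        bracket-clip.stl
        metadata.yml
products/
  phone-stand/
    phone-stand.sldasm
    exports/
      v1/
3rd-party/
  thingiverse/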
Continue reading “3D CAD Model Organization”