PhotoPrism MySQL DSN

Setting up the service was simple enough. I chose to run it on my UNRAID server, which meant I couldn't use the example docker-compose file directly, so I took all the settings from it and recreated them on UNRAID, with the only changes being the paths for my file structure and the SQL database information. I use a MySQL database server for most of my services, so I needed the DSN for that server. This was the first time I had used DSNs, but they are pretty simple to set up correctly. I used phpMyAdmin to add a user and database for PhotoPrism, and from there I used the default DSN with my username/password and host/port information:

<user>:<password>@tcp(192.168.1.11:3306)/<databaseName>?charset=utf8mb4,utf8&parseTime=true
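For reference, the command-line equivalent of the phpMyAdmin steps looks roughly like this; the database name, user, and password are placeholders, and the charset matches the one in the DSN above:

# create the PhotoPrism database and user (names and password are placeholders)
mysql -u root -p -e "CREATE DATABASE photoprism CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
mysql -u root -p -e "CREATE USER 'photoprism'@'%' IDENTIFIED BY 'changeme';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON photoprism.* TO 'photoprism'@'%'; FLUSH PRIVILEGES;"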

After that, everything was simple to set up and use. I logged in and was ready to start ingesting images.

Configuring PhotoPrism with NextCloud Sync

PhotoPrism is easy to set up with NextCloud sync, though the way it's done seems a bit odd. The NextCloud server is added under Settings / Backup in PhotoPrism, then you click the circular arrows under the sync heading and set up the sync. To pick up the NextCloud auto uploads from my phone, I selected the InstantUploads folder to sync, with a daily interval. I told PhotoPrism to download remote files, preserve filenames, and sync raw and video files. I didn't want PhotoPrism to upload to NextCloud, as I wanted a one-way file sync to gather data; my PhotoPrism server is set up as a data viewer and doesn't generate data itself. Then a flip of the “enable” switch and it's complete. The first sync didn't start instantaneously, but it did start not too long after setting it up, and it took a few hours to slowly sync and process all the files in my NextCloud folder (a few thousand files).

Fixing the Docker Swarm tasks.db

The Issue: The Docker swarm manager node becomes useless after the tasks.db file explodes in size. This shows up as worker nodes being unable to connect to the swarm, or manager nodes not seeing the other managers.

The Fix: Stop the docker service (service docker stop), delete or move the tasks.db file, then start the docker service again (service docker start). This seems too simple to be true, but it works: the tasks.db file can be safely removed and will be regenerated by the Docker swarm manager.
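On a default Linux install the swarm state lives under /var/lib/docker/swarm/ (I'm assuming the usual path here), so the whole fix looks something like this:

# stop docker so nothing is holding the swarm state open
service docker stop
# move the oversized tasks.db out of the way; docker regenerates it on start
mv /var/lib/docker/swarm/worker/tasks.db /var/lib/docker/swarm/worker/tasks.db.bak
# start docker again and let the swarm manager rebuild its state
service docker start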

Adding UCS Authentication Account

Adding an account to use for authenticating against the LDAP directory is simple enough. The process is done entirely within the LDAP directory GUI, found under the Domain menu in UCS. Navigate to the “user” container and select the add button. Set the type of the account to “Simple Authentication Account”, pick a username and password, and click add.

  • Domain -> LDAP Directory
  • User Container, Add
  • Type: Simple Authentication Account
  • Username: my-new-auth-account
  • Click Add
  • Profit

This user account can now be used in a service to authenticate against the LDAP server.
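As a quick sanity check, a simple bind with ldapsearch should succeed. The bind DN and base DN below are assumptions for a default UCS domain of example.com with the account created in the users container; adjust them to match your directory:

# simple bind test with the new authentication account (DN and base DN are assumed)
ldapsearch -x -H ldap://ucs.example.com \
  -D "uid=my-new-auth-account,cn=users,dc=example,dc=com" \
  -w 'the-account-password' \
  -b "dc=example,dc=com" "(uid=my-new-auth-account)"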

Connecting Inspircd and Anope on Docker-Swarm

I encountered some issues when connecting Anope and Inspircd on the docker swarm. When running them on a single node using docker-compose, the services connected to Inspircd just fine, but when running in a docker swarm there were issues with how Inspircd was filtering IPs in the default services XML block.

To get Inspircd and Anope talking, I had to comment out parts of the link block for the services. The allowmask and expected IP address didn't quite work as intended inside the docker swarm cluster. Since the containers use a local network available only to them, I'm not worried about a rogue services server getting in. This IRC server is also primarily used for developing and testing the IRC bots I have written over the years, so it's not a core part of my infrastructure. The block I modified is the services link block from the default config included in the inspircd docker container setup; a rough sketch of the change is below.
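This is only a sketch of the shape of the change, not my exact config: the server name, address, port, and passwords are placeholders, and the allowmask line is the part that gets commented out so the swarm overlay address isn't rejected.

# services link block (sketch only; names, port, and passwords are placeholders)
# allowmask="..." commented out because the swarm overlay IP didn't match the default mask
<link name="services.example.com"
      ipaddr="anope"
      port="7000"
      timeout="5m"
      sendpass="linkpassword"
      recvpass="linkpassword">
<uline server="services.example.com" silent="yes">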

Home Assistant Supervised

I was originally running a Home Assistant VM using a script that loaded HASS into Proxmox, but the HASS.io image didn't have the tooling I like having on a machine for general management of the system. It took some researching and a few wrong turns to find the right installation script, but I found it and have included the commands below. The following shell commands and script install Home Assistant Supervised on a generic x86 machine; this worked fine on Ubuntu 18.04 LTS.

# become root for the whole install
sudo su

# dependencies needed by the supervised installer
apt-get install -y software-properties-common apparmor-utils apt-transport-https avahi-daemon ca-certificates curl dbus jq network-manager socat

# ModemManager interferes with USB serial sticks (Z-Wave/Zigbee), so disable it
systemctl disable ModemManager
systemctl stop ModemManager

# install docker
curl -fsSL get.docker.com | sh

# run the supervised installer for a generic x86-64 machine
curl -sL "https://raw.githubusercontent.com/Kanga-Who/home-assistant/master/supervised-installer.sh" | bash -s -- -m qemux86-64

The default port for HASS is 8123, so go to the server's IP on port 8123 to finish configuration. This gives you all the benefits of the HASS.io images for Raspberry Pi and Intel NUCs, but with your own preferred operating system rather than the build that's included with the prebuilt Home Assistant images (which are very limited).

Sqlite Resolution on UnRaid

Like many, I had issues with SQLite database corruption on Unraid. While researching it I found that it had to do with file locks in the FUSE file system Unraid uses to merge disks. The best way I found to circumvent this is to use the cache disk for those databases and map them directly: my docker mappings now point to /mnt/cache/share instead of going through /mnt/user/share. This avoids the FUSE layer entirely and solved the stability issues for me, better than the 6.8.0rc5 fixes did.
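As a concrete example (the container and image names here are just placeholders), the change is only in the host side of the volume mapping:

# before: the host path goes through Unraid's FUSE layer
docker run -d --name=myapp -v /mnt/user/share:/data myapp-image
# after: point directly at the cache disk so the FUSE layer is bypassed
docker run -d --name=myapp -v /mnt/cache/share:/data myapp-image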