Power bills were killing me, so I collapsed it all onto one server!

It’s still technically a Docker Swarm, but it’s a single node. I have backups, cold and hot and offsite. Nothing is mission critical, and I’ve moved stuff around from system to system many times now so it’s fairly drama free.

First migration was from TrueNAS Core to TrueNAS Scale, because I really wanted Plex to be able to do hardware transcoding and Core wasn’t having it. Scale was a travesty. Not sure why or how, but it was WAY slower than Core, and the Kubernetes subsystem lasted about 20 minutes before going into an endless loop of being broken, never to recover. VM performance was terrible somehow, and disk throughput was mediocre. Mind you, NO HARDWARE changed; it was an in-place upgrade from Core to Scale. The server was not even opened or moved, so it was not a happy experience.

Next migration, after 3 days of Scale not cutting it, was to Ubuntu 22.04 LTS. It imported the ZFS pools and mounted them up without an issue. Setting up Docker and Plex took about 15 minutes total, then I added Cockpit and KVM support. Very simple install, it all worked, and it was probably 10x faster than it was on Core.

This prompted me to finish up the Docker migration, so everything I have is now Docker (except Plex). Yes, I know Plex can be a container; there’s one running that just feeds a non-Plex-Pass music-only stream for the Alexas, but the main one is native on Ubuntu. It’s a preference. One VM, Kali, starts up when I need it. Otherwise I have 35 Docker containers running for various services.

At some point I opened up the box, cleaned out all the dust, and counted RAM sticks while I was in there. It turns out it actually had 64GB, not 32GB, but the previous owner had put the sticks in the wrong slots for a single-CPU box, so now I have 64! It’s a very happy camper now. That change came AFTER all the migration issues, so it doesn’t really contribute to the speed, but it does help with how many containers the box can run.

With 35 containers, it’s using about 12GB of the 64, and that’s with Seafile, Joplin, the *Arrs, Plex Meta Manager, Overseerr, Minio, Netbox, Podgrab, Traefik, and a bunch of other stuff. It’s amazing how much you can put on one box! Yes, I have backups that run every night to the cloud, plus two sets of cold backup drives that cycle out of my parents’ house.

Lazy Docker

Well, for a tool that seems pretty simple, it helped me find three problems that were spinning on my cluster within the first three minutes I used it!

Very cool! Very much worth the install, even if it pipes an install script to bash (read it yourself and act appropriately):

curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
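If piping straight to bash bothers you (it should at least give you pause), pull the script down and read it before running it:

curl -fsSL https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh -o install_lazydocker.sh
less install_lazydocker.sh    # read it, then decide
bash install_lazydocker.sh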

And check it out here: https://github.com/jesseduffield/lazydocker

Docker Swarm

Do you actually need Kubernetes? Netflix needs it, a few others probably do too. If you need to learn it for work or something, go right ahead, but chances are good that YOU don’t need to scale to 10,000 nodes at home. I didn’t want Kubernetes (or k8s, or k3s, or Minikube) at home since I am pretty limited for resources and want as much bang for my virtual buck as I can get.

I built mine out across Proxmox host(s) with Debian 10 VMs (not containers), plus virtual machines on TrueNAS (also Debian 10), but the platform doesn’t really matter; just try to stay on the same version of Docker across the nodes for your own sanity. You could probably do this with multiple Raspberry Pis, but mine are otherwise occupied. I don’t know if you can (or should) combine the different machine types into one cluster; that seems like it would be bad, and many things need different images for the Pi, so you would have to account for that. I’m setting up a Pi 1 with an 8TB Easystore to live at my parents’ house for backups, and it has to use different images for Minio and possibly Kuma. It’s not fast, but it doesn’t need to be.

Steps:

  • Base OS
  • Docker (clone them here if needed)
  • Swarm init
  • Swarm join cluster
  • Test it!
  • NFS Mounts for data (optional, depends on your needs)
  • Traefik
  • Configure deployments
  • Swarmpit (optional)
  • Swarmprom (optional)
  • Portainer (optional)
  • Apps and apps and apps (this is why you’re here)
  • Proxied services outside of the swarm (optional)
  • Add basic authentication to an app that doesn’t have any (optional)
  • Apps that are docker but NOT public (optional)

Code and scripts are all here!

https://github.com/8layer8/swarm-public/tree/main

While you don’t need all of this, it does help to have something to start with.

Base OS

Do a basic Debian 10 install: set up disks, networks, and hostnames as you need them, and unselect the GUI; you really only want ssh and the base utilities. Once it is up, ssh into it and run:

apt-get update
apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common sudo vim mc

Docker (clone them here if needed)

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update
apt-get -y install docker-ce docker-ce-cli
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker brad    # use your own username here
newgrp docker                   # run as your user, not via sudo, to pick up the new group
# pin whichever docker-compose release you want; this is just the one I grabbed
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
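The repo above has the full scripts, but the init, join, and test steps boil down to just a few commands. A minimal sketch; the advertise address is an example, and the token placeholder gets replaced by the real join command that init prints out:

# on the first node (this becomes a manager)
docker swarm init --advertise-addr 192.168.0.10

# init prints a ready-made join command with a token; run it on each additional node
docker swarm join --token SWMTKN-1-xxxxxxxx 192.168.0.10:2377

# back on the manager, confirm everyone showed up
docker node ls

# throw a throwaway service at it to prove scheduling works
docker service create --name ping-test --replicas 3 alpine ping 8.8.8.8
docker service ps ping-test
docker service rm ping-test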

Remix: Ubuntu 20.04 LTS Server + Docker + Portainer + App Repository

Build a minimal Ubuntu 20.04 server, adding ONLY OpenSSH during installation.
Watch your disk partitions! You will eventually use a lot of space, and the majority of it will end up under /var (Docker’s default data root lives there), so crank that partition up, or just go with one big partition. Be warned.
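If re-carving the disk isn’t an option, you can also point Docker’s data directory at a bigger disk once Docker is installed (below). A minimal sketch; /data/docker is an example path, not something from this build, and do it before you pull images since existing data isn’t migrated automatically:

sudo mkdir -p /data/docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/data/docker"
}
EOF
sudo systemctl restart docker
docker info | grep 'Docker Root Dir'   # confirm the move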

After the base install:

ssh to box:

# I'm bad, I do it all as root
sudo su - 
apt update
apt upgrade

Install Docker:

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo apt -y install mc iperf3 iptraf-ng
sudo docker run hello-world

Install Portainer

docker run -d --name portainer --restart unless-stopped -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

Links

# Portainer is on:
http://SERVER_IP:9000/

You may have to reboot the host if this is the first time through!

Beware that firewalld and iptables are both in the mix here! You may have to open ports or disable the firewall on the LAN side to get your containers talking.

If you can’t build a new container or can’t connect, reboot the box at least once before freaking out.

Portainer templates

  • Open Portainer (http://SERVER_IP:9000/)
  • Double click on your host
  • Go to Settings
  • Go to App Templates
  • Select “Use External Templates”
  • Paste in:
  • https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/master/Template/template.json
  • Click “Save Settings”
  • Go to “App Templates” in the blue bar menu
  • Turn on “Show Container Templates”

Useful scripts for your Docker host:

#!/bin/bash
# docker-cleanup.sh - show disk usage, prune unused images/containers, show usage again
date
df -h
# -f skips the confirmation prompts so this can run unattended
docker image prune -a -f
docker container prune -f
docker system prune -f
date
df -h
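If you want the cleanup to run on a schedule instead of by hand, a crontab sketch; the script path is an assumption, use wherever you saved it:

# crontab -e
# prune every night at 3:30 AM and keep a log of what it freed
30 3 * * * /usr/local/bin/docker-cleanup.sh >> /var/log/docker-cleanup.log 2>&1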
#!/bin/bash
# dockerlogs.sh - tail all running containers' logs, each tagged in a random color
DARKGRAY='\033[1;30m'
LIGHTRED='\033[1;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
LIGHTPURPLE='\033[1;35m'
CYAN='\033[0;36m'

COLORS=($DARKGRAY $LIGHTRED $GREEN $YELLOW $BLUE $LIGHTPURPLE $CYAN )
color_stop=$(printf '\033[0m')
size=${#COLORS[@]}


names=$(docker ps --format "{{.Names}}")
echo "tailing $names"

while read -r name
do
  index=$(($RANDOM % $size))
  color_start=$(printf ${COLORS[$index]})

  # eval to show container name in jobs list
  eval "docker logs -f --tail=5 \"$name\" | sed -e \"s/^/${color_start}[-- $name --]${color_stop} /\" &"
done <<< "$names"

function _exit {
  echo
  echo "Stopping tails $(jobs -p | tr '\n' ' ')"
  echo "..."

  # Using `sh -c` so that if some have exited, that error will
  # not prevent further tails from being killed.
  jobs -p | tr '\n' ' ' | xargs -I % sh -c "kill % || true"

  echo "Done"
}

# On ctrl+c, kill all tails started by this script.
trap _exit EXIT

# Don't exit this script until ctrl+c or all tails exit.
wait

NGINX Proxy Manager (aka Build your own F5)

Lots of ways to do this, but this is pretty easy:

Install portainer
Configure new repo for templates:
https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/master/Template/template.json

Install NGINX Proxy Manager and forward the ports: host 443 to container 4443, host 80 to container 8080, and MAYBE host 81 to container 8181.
The management UI lives on host port 81, so keep that protected as needed depending on where you built it.

Default login is admin@example.com with password changeme. Change that immediately.

Configs are saved to /portainer/Files/AppData/Config/Nginx-Proxy on the host, mounted as /config inside the container.
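If you’d rather skip the template and run it by hand, a docker run sketch that lines up with the port mappings and config path above; it assumes the jlesage/nginx-proxy-manager image the template pulls, so check your template if it uses something else:

docker run -d --name nginx-proxy-manager --restart unless-stopped \
  -p 80:8080 -p 443:4443 -p 81:8181 \
  -v /portainer/Files/AppData/Config/Nginx-Proxy:/config \
  jlesage/nginx-proxy-manager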

Let’s say you want to set up Sickgear on http://192.168.0.55:8081/ to respond to sickgear.mydomain.com

  • Add DNS alias for sickgear.mydomain.com
  • Wait for it to resolve
  • Log into your NGINX Proxy Manager
  • Hosts – Proxy hosts
  • Click the “Add Proxy Host” button
  • Fill in sickgear.mydomain.com in the domain name
  • The Scheme, Forward hostname/IP and port are all dependent on the backend, so our “http://192.168.0.55:8081/” gets split up into http, 192.168.0.55, and 8081
  • You’ll have to decide if you need caching or websockets support, you can likely turn them all on and see if it works properly
  • Access lists can be applied, but have to be set up before you can use them, you can come back to this.
  • If you are stacking services on top of a single domain name, then Custom Locations is where you split out /path to http://otherhost:1234. We aren’t doing that right now.
  • If you need an SSL cert, go to the SSL tab and choose “Request a New SSL Certificate”. You will need the email address associated with your Let’s Encrypt account, and click “I Agree” to the TOS. The defaults are fine; you can tighten things up later if you need to.
  • Click Save
  • Usually the cert and forwarding is ready to go in under a minute if you have things forwarded properly (generally 80 and 443 to the NGINX Proxy Manager is all you need)

You should now be able to hit https://sickgear.mydomain.com/ and have a valid SSL certificate.

Building a single IP Docker/NPM/Web host

Assumes you have a single public IP (VPS, DO, whatever) and need to lock down to just specified ports and allow NPM to work internally.

Started with Ubuntu 20.04 server, but could be CentOS or RHEL or whatever.

Selected nothing on install, then installed docker like so:

apt-get update
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
apt -y install mc iperf3 iptraf-ng

Start portainer

docker run -d --name portainer --restart unless-stopped -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

Fixing UFW

vim /etc/ufw/ufw.conf
(change enabled to yes, save and exit)

systemctl start ufw
systemctl enable ufw

ufw allow ssh
ufw reload

This is a workaround to make ufw do what you expect it to do (block ports by default and allow the ports you want) because it conflicts with Docker networking.

This is directly from here: https://github.com/chaifeng/ufw-docker

Edit the ufw rules by running this:
vim /etc/ufw/after.rules

And after this section:
# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT


Add this new section:

# BEGIN UFW AND DOCKER
*filter
:ufw-user-forward - [0:0]
:ufw-docker-logging-deny - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -j RETURN -s 10.0.0.0/8
-A DOCKER-USER -j RETURN -s 172.16.0.0/12
-A DOCKER-USER -j RETURN -s 192.168.0.0/16

-A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN

-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 172.16.0.0/12

-A DOCKER-USER -j RETURN

-A ufw-docker-logging-deny -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW DOCKER BLOCK] "
-A ufw-docker-logging-deny -j DROP

COMMIT
# END UFW AND DOCKER

Save and exit :wq!

Restart ufw:

systemctl restart ufw
ufw reload
ufw allow ssh
ufw reload

This rule set is slightly weird in that to allow ports inbound, you allow the container ports, not the host-mapped ports. Say you started NPM on container ports 8080 and 4443; you would need to run these commands to let ufw allow communications on the docker network:

ufw route allow proto tcp from any to any port 8080
ufw route allow proto tcp from any to any port 4443
ufw reload

This is enough if you are going to use NPM to front other services you load onto the docker host.

(I set up Portainer, the custom template repo, and Nginx Proxy Manager, added a DNS record that resolves properly, then added a proxy host in NPM that sends that domain name to Portainer’s internal IP and port (172.17.0.2:9000). I used my own setup guides on this site.)

Read that again! It’s important that you use the container’s internal IP within the NPM setup to route traffic correctly.
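If you aren’t sure what a container’s internal IP actually is, docker will tell you; portainer here is the container name from the run command above:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' portainer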

I also do not expose the Portainer or NPM admin UIs by default; I use ssh port forwarding to keep them off the internet:

ssh -i "~/.ssh/id_rsa_do" -L 9000:127.0.0.1:9000 -L 8181:127.0.0.1:81 root@my.ip.add.ress

This minimizes the exposed ports to 22, 80, and 443.
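If you tunnel in often, an ~/.ssh/config entry saves the typing; the npmbox alias is made up, and the key and ports match the command above:

# ~/.ssh/config
Host npmbox
    HostName my.ip.add.ress
    User root
    IdentityFile ~/.ssh/id_rsa_do
    LocalForward 9000 127.0.0.1:9000
    LocalForward 8181 127.0.0.1:81

After that, a plain "ssh npmbox" brings up both forwards.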

Cite your sources…
https://www.linode.com/docs/guides/configure-firewall-with-ufw/
https://stackoverflow.com/questions/30383845/what-is-the-best-practice-of-docker-ufw-under-ubuntu
https://superuser.com/questions/590600/ufw-is-active-but-not-enabled-why

Redhat Enterprise Linux – For Free!

As of the beginning of February 2021, Redhat developer accounts (free) can install up to 16 Redhat Enterprise Linux (RHEL) servers: physical, virtual, or hypervisors, in any combination.

Log into redhat.com and make a new account if needed; this is what you get by default. No muss, no fuss.

16 Servers, I need 1, maybe 2 physicals and a virtual or two. Cool.

Download RHEL 8 install iso: https://access.redhat.com/products/red-hat-enterprise-linux/

Install as usual, no trick there. Use Ventoy on USB for the easiest install.

Once it’s up, run:
subscription-manager register
Give it your login and password when prompted.

subscription-manager attach --auto
(This can take a minute)

yum update
yum should work and you’re on the production repo now!

Done!
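If you’d rather check from the CLI than the web dashboard, these subscription-manager subcommands cover the basics:

subscription-manager status            # overall registration/compliance state
subscription-manager list --consumed   # which entitlement this box attached
subscription-manager identity          # this system's UUID in the portal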

Redhat management links:

Dashboard: https://access.redhat.com/management

Details: https://access.redhat.com/management/subscriptions/product/RH00798

Systems using this license (i.e. how many have I used): https://access.redhat.com/management/subscriptions/product/RH00798/systems

You can remove a system if it’s no longer in service and get the license back: https://access.redhat.com/management/systems/

You can use RHEL now anywhere you used CentOS in the past, and the migration path to fully supported RHEL is a phone call to buy the license. If you don’t need the support, you need do nothing else. No one will call. They won’t hassle you to buy anything, ever.

This lets you run RHEL like the big boys for free and it’s completely legit!

RHEL 8 + libvirt + docker + portainer

Build a minimal install box of RHEL 8. You can choose Virtual Host and headless administration during the install and it covers most of this right off the bat; yum will just skip anything already installed in the commands below.

Watch your disk partitions! Changing storage after the fact isn’t terrible, but if you need to change storage, follow this: https://linuxconfig.org/configure-default-kvm-virtual-storage-on-redhat-linux#h6-1-create-new-virtual-storage-directory

This is RHEL, so register it with your free developer account (16 Free RHEL servers? Yes, thank you!)

SSH should be enabled and running:

sudo yum install openssh-server
sudo systemctl enable sshd
sudo systemctl start sshd
sudo systemctl status sshd
ssh to box:
yum -y update
yum -y install cockpit cockpit-machines
yum -y install qemu-kvm libvirt libguestfs-tools virt-install
yum -y install mc iperf3 iptraf-ng
yum -y install virt-install virt-viewer
systemctl start cockpit.socket
systemctl enable cockpit.socket
systemctl status cockpit.socket
firewall-cmd --add-service=cockpit --permanent
firewall-cmd --reload
# Cockpit is on: https://SERVER_IP:9090/

modprobe fuse
virt-host-validate
systemctl start libvirtd.service
systemctl enable libvirtd.service
systemctl status libvirtd.service

If needed, set up network bridge:
Cockpit – Networking – Add Bridge
Select your ethernet card (em1)

You should now be able to build virtual machines from within Cockpit. You might need to reload the page or go out and back in for it to figure out libvirt is enabled.

If you can’t build a new VM, reboot the box at least once before freaking out.
ISOs to install from:
ISOs should live in /var/lib/libvirt/images/; you probably want to mount that as a read-only NFS share. If you put ISOs somewhere else, you can hit permission issues that are tough to get around. Even with ISOs in the right directory, I had trouble selecting the OS after choosing the ISO; try picking the OS first if it won’t let you pick it after the ISO selection.
To mount: go to Cockpit – Storage and add the NFS or SMB share, stupid easy.
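If you’d rather script a VM than click through Cockpit, virt-install (installed above) does the same job. A sketch only; the VM name, sizes, ISO filename, and the br0 bridge are example values:

virt-install \
  --name debian10 \
  --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/debian10.qcow2,size=20 \
  --cdrom /var/lib/libvirt/images/debian-10-amd64-netinst.iso \
  --os-variant debian10 \
  --network bridge=br0 \
  --graphics vnc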

Netdata:

yum update
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
#(wait) (answer any prompts)
systemctl start netdata
systemctl enable netdata

# Add to IPTables via firewall-cmd:
firewall-cmd --zone=public --permanent --add-port=19999/tcp
firewall-cmd --reload
http://SERVER_IP:19999/
Done!

Docker:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache
yum remove buildah podman containerd runc
yum -y install docker-ce
systemctl enable --now docker
systemctl status docker
usermod -aG docker $USER   # use your actual login here if you su'd to root
docker version
docker pull alpine
docker images
docker run -it --rm alpine /bin/sh

exit


Docker compose:

curl -s https://api.github.com/repos/docker/compose/releases/latest \
  | grep browser_download_url \
  | grep docker-compose-Linux-x86_64 \
  | cut -d '"' -f 4 \
  | wget -qi -

chmod +x docker-compose-Linux-x86_64
mv docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
docker-compose version
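A quick smoke test to prove compose works; a throwaway nginx on port 8088, all example values:

mkdir -p ~/compose-test && cd ~/compose-test
cat > docker-compose.yml <<'EOF'
version: "2.4"
services:
  web:
    image: nginx:alpine
    ports:
      - "8088:80"
EOF
docker-compose up -d
curl -s http://localhost:8088 | head -5
docker-compose down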


CTop

export VER="0.7.3"
wget https://github.com/bcicen/ctop/releases/download/v${VER}/ctop-${VER}-linux-amd64 -O ctop
chmod +x ctop
sudo mv ctop /usr/local/bin/ctop


Portainer
docker pull portainer/portainer-ce:latest
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Go to http://SERVER_IP:9000/

Set a strong password

Choose Local and click Connect

Portainer: add custom repo for templates:

https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/master/Template/template.json
and click Save