Run Multiple Slack clients at once on your Mac

Update: this doesn’t work anymore. Launching the copy just redirects back to the original. Crap.

Slack’s interface is great for minimalists or people who have three channels to keep track of. When you have 200+ channels and dozens of DMs to keep track of, it’s horrendous! You can get Slack open in a browser, but multiple tabs tend to collapse back into whatever you touched last. You can start opening incognito windows, but that’s a hassle and doesn’t work beyond two anyway. Why don’t they just allow tabs or multiple windows at once? Outlook hasn’t gotten it right yet, so why should Slack.

In the meantime, here’s how:

Open Finder

Find Slack

Right click, Copy

Right click in a blank spot, Paste

Rename it to “Slack 2” or whatever

Lather, rinse, repeat. I keep four copies around.

Run the copy. The Mac has to treat it as a separate app, but it still knows how to connect, so you can have multiple Slack windows open at once!
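Those Finder steps amount to one recursive copy. Here’s the idea as a shell sketch; the /Applications path is the macOS default, and the scratch-directory version below just demonstrates the copy safely without touching your real apps:

```shell
# The real thing (macOS, path assumed):
#   cp -R "/Applications/Slack.app" "/Applications/Slack 2.app"

# Safe demonstration of the same copy in a scratch directory:
apps=$(mktemp -d)
mkdir -p "$apps/Slack.app/Contents/MacOS"
cp -R "$apps/Slack.app" "$apps/Slack 2.app"   # Finder's Copy/Paste, scripted
ls "$apps"
```

Rename the copy to whatever you like; the `cp -R` is doing exactly what right-click Copy/Paste does.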

Be warned: notifications will drive you nuts! Other than that, it’s a memory hog, but what else is new.

Remix: Ubuntu 20.04 LTS Server + Docker + Portainer + App Repository

Build minimal Ubuntu 20.04 server, add ONLY OpenSSH during installation.
Watch your disk partitions! You will eventually be using a lot of space, and the majority of it ends up under /var, so crank that partition up, or just go with one big partition. Be warned.

After the base install:

ssh to box:

# I'm bad, I do it all as root
sudo su - 
apt update
apt upgrade

Install Docker:

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent
sudo apt-get install software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo apt -y install mc iperf3 iptraf-ng
sudo docker run hello-world

Install Portainer

docker run -d --name portainer --restart unless-stopped -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
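If you’d rather manage Portainer with docker-compose, an equivalent sketch (my translation of the run command above, not from the original post; same image, port, and mounts) might look like:

```yaml
version: "3"
services:
  portainer:
    image: portainer/portainer
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
volumes:
  portainer_data:
```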

Links

# Portainer is on:
http://SERVER_IP:9000/

You may have to reboot the host if this is the first time through!

Beware that firewalld and iptables are all in this mess! You may have to enable ports or disable the firewall on the LAN side to get your containers to work.

If you can’t build a new container or can’t connect, reboot the box at least once before freaking out.

Portainer templates

  • Open Portainer (http://SERVER_IP:9000/)
  • Double click on your host
  • Go to Settings
  • Go to App Templates
  • Select “Use External Templates”
  • Paste in:
  • https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/master/Template/template.json
  • Click “Save Settings”
  • Go to “App Templates” in the blue bar menu
  • Turn on “Show Container Templates”

Useful scripts for your Docker host:

# docker-cleanup.sh
#!/bin/bash
date
df -h
# Each prune prompts for confirmation; add -f to run unattended
docker image prune -a
docker container prune
docker system prune
date
df -h
# dockerlogs.sh
#!/bin/bash
DARKGRAY='\033[1;30m'
LIGHTRED='\033[1;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
LIGHTPURPLE='\033[1;35m'
CYAN='\033[0;36m'

COLORS=($DARKGRAY $LIGHTRED $GREEN $YELLOW $BLUE $LIGHTPURPLE $CYAN )
color_stop=$(printf '\033[0m')
size=${#COLORS[@]}


names=$(docker ps --format "{{.Names}}")
echo "tailing $names"

while read -r name
do
  index=$(($RANDOM % $size))
  color_start=$(printf ${COLORS[$index]})

  # eval to show container name in jobs list
  eval "docker logs -f --tail=5 \"$name\" | sed -e \"s/^/${color_start}[-- $name --]${color_stop} /\" &"
done <<< "$names"

function _exit {
  echo
  echo "Stopping tails $(jobs -p | tr '\n' ' ')"
  echo "..."

  # Using `sh -c` so that if some have exited, that error will
  # not prevent further tails from being killed.
  jobs -p | tr '\n' ' ' | xargs -I % sh -c "kill % || true"

  echo "Done"
}

# On ctrl+c, kill all tails started by this script.
trap _exit EXIT

# Don't exit this script until ctrl+c or all tails exit.
wait
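The `_exit` cleanup at the end is the interesting bit: it kills every background tail the script started, and the `sh -c "kill % || true"` wrapper means a tail that already exited can’t abort killing the rest. The same pattern, demonstrated with plain `sleep` jobs instead of `docker logs`:

```shell
# Start a couple of throwaway background jobs (stand-ins for the log tails)
sleep 30 &
sleep 30 &

# Kill them all; the `|| true` keeps one already-dead job from
# stopping xargs before the rest are killed
jobs -p | tr '\n' ' ' | xargs -I % sh -c "kill % || true"

wait 2>/dev/null
echo "all tails stopped"
```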

NGINX Proxy Manager (aka Build your own F5)

Lots of ways to do this, but this is pretty easy:

Install portainer
Configure new repo for templates:
https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/master/Template/template.json

Install NGINX Proxy Manager, forward ports: 443:4443, 80:8080 and MAYBE 81:8181
Mgmt port is 81, so keep that protected as needed depending on where you built it.

Default login is:  admin@example.com Password: changeme
Change that immediately.

Saves configs to:
/portainer/Files/AppData/Config/Nginx-Proxy (mounted as /config inside the container)

Let’s say you want to set up Sickgear on http://192.168.0.55:8081/ to respond to sickgear.mydomain.com

  • Add DNS alias for sickgear.mydomain.com
  • Wait for it to resolve
  • Log into your NGINX Proxy Manager
  • Hosts – Proxy hosts
  • Click the “Add Proxy Host” button
  • Fill in sickgear.mydomain.com in the domain name
  • The Scheme, Forward hostname/IP and port are all dependent on the backend, so our “http://192.168.0.55:8081/” gets split up into http, 192.168.0.55, and 8081
  • You’ll have to decide if you need caching or websockets support; you can likely turn them all on and see if it works properly
  • Access lists can be applied, but they have to be set up before you can use them; you can come back to this
  • If you are stacking services on top of a single domain name, Custom Locations is where you split out /path to http://otherhost:1234; we aren’t doing that right now
  • If you need an SSL cert, go to the SSL tab and choose “Request a New SSL Certificate”, you will need your email address that is associated with your Lets Encrypt account, and click the I Agree to the TOS. The defaults are fine, you can tighten things up if you need to later.
  • Click Save
  • Usually the cert and forwarding are ready to go in under a minute if you have things forwarded properly (generally forwarding 80 and 443 to the NGINX Proxy Manager is all you need)

You should now be able to hit https://sickgear.mydomain.com/ and have a valid SSL certificate.
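The scheme/host/port split described in the steps above is mechanical; a quick shell illustration using the example backend URL from the text:

```shell
backend="http://192.168.0.55:8081/"

scheme=${backend%%://*}       # everything before "://"   -> http
hostport=${backend#*://}      # strip the scheme
hostport=${hostport%%/*}      # drop any trailing path    -> 192.168.0.55:8081
host=${hostport%%:*}          #                           -> 192.168.0.55
port=${hostport##*:}          #                           -> 8081

echo "scheme=$scheme host=$host port=$port"
```

Those three pieces are exactly what NPM’s Scheme, Forward Hostname/IP, and Forward Port fields want.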

Building a single IP Docker/NPM/Web host

Assumes you have a single public IP (VPS, DO, whatever) and need to lock down to just specified ports and allow NPM to work internally.

Started with Ubuntu 20.04 server, but could be CentOS or RHEL or whatever.

Selected nothing on install, then installed docker like so:

apt-get install apt-transport-https ca-certificates curl gnupg-agent
apt-get install software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
apt -y install mc iperf3 iptraf-ng

Start portainer

docker run -d --name portainer --restart unless-stopped -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

Fixing UFW

vim /etc/ufw/ufw.conf
(change enabled to yes, save and exit)

systemctl start ufw
systemctl enable ufw

ufw allow ssh
ufw reload

This is a workaround to make ufw do what you expect it to do (block ports by default and allow the ports you want) because it conflicts with Docker networking.

This is directly from here: https://github.com/chaifeng/ufw-docker

Edit the ufw rules by running this:
vim /etc/ufw/after.rules

And after this section:
# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT


Add this new section:

# BEGIN UFW AND DOCKER
*filter
:ufw-user-forward - [0:0]
:ufw-docker-logging-deny - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -j RETURN -s 10.0.0.0/8
-A DOCKER-USER -j RETURN -s 172.16.0.0/12
-A DOCKER-USER -j RETURN -s 192.168.0.0/16

-A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN

-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 172.16.0.0/12

-A DOCKER-USER -j RETURN

-A ufw-docker-logging-deny -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW DOCKER BLOCK] "
-A ufw-docker-logging-deny -j DROP

COMMIT
# END UFW AND DOCKER

Save and exit :wq!

Restart ufw:

systemctl restart ufw
ufw reload
ufw allow ssh
ufw reload

This rule set is slightly weird: to allow ports inbound, say you started NPM on ports 8080 and 4443 (the container ports, not the host-mapped ports), you need to run these commands so ufw allows communication on the Docker network:

ufw route allow proto tcp from any to any port 8080
ufw route allow proto tcp from any to any port 4443
ufw reload

This is enough if you are going to use npm to front other services you load onto the docker host.

(I set up Portainer, the custom template repo, and Nginx Proxy Manager, added a DNS record that resolves properly, then added a proxy host in NPM that sends the domain name to the box’s internal Docker IP and Portainer port (172.17.0.2:9000). I used my own setup guides on this site.)

Read that again! It’s important that you use the internal IP within the NPM setup to route traffic correctly.

I also do not have the portainer or NPM admin utilities exposed by default, I use ssh port forwarding to keep them off the internet:

ssh -i ~/.ssh/id_rsa_do -L 9000:127.0.0.1:9000 -L 8181:127.0.0.1:81 root@my.ip.add.ress

This minimizes the open ports to 22, 80, and 443.

Cite your sources…
https://www.linode.com/docs/guides/configure-firewall-with-ufw/
https://stackoverflow.com/questions/30383845/what-is-the-best-practice-of-docker-ufw-under-ubuntu
https://superuser.com/questions/590600/ufw-is-active-but-not-enabled-why

Redhat Enterprise Linux – For Free!

Starting at the beginning of February 2021, a free Red Hat developer account can install up to 16 Red Hat Enterprise Linux (RHEL) servers: physical, virtual, or hypervisors, in any combination.

Log into redhat.com (make a new account if needed); this is what you get by default. No muss, no fuss.

16 servers. I need one, maybe two physical boxes and a virtual or two. Cool.

Download RHEL 8 install iso: https://access.redhat.com/products/red-hat-enterprise-linux/

Install as usual, no trick there. Use Ventoy on USB for the easiest install.

Once it’s up, run:
subscription-manager register
Give it your login and password when prompted.

subscription-manager attach
(This can take a minute)

yum update
yum should work and you’re on the production repo now!

Done!

Redhat management links:

Dashboard: https://access.redhat.com/management

Details: https://access.redhat.com/management/subscriptions/product/RH00798

Systems using this license (i.e. how many have I used): https://access.redhat.com/management/subscriptions/product/RH00798/systems

You can remove a system if it’s no longer in service and get the license back: https://access.redhat.com/management/systems/

You can now use RHEL anywhere you used CentOS in the past, and the migration path to fully supported RHEL is a phone call to buy the license. If you don’t need the support, you need do nothing else. No one will call. They won’t hassle you to buy anything, ever.

This lets you run RHEL like the big boys for free and it’s completely legit!

RHEL 8 + libvirt + docker + portainer

Build a minimal install of RHEL 8. You can choose Virtual Host and Headless Administration during installation; that covers most of this right off the bat, and yum will simply skip anything already installed.

Watch your disk partitions! Changing storage after the fact isn’t terrible, but if you need to change storage, follow this: https://linuxconfig.org/configure-default-kvm-virtual-storage-on-redhat-linux#h6-1-create-new-virtual-storage-directory

This is RHEL, so register it with your free developer account (16 Free RHEL servers? Yes, thank you!)

SSH should be enabled and running:

sudo yum install openssh-server
sudo systemctl enable sshd
sudo systemctl start sshd
sudo systemctl status sshd
ssh to box:
yum -y update
yum -y install cockpit cockpit-machines
yum -y install qemu-kvm libvirt libguestfs-tools virt-install
yum -y install mc iperf3 iptraf-ng
yum -y install virt-install virt-viewer
systemctl start cockpit.socket
systemctl enable cockpit.socket
systemctl status cockpit.socket
firewall-cmd --add-service=cockpit --permanent
firewall-cmd --reload
# Cockpit is on: https://SERVER_IP:9090/

modprobe fuse
virt-host-validate
systemctl start libvirtd.service
systemctl enable libvirtd.service
systemctl status libvirtd.service

If needed, set up network bridge:
Cockpit – Networking – Add Bridge
Select your ethernet card (em1)

You should now be able to build virtual machines from within Cockpit. You might need to reload the page, or log out and back in, for it to notice that libvirt is enabled.

If you can’t build a new VM, reboot the box at least once before freaking out.
ISOs to install from:
ISOs should live in /var/lib/libvirt/images/; you probably want to mount that as a read-only NFS share. If you put ISOs somewhere else you can hit permission issues that are tough to get around. Even with ISOs in the directory above, I had trouble selecting the OS after choosing the ISO; try picking the OS first if it won’t let you pick it afterward.
To mount: go to Cockpit – Storage and add the NFS or SMB share. Stupid easy.

Netdata:

yum update
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
#(wait) (answer any prompts)
systemctl start netdata
systemctl enable netdata

# Add to IPTables via firewall-cmd:
firewall-cmd --zone=public --permanent --add-port=19999/tcp
firewall-cmd --reload
http://SERVER_IP:19999/
Done!

Docker:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache
yum remove buildah podman containerd runc
yum -y install docker-ce
systemctl enable --now docker
systemctl status docker
usermod -aG docker $USER
docker version
docker pull alpine
docker images
docker run -it --rm alpine /bin/sh

exit


Docker compose:

curl -s https://api.github.com/repos/docker/compose/releases/latest \
  | grep browser_download_url \
  | grep docker-compose-Linux-x86_64 \
  | cut -d '"' -f 4 \
  | wget -qi -

chmod +x docker-compose-Linux-x86_64
mv docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
docker-compose version
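That download one-liner works by scraping `browser_download_url` out of GitHub’s release JSON. On a trimmed, made-up sample payload, the grep/cut stage (shown here with `grep -o` to pull just the field) behaves like this:

```shell
# Trimmed, made-up sample of the GitHub releases API response
json='{"assets":[{"browser_download_url":"https://example.com/docker-compose-Linux-x86_64"}]}'

# Same extraction idea as the install pipeline: find the field, then
# split on double quotes and take the 4th field (the URL itself)
url=$(echo "$json" \
  | grep -o '"browser_download_url":"[^"]*"' \
  | cut -d '"' -f 4)
echo "$url"
```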


CTop

export VER="0.7.3"
wget https://github.com/bcicen/ctop/releases/download/v${VER}/ctop-${VER}-linux-amd64 -O ctop
chmod +x ctop
sudo mv ctop /usr/local/bin/ctop


Portainer
docker pull portainer/portainer-ce:latest
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Go to http://SERVER_IP:9000/

Set a strong password

Choose Local and click Connect

Portainer: add custom repo for templates:

https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/master/Template/template.json
and click Save

Ubuntu 20.04.1 LTS Libvirt + Cockpit + Docker + Portainer

Build minimal install box, add OpenSSH during installation
Watch your disk partitions! You will be using a lot (eventually)

Base install:

ssh to box:

# I'm bad, I do it all as root
sudo su - 
apt update
apt upgrade

libVirt install:

# libVirt install
apt install cpu-checker
apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virtinst virt-manager
systemctl is-active libvirtd
# should output "active"
usermod -aG libvirt $USER
usermod -aG kvm $USER
exit
# Do this as your user also
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
sudo brctl show

Cockpit:

sudo su - 
apt install cockpit -y
systemctl start cockpit
ss -tunlp | grep 9090
ufw allow 9090/tcp
apt install cockpit-machines cockpit-storaged cockpit-packagekit cockpit-networkmanager cockpit-dashboard cockpit-bridge
# Cockpit should now be available on https://ip:9090

Docker:

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent
sudo apt-get install software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo apt -y install mc iperf3 iptraf-ng
sudo docker run hello-world

Portainer

docker run -d --name portainer --restart unless-stopped -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

NetData

apt install curl -y
bash <(curl -Ss https://my-netdata.io/kickstart.sh)

Links

# Cockpit is on:
https://SERVER_IP:9090/

# Portainer is on:
http://SERVER_IP:9000/

# Netdata is on
http://SERVER_IP:19999

This was needed in CentOS 8, not sure about Ubuntu 20.04 yet:

Set up network bridge:
Cockpit - Networking - Add Bridge
Select your ethernet card (em1) *(ymmv)
Name - bridge0
Reboot. Really. This may save some headaches later.


For desktop-OS-type guests you should be good to go; for servers you have to use a bridge to get on the right network.

Assuming you added the bridge above (Cockpit – Network – Add Bridge)

Virtual machines – VM – Networking – Interface Type: Bridge to LAN, Source: bridge0, model e1000e or virtio, whatever.

Restart the VM, it should appear on your network. You may have to reboot the host if this is the first time through!

Beware that firewalld and iptables are all in this mess! You may have to enable ports or disable the firewall on the LAN side to get your server to work.

If you can’t build a new VM or can’t connect, reboot the box at least once before freaking out.

ISOs to install from

ISOs should live in /var/lib/libvirt/images/; you probably want to mount that as a read-only NFS mount to your NAS:

Go to cockpit – storage and add the NFS share to that path, stupid easy.

Portainer templates

  • Open Portainer (http://SERVER_IP:9000/)
  • Go to Settings
  • Go to App Templates
  • Select “Use External Templates”
  • Paste in:
  • https://raw.githubusercontent.com/SelfhostedPro/selfhosted_templates/master/Template/template.json
  • Click “Save Settings”
  • Go to “App Templates” in the blue bar menu
  • Turn on “Show Container Templates”

Storage changes

If you need to change storage locations for libVirt, follow this: https://linuxconfig.org/configure-default-kvm-virtual-storage-on-redhat-linux#h6-1-create-new-virtual-storage-directory

Kali and Openvas GVM Setup

Cheat sheet:

# Start
sudo gvm-start
# Stop
sudo gvm-stop
# Update the feed
sudo gvm-feed-update

Hit the web UI at: https://your.ip.add.ress:9392/

Build-out:
Make a new Kali machine on a libvirt VM, LXC, LXD, Proxmox, whatever; just not Docker (for Kali or OpenVAS, too many updates get eaten/lost, and OpenVAS is HUGE).

sudo su - 
apt update
apt upgrade
systemctl enable ssh.service


apt install openvas
apt install gvm
gvm-setup

(If it fails with “ERROR: The default postgresql version is not 13 required by libgvmd”, see the fix below.)

gvm-setup
(Should work now; it updates the feed etc. and takes a LONG time, just let it go.) GET THE PASSWORD AT THE END!

Make it listen on more than 127.0.0.1:
gvm-stop
sed -ibak -e 's/127.0.0.1/0.0.0.0/g' /lib/systemd/system/greenbone-security-assistant.service
sed -ibak -e 's/127.0.0.1/0.0.0.0/g' /etc/default/greenbone-security-assistant
gvm-start
(Ignore the 127.0.0.1 in gvm-start’s output; it’s just printed by the script)

# Update the feed only
sudo gvm-feed-update
# Start or stop the service
sudo gvm-start
sudo gvm-stop

Hit the web UI at: https://your.ip.add.ress:9392/
admin and the long password it generated at the end of the setup
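The two sed edits above just rewrite the bind address in place (`-ibak` keeps a backup copy with a `bak` suffix). A dry run on a scratch file, with a made-up config line standing in for the real service files, shows the substitution:

```shell
conf=$(mktemp)
echo 'GSA_ADDRESS=127.0.0.1' > "$conf"   # made-up stand-in for the gsad config line

# Same edit the post runs against the greenbone-security-assistant files
sed -ibak -e 's/127.0.0.1/0.0.0.0/g' "$conf"

cat "$conf"        # the line now points at 0.0.0.0
ls "${conf}bak"    # the pre-edit backup sed left behind
```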

gvm-setup fix: edit both postgresql.conf files:
vi /etc/postgresql/12/main/postgresql.conf
vi /etc/postgresql/13/main/postgresql.conf

Look for port = in both

Make v12 5433
Make v13 5432
systemctl restart postgresql

We aren’t using Postgres for anything else here, so I’m not being very careful with it.
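Scripted, that port swap is two sed one-liners. This sketch uses scratch copies standing in for the two postgresql.conf files, so you can see the result before touching the real ones:

```shell
conf12=$(mktemp)   # stands in for /etc/postgresql/12/main/postgresql.conf
conf13=$(mktemp)   # stands in for /etc/postgresql/13/main/postgresql.conf
echo 'port = 5432' > "$conf12"
echo 'port = 5433' > "$conf13"

sed -i 's/^port = .*/port = 5433/' "$conf12"   # v12 moves off to 5433
sed -i 's/^port = .*/port = 5432/' "$conf13"   # v13 takes the default 5432

grep '^port' "$conf12" "$conf13"
```

Follow with `systemctl restart postgresql` on the real box.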
OR
FIX:
List your clusters:
pg_lsclusters
and drop every cluster besides the 13 cluster, e.g.:
sudo pg_dropcluster 12 main --stop

Libvirt importing a virtual appliance

This is directly from here: https://www.redhat.com/en/blog/importing-vms-kvm-virt-v2v

yum update

yum install virt-v2v

virsh pool-list
mkdir /var/lib/libvirt/Appliances
virsh pool-define-as Appliances --type dir --target /var/lib/libvirt/Appliances
virsh pool-start Appliances
virsh pool-autostart Appliances
virsh pool-list

Then import an OVA file:

virt-v2v -i ova /root/third_party_appliance.ova -o libvirt -of qcow2 -os Appliances -n default

Example: virt-v2v -i ova /home/brad/Nextcloud_VM_www.hanssonit.se.ova -o libvirt -of qcow2 -os Appliances -n default

If you get something like this:

[root@centos8 ~]# virt-v2v -i ova /home/brad/Nextcloud_VM_www.hanssonit.se.ova -o libvirt -of qcow2 -os Appliances -n default
[ 0.0] Opening the source -i ova /home/brad/Nextcloud_VM_www.hanssonit.se.ova
virt-v2v: warning: making OVA directory public readable to work around
libvirt bug https://bugzilla.redhat.com/1045069
[ 8.8] Creating an overlay to protect the source from being modified
[ 9.0] Initializing the target -o libvirt -os Appliances
[ 9.0] Opening the overlay
virt-v2v: error: libguestfs error: could not create appliance through
libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: Cannot access backing file
'/home/brad/Nextcloud_VM_www.hanssonit.se.ova' of storage file
'/var/tmp/v2vovl2af2bf.qcow2' (as uid:107, gid:107): Permission denied
If reporting bugs, run virt-v2v with debugging enabled and include the
complete output:

virt-v2v -v -x […]

You work around this by running:
export LIBGUESTFS_BACKEND=direct
then do the virt-v2v again and it should work.

Now start your imported appliance:

[root@centos8 ~]# virsh list --all
Id Name State
1 kali running
Nextcloud_VM_www.hanssonit.se shut off

[root@centos8 ~]# virsh start Nextcloud_VM_www.hanssonit.se
error: Failed to start domain Nextcloud_VM_www.hanssonit.se
error: Cannot get interface MTU on 'bridged': No such device

You probably need to edit the network card to match your other VMs.