Sharing graphs of multiple Munin (master) instances

Munin is a convenient monitoring tool. Even if it is getting old, it is easy to set up and to augment with custom scripts.

It works with the notion of a master munin process that grabs data from nodes (devices within the network), stores it in Round-Robin Database (RRD) files and processes the data to generate static images and HTML pages. This sequence is split into several scripts: munin-update, munin-limits, munin-graph and munin-html.
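For reference, the stock munin-cron essentially just chains these scripts; roughly (exact paths and options depend on your munin version and packaging):

/usr/share/munin/munin-update || exit 1
/usr/share/munin/munin-limits
nice /usr/share/munin/munin-html
nice /usr/share/munin/munin-graph --cron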

It’s fine (overkill?) for a small local network, even though RRD is I/O-consuming to the point that it may be required to use a caching daemon like rrdcached.

It’s a different story if you want to monitor several small networks connected through the internet at once. Why would you? First because it might be convenient to get graphs from different networks side by side. Also because, if one network disappears from the internet, the munin data might actually be meaningful, provided you can still access it.


Problem is, munin updates are synchronous: any disconnect between the master and a node would cause the data to be inconsistent. That leads to many issues that munin-async can help with. But even if you manage to use munin-async, one of your servers will lack a munin master: the setup will work only when both are up.

So I’m actually much more interested in having one master munin process per network.

How to achieve that? Sharing the RRD files via NFS over the web is not an option. I’m also not a fan of having both master munin processes read through all the RRD files and generate graphs in parallel, re-generating exactly the same data with no added value.

I went for an alternative approach with a modified version of the munin-mergedb.pl script. We do not merge RRD trees. We simply synchronize the db files to merge and the generated graphs. So if there are graphs from another munin master process to include in the HTML output, they’ll be there. But each munin master process will go undisturbed by any other process’s unavailability and won’t have more RRD files to process or more graphs to produce.

Graph and db file replication:

On both (master munin process) hosts, you need a user dedicated to replication, here called SYNCUSER, added to the munin group:

adduser SYNCUSER munin

This user needs SSH access from one host to the other (private/public key sharing, whatever suits you).
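For instance, a minimal key setup on the host that will initiate the connections (SYNCUSER and DISTANTHOST being the placeholders used below):

su - SYNCUSER --shell=/bin/bash
ssh-keygen -t ed25519
ssh-copy-id SYNCUSER@DISTANTHOST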

Directories setup:

mkdir -p /var/lib/munin-mergedb/
chown munin:munin -R /var/lib/munin-mergedb/
# the +s is very important so directory group ownership is preserved
chmod g+rws -R /var/lib/munin-mergedb/
chmod g+rws /var/lib/munin/
chmod g+rws -R /var/www/html/munin/
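You can then verify that the group ownership and the setgid bit are in place, for instance:

ls -ld /var/lib/munin-mergedb /var/lib/munin /var/www/html/munin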

On one host (the one allowed to connect through SSH), synchronize the HTML files two-way with unison:

su - SYNCUSER --shell=/bin/bash

DISTANT_HOST=DISTANTHOST
DISTANT_PORT=22
LOCAL_HTML=/var/www/html/munin/DOMAIN
DISTANT_HTML=/var/www/html/munin/DOMAIN

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST


# step one, get directories
unison -batch -auto -ignore="Name *.html" -ignore="Name *.png" "$LOCAL_HTML" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML"
# step two, sync the image content of each directory
cd "$LOCAL_HTML" && for DIR in *; do [ -d "$DIR" ] && unison -batch -auto -ignore="Name *.html" "$LOCAL_HTML/$DIR" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML/$DIR"; done

On the same host, synchronize the database files one-way with rsync:

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST

# push our db (one way action, easier with rsync)
rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/"
# get theirs (one way action, easier with rsync)
rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$DISTANT_HOST:$LOCAL_DB/" "$LOCAL_DISTANT_DB/"

If it works fine, set up /etc/cron.d/munin-sync:

# supposed to assist munin-mergedb.pl

DISTANT_HOST=DISTANTHOST
DISTANT_PORT=22

LOCAL_HTML=/var/www/html/munin/DOMAIN
DISTANT_HTML=/var/www/html/munin/DOMAIN

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST

# m h dom mon dow user command
# every 5 hours, update the directory list
01 */5 * * *  SYNCUSER unison -batch -auto -silent -log=false -ignore="Name *.html" -ignore="Name *.png" "$LOCAL_HTML" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML" 2>/dev/null

# update content twice per hour
*/28 * * * *  SYNCUSER cd "$LOCAL_HTML" && for DIR in *; do [ -d "$DIR" ] && unison -batch -auto -silent -log=false -ignore="Name *.html" "$LOCAL_HTML/$DIR" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML/$DIR" 2>/dev/null; done && rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/" 2>/dev/null && rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$DISTANT_HOST:$LOCAL_DB/" "$LOCAL_DISTANT_DB/" 2>/dev/null

Updated scripts:

Once the data is there, you will need the munin-mergedb.pl script to handle it: use a munin-cron replacement like my munin-cron-plus.pl instead of munin-cron, so that munin-mergedb.pl actually gets called. Plus you’ll need a fixed version of munin-graph so that --host arguments are not blatantly ignored (lacking the RRD files, it would fail to actually write graphs for the distant munin master process, but it would nonetheless delete the existing graphs).
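Conceptually, the cron chain then looks something like this (a sketch, not the actual munin-cron-plus.pl; paths match my layout described below, and the host names are placeholders):

/usr/local/share/munin/munin-update || exit 1
/usr/local/share/munin/munin-limits
# merge the db files synchronized from the other munin master
/usr/local/share/munin/munin-mergedb.pl
nice /usr/local/share/munin/munin-html
# only graph our own hosts; the distant master's images are synced over
nice /usr/local/share/munin/munin-graph --cron --host node1.example.org --host node2.example.org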

(Where these files go depends on your munin installation packaging. I have the munin processes in /usr/local/share/munin and munin-cron-plus.pl in /usr/local/bin, reflecting the fact that the original similar files live either in /usr/share/munin or /usr/bin. Beware: if you change the name of any munin process, update the log rotation files accordingly, otherwise you may easily fill up a disk drive, since munin is kind of noisy, especially when issues arise.)

For convenience, you can download these with my -utils-munin Debian/Devuan packages:

wget apt.rien.pl/stalag13-keyring.deb
dpkg -i stalag13-keyring.deb
apt-get update
apt-get install stalag13-utils-munin

Once everything is set up, you can test/debug it by typing:

su - munin --shell=/bin/bash

/usr/local/bin/munin-cron-plus.pl

What next?

Actually, I’d welcome improvements to munin-cron-plus.pl, since it extracts the --host information in the most barbaric way. I am sure it could be done cleanly using Munin::Master::Config or the like.

Then I’d welcome any insight about why munin-graph’s --host option does not work the way I’d like it to. Maybe I misunderstand its exact purpose. The help reads:

 --host <host>  Limit graphed hosts to <host>. Multiple --host options
                may be supplied.

To me, it really means that it should not do anything at all to the files of hosts excluded this way. If it means something else, maybe this should be explained.

Avoiding GPG issues while submitting to popularity-contest on Devuan

For some reason, on Devuan, popularity-contest submissions fail with:

gpg: 4383FF7B81EEE66F: skipped: public key not found
gpg: /var/log/popularity-contest.new: encryption failed: public key not found

The Debian Popularity Contest being described as an attempt to map the usage of Debian packages, I think it useful that it also gets stats from disgruntled Debian users forced to use a fork of the same general scope.

I do not think the data transmitted in this context is really sensitive. So the simplest hack is to just switch off encryption by adding to /etc/popularity-contest.conf:

ENCRYPT="no"
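To check that the gpg error is gone, you can run the submission script by hand (the path below is where Debian-derived systems usually install it):

/etc/cron.daily/popularity-contest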

Preventing filenames with a colon “:” from being garbled by Samba

In some cases, Samba garbles file names, for backward compatibility with old Microsoft Windows systems that cannot handle long filenames or filenames with specific characters. They are then shown in the form XXXXX~X.ext.

You can switch off this mechanism:

In /etc/samba/smb.conf, in [Global], add:

mangle case = no
mangled names = no
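Before restarting, you can check that Samba picks up the new options with testparm (part of the standard Samba tools), something like:

testparm -s 2>/dev/null | grep -i mangle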

Then simply restart Samba (invoke-rc.d samba restart).

Files will then be listed with their real names. I’m not sure, however, that Microsoft Windows will allow you to open them.

Using GnuPG/PGP on multiple devices

I started using GnuPG in 2002. I don’t usually do stuff that requires heavy privacy, so I don’t care much for it. From time to time, I just encrypt some useless crap, so that if any day I had serious stuff to encrypt, it would not look obviously suspicious.

Thing is, most of the people I communicate with are not using GnuPG and are probably not about to.

There is also an obvious issue with GnuPG: how to share a key among computers/clients. How do you decrypt messages with your phone or webmail? Copy the private key everywhere? It might just be worse than having no security at all.

I don’t use GnuPG much, especially since I created my key in 2002 and don’t even know how secure this key still is. I nonetheless need it to sign stuff like packages. Confronted with the problem of having to copy the key by hand onto one more laptop, I considered dropping my current set and, inspired by this example of a primary key/subkeys model and Debian’s own, keeping a primary key secure somewhere and giving a short-lived subkey to each device.

But it does not fix much of GnuPG’s problems and implies a lot of annoying, non-automated work, which is not satisfying. And anyway, encryption/decryption can only work with one subkey, so one subkey per device does not really work.

To make the process less painful, on a box made available over the network from time to time, I did as follows:

I created a primary key by running gpg --expert --gen-key (capabilities set to cannot sign, cannot encrypt) with a 4-year expiry (more entropy with rngd -r /dev/urandom).

Running gpg --expert --edit-key myemail, I added the relevant additional addresses with adduid, followed by trust and save.

I created a signing and an encryption subkey with no expiry (considering that they’ll be revoked on the fly from the primary whenever it makes sense).
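Put together, the interactive session looks roughly like this (a sketch; the email address is a placeholder and the capability/expiry choices are made at the interactive prompts):

# optional: feed the entropy pool while generating
rngd -r /dev/urandom

# primary key: certify only (toggle off sign/encrypt), 4y expiry
gpg --expert --gen-key

# additional identities and the two subkeys
gpg --expert --edit-key myemail@example.org
#   gpg> adduid    (repeat for each additional address)
#   gpg> trust
#   gpg> addkey    (a signing subkey, no expiry)
#   gpg> addkey    (an encryption subkey, no expiry)
#   gpg> save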

I made up gpg-grabsub.sh, which prompts for the hostname of the box hosting the keyring, imports the ring and removes the primary key from it, leaving just the keys necessary to sign and encrypt.

This script could probably be used in a chain (box secured from the net -> script run on a gate server -> script run on an end client). It requires further testing.
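The core idea of gpg-grabsub.sh can be sketched as follows (not the actual script; here the same end result, a local keyring without the secret primary key, is obtained through gpg’s --export-secret-subkeys, and the host and key id are prompted for):

#!/bin/bash
# sketch only -- not the actual gpg-grabsub.sh
read -r -p "Host holding the keyring: " KEYHOST
read -r -p "Key id: " KEYID
# public part, then the secret subkeys only: the secret primary key never leaves the box
ssh "$KEYHOST" "gpg --armor --export $KEYID" | gpg --import
ssh "$KEYHOST" "gpg --armor --export-secret-subkeys $KEYID" | gpg --import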

 

Managing an SSH public keys ring with git

Using ssh-updatekeys, you can set up and maintain ~/.ssh/authorized_keys with specific sets on the fly.

You just have to put your public keys in a public git repository. The script will fetch the keys, either over git + SSH (for write access) or just git + HTTPS (for read access).

It can handle different sets of keys (for instance you may want to differentiate keys with and without passphrases). In the git repository, any directory with a name starting with set (set0, setA, setTest, etc.) will be treated as a set.
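The idea can be sketched as follows (not the actual ssh-updatekeys.sh; the repository URL and set name are placeholders):

#!/bin/bash
# sketch only -- not the actual ssh-updatekeys.sh
REPO=https://example.org/pubkeys.git   # placeholder repository
SET=set0                               # placeholder set name
TMP=$(mktemp -d)
git clone --depth 1 "$REPO" "$TMP"
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat "$TMP/$SET"/* > ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
rm -rf "$TMP"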

(ssh-updatekeys.sh is part of my -utils package).

Update: you can now grab it with the command

wget ssh.rien.pl -O ssh-updatekeys.sh

Importing CardDav (ownCloud) contacts into (SquirrelMail) .abook

I’m still using SquirrelMail, even though it looks a bit old. It’s robust and just works – and when I’m using a webmail, that’s mandatory.

SquirrelMail does not use CardDav but some sort of .abook format (which I hope is the same abook format as mutt’s).

I just wrote carddav2abook.pl, a wrapper that requires a ~/.carddav2abookrc with the following:

carddav = https://HOST/remote.php/carddav/addressbooks/USER/contacts_shared_by_USER?export
user = USER
password = PASSWORD
abook = /var/lib/squirrelmail/data/USER.abook
wget_args = --no-check-certificate

 

As you may notice, I’m using a specific export account that has been given read-only access to this address book. Otherwise the CardDav URL would not include the _shared_by_USER part.

I configured it to write the .abook directly in the SquirrelMail data directory. Obviously, it means you need to adjust read/write access for the relevant user (or use www-data, but I would not recommend storing a password in an rcfile given to this user).

Once it works, just set up a cronjob (with 2>/dev/null, since the perl vCard module tends to print some garbage).
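For instance, in /etc/cron.d/carddav2abook (the user and the script path are assumptions, adjust to your setup):

# m h dom mon dow user command
# runs as the user owning ~/.carddav2abookrc
13 * * * *  USER  /usr/local/bin/carddav2abook.pl 2>/dev/null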

(carddav2abook.pl is part of my -utils-webmail package).