Sharing graphs of multiple Munin (master) instances

Munin is a convenient monitoring tool. Even if it is getting old, it is easy to set up and augment with custom scripts.

It works with the notion of having a master munin process that grabs data from nodes (devices within the network), stores it in Round-Robin Database (RRD) files and processes the data to generate static images and HTML pages. This sequence is split across several scripts: munin-update, munin-limits, munin-graph, munin-html.
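For reference, a full pass of the master boils down to running those scripts in sequence, roughly like this (a simplified sketch; actual paths and wrappers depend on your packaging):

# simplified view of what the stock munin-cron wrapper does (paths may differ)
/usr/share/munin/munin-update   # poll the nodes and store values in the RRD files
/usr/share/munin/munin-limits   # check values against warning/critical thresholds
/usr/share/munin/munin-graph    # render the static PNG graphs from the RRD files
/usr/share/munin/munin-html     # generate the static HTML pages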

It’s fine -overkill?- for a small local network, even though RRD is I/O-hungry enough that you may need a caching daemon like rrdcached.
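For the record, a minimal rrdcached setup could look like the following (socket, directories and timers are illustrative only, and munin must then be pointed at that socket):

# illustrative rrdcached invocation; adjust paths and intervals to your setup
rrdcached -l unix:/var/run/rrdcached.sock \
          -j /var/lib/rrdcached/journal \
          -b /var/lib/munin -B \
          -w 1800 -z 1800 -f 3600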

It’s a different story if you want to monitor several small networks connected through the internet at once. Why would you? First because it might be convenient to get graphs from different networks side by side. Also because if one network disappears from the internet, data from munin might actually be meaningful, provided you can still access it.


Problem is, munin updates are synchronous: any disconnect between the two networks would make the data inconsistent. That leads to many issues that munin-async can help with. But even if you are able to use munin-async, one of your servers will still lack a munin master: the setup will work only when both are up.

So I’m actually much more interested in having one master munin process per network.

How to achieve that? Sharing the RRD files via NFS over the web is not an option. I’m also no fan of having both master munin processes read through all the RRDs and generate graphs in parallel, re-generating exactly the same data with no added value.

I went for an alternative approach with a modified version of the munin-mergedb.pl script. We do not merge RRD trees. We simply synchronize the db files to merge and the generated graphs. So if there are graphs from another munin master process to include in the HTML output, they’ll be there. But each munin master process goes undisturbed by the unavailability of the other and has no extra RRDs to process or graphs to produce.

Graph and db file replication:

On both (master munin process) hosts, you need a user dedicated to replication, called SYNCUSER here, member of the munin group:

adduser SYNCUSER munin
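If the SYNCUSER account (the name is just a placeholder) does not exist yet, create it first, Debian-style; key-only login is enough so no password is needed:

# create the dedicated replication user first if necessary
adduser --disabled-password --gecos "" SYNCUSER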

This user needs SSH access from one host to the other (private/public key sharing, whatever).
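For instance, from the initiating host, something along these lines (host and port are placeholders):

# run as SYNCUSER on the host that will initiate the connections
ssh-keygen -t ed25519 -N ""             # passphrase-less key pair
ssh-copy-id -p 22 SYNCUSER@DISTANTHOST  # install the public key on the other host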

Directory setup:

mkdir -p /var/lib/munin-mergedb/
chown munin:munin -R /var/lib/munin-mergedb/
# the +s is very important so directory group ownership is preserved
chmod g+rws -R /var/lib/munin-mergedb/
chmod g+rws /var/lib/munin/
chmod g+rws -R /var/www/html/munin/

On one host (the one allowed to connect through ssh), synchronize the HTML files two-way with unison:

su - SYNCUSER --shell=/bin/bash

DISTANT_HOST=DISTANTHOST
DISTANT_PORT=22
LOCAL_HTML=/var/www/html/munin/DOMAIN
DISTANT_HTML=/var/www/html/munin/DOMAIN

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST


# step one, get directories
unison -batch -auto -ignore="Name *.html" -ignore="Name *.png" "$LOCAL_HTML" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML"
# step two, get each directory's image content
cd "$LOCAL_HTML" && for DIR in *; do [ -d "$DIR" ] && unison -batch -auto -ignore="Name *.html" "$LOCAL_HTML/$DIR" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML/$DIR"; done

On the same host, synchronize the database files one-way with rsync:

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST

# push our db (one way action, easier with rsync)
rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/"
# get theirs (one way action, easier with rsync)
rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$DISTANT_HOST:$LOCAL_DB/" "$LOCAL_DISTANT_DB/"
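Before trusting it, a dry run shows what would be transferred without actually copying anything:

# -n (dry run) with --itemize-changes lists what would be synchronized
rsync -an --itemize-changes --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/"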

If it works fine, set up /etc/cron.d/munin-sync:

# supposed to assist munin-mergedb.pl

DISTANT_HOST=DISTANTHOST
DISTANT_PORT=22

LOCAL_HTML=/var/www/html/munin/DOMAIN
DISTANT_HTML=/var/www/html/munin/DOMAIN

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST

# m h dom mon dow user command
# every 5 hours, update the directory list
01 */5 * * *  SYNCUSER unison -batch -auto -silent -log=false -ignore="Name *.html" -ignore="Name *.png" "$LOCAL_HTML" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML" 2>/dev/null

# update content twice per hour
*/28 * * * *  SYNCUSER cd "$LOCAL_HTML" && for DIR in *; do [ -d "$DIR" ] && unison -batch -auto -silent -log=false -ignore="Name *.html" "$LOCAL_HTML/$DIR" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML/$DIR" 2>/dev/null; done && rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/" 2>/dev/null && rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$DISTANT_HOST:$LOCAL_DB/" "$LOCAL_DISTANT_DB/" 2>/dev/null

Updated scripts:

Once the data is there, you will need the munin-mergedb.pl script to handle it, and a munin-cron replacement like my munin-cron-plus.pl instead of munin-cron so that it actually calls munin-mergedb.pl. Plus you’ll need a fixed version of munin-graph so --host arguments are not blatantly ignored (lacking the RRDs, it would fail to actually write graphs for the distant munin master process, but it would nonetheless delete the existing ones).

(Where these files go depends on your munin installation packaging. I have the munin processes in /usr/local/share/munin and munin-cron-plus.pl in /usr/local/bin – it reflects the fact that the original similar files are either in /usr/share/munin or /usr/bin. Beware, if you change the name of any munin process, update the log rotation files too, otherwise you may easily fill up a disk drive, since munin is kind of noisy, especially when issues arise.)
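For instance, if the renamed cron script logs to its own files, a logrotate entry along these lines keeps them in check (the log file names below are only a guess, match whatever your scripts actually write):

# /etc/logrotate.d/munin-local -- file names are assumptions
/var/log/munin/munin-cron-plus.log /var/log/munin/munin-mergedb.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
}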

For convenience, you can download these with my stalag13-utils-munin Debian/Devuan package:

wget apt.rien.pl/stalag13-keyring.deb
dpkg -i stalag13-keyring.deb
apt-get update
apt-get install stalag13-utils-munin

Once everything is set up, you can test/debug it by typing:

su - munin --shell=/bin/bash

/usr/local/bin/munin-cron-plus.pl

What next?

Actually I’d welcome improvements to munin-cron-plus.pl, since it extracts --host information in the most barbaric way. I am sure it can be done cleanly using Munin::Master::Config or the like.

Then I’d welcome any insight about why munin-graph’s --host option does not work the way I’d like. Maybe I misunderstand its exact purpose. The help reads:

 --host <host>  Limit graphed hosts to <host>. Multiple --host options
                may be supplied.

To me, it really means that it should not do anything at all to any files of hosts excluded this way. If it meant something else, maybe this should be explained.

Replicating IMAPs (dovecot) mail folders and sharing (through ownCloud) contacts (kmail, roundcube, etc)

dual IMAPs servers:

Having your own server handling your mail is enabling -you can implement anti-spam policies harsh enough to be incredibly effective, set up temporary catch-all addresses, etc. It does not even require much maintenance these days, it just takes a little time to set it up.

One drawback, though, is the fact that if your host is down, or simply its link, then you are virtually unreachable. So you want a backup server. The straightforward solution is a backup that simply forwards everything to the main server as soon as possible. But having a backup server that is a replica of the main server allows you to use one or the other indifferently, and to always have one up at hand.

In my case, I run exim along with dovecot. So once the exim setup is replicated, it’s only a matter of making sure to have a proper dovecot setup (in my case mail_location = maildir:~/.Maildir:LAYOUT=fs:INBOX=~/.Maildir/INBOX and mail_privileged_group = mail set in /etc/dovecot/conf.d/10-mail.conf, along with ssl = required in /etc/dovecot/conf.d/10-ssl.conf – you obviously need to create a certificate for IMAPs, named as described in said 10-ssl.conf, but that’s not the topic here, and you can use plain IMAP if you wish).
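Put together, the relevant dovecot bits look like this (the certificate paths are placeholders, use the names described in said 10-ssl.conf):

# /etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:~/.Maildir:LAYOUT=fs:INBOX=~/.Maildir/INBOX
mail_privileged_group = mail

# /etc/dovecot/conf.d/10-ssl.conf
ssl = required
ssl_cert = </PATH/TO/YOUR/CERTIFICATE.pem
ssl_key = </PATH/TO/YOUR/KEY.pem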

Then, for each user account (assuming we’re talking about a low number), it’s as simple as making sure passphrase-less SSH access works from one host to the other and adding a cronjob like:

# */2 * * * *     user   dsync -f mirror secondary.domain.net 2> /dev/null
# */2 * * * *     user   isync --all --create-remote --quiet 2>/dev/null
# */2 * * * *     user   mbsync --all --quiet 2>/dev/null
*/2 * * * *     user   pgrep -x "offlineimap" -u user > /dev/null || offlineimap -u quiet >/dev/null 2>/dev/null

offlineimap requires a ~/.offlineimaprc such as:

[general]
accounts = mx

[Account mx]
localrepository = mx1
remoterepository = mx2
autorefresh = 2

[Repository mx1]
type = Maildir
localfolders = ~/.Maildir

[Repository mx2]
type = IMAP
ipv6 = False
preauthtunnel = ssh -q secondary.domain.net '/usr/lib/dovecot/imap'
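For the first run, it can be handy to launch it manually in the foreground to watch what it does (-o runs a single sync and exits):

# one-shot foreground run with a simple text UI
offlineimap -o -u basic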

The first run may be a bit slow but it goes very fast afterward (I do have a strict expire policy though, which probably helps). This is done the primitive way; recent versions of dovecot (i.e. not yet in Debian stable) provide plugins to do it.
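For the record, the dovecot-native approach would be its replication plugin on top of dsync; per the dovecot documentation it looks roughly like the sketch below (untested here, and not adapted to this per-system-user setup, so take it as a pointer only):

# rough sketch of dovecot-native replication, not what this article uses
# (vmail and the host name are placeholders)
mail_plugins = $mail_plugins notify replication
plugin {
  mail_replica = remote:vmail@secondary.domain.net
}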

You may as well install unison on both servers and synchronize things like ~/.procmailrc, /etc/aliases or whatever, for instance:

8 */2 * * *	user	unison -batch -auto -silent -log=false ~/.procmailrc ssh://secondary.domain.net//home/user/.procmailrc 2> /dev/null

Once you have checked that you can properly log in on both IMAPs servers, it’s just a matter of configuring your mail clients.

and many mail clients:

I use the roundcube webmail whenever I have no access to a decent system with a proper mail client (kmail, gnus, etc) configured. With two IMAPs servers, there’s no reason not to have the same webmail setup on both.

The only annoying thing is not having a common address book. It’s possible to replicate the roundcube database, but it’s even better to have a cloud to share the address book with any client, instead of doing some roundcube-specific crap. So I went for the option of installing ownCloud on one of the hosts (so far I haven’t decided whether there is a point in replicating the cloud as well; it seems a bit overkill to replicate data that is already some sort of backup or replica), which was pretty straightforward since I already have nginx and php-fcgi running. And then it was just a matter of plugging roundcube into ownCloud through CardDAV.

Once done, you may just want to also plug your ownCloud calendar/addressbook into KDE etc, so all your mail clients will share the same address book (yeah!). Completely unrelated: adding mozilla_sync to your ownCloud is worth it too.

The only thing missing so far is the replication of your own identities – I haven’t found anything clear about that but haven’t looked into it seriously. I guess it’s possible to put ~/.kde/share/config/emailidentities on the cloud or use it to extract identity vcards, but I’m not sure a dirty hack is worth it. It’s a pity that identities are not part of the addressbook.

(The alternative I was contemplating before was to use kolab; I needed ownCloud for other matters so I went for this option but I keep kolab in mind nonetheless)

Update 1: Stop using dsync, which is tremendously unreliable as of today; use isync instead.

Update 2: Switch to mbsync, since isync was a wrapper that the mbsync author recommends not to use anymore.

Update 3: Switch to offlineimap, because I do not understand mbsync’s behavior (ignoring INBOX, etc); I cannot find a way to configure it so it works.