Revision control and distribution of home configuration files with Bash and git

For years, I managed different copies of home configuration files across different hosts with some revision control; but however better modern systems like git are compared to old CVS, it would still be quite impractical to put your whole home directory in one single repository:

  • for obvious reasons, there are only a few files that you can actually move around carelessly and put on GitLab, for instance; but these files are actually nice to have there, so you can retrieve them whenever and wherever you want;
  • even if you could/would make the rest of your home directory public, most configuration files cannot adjust to each host they are run on; you can obviously adjust a ~/.bashrc according to $HOSTNAME, but it gets a bit more annoying for, say, ~/.Xdefaults.

I am quite sure most people using many different hosts all have their own way to deal with that. There are too many use cases for one solution to be practical for everybody.

I already made public a small script to distribute SSH public keys, which I had been using for quite a while before. Below is the script I am using now to distribute home configuration files among hosts: it needs to be added to a git repository (in my case, a GitLab “rc” repository); from there, based on a pre-decided list of files or directories, it will:

  • keep a copy of each file/directory per hostname (e.g. bashrc.$HOSTNAME, config/awesome.$HOSTNAME);
  • allow a default to be set for such a file/directory by renaming $item.$HOSTNAME to $item.default (e.g. bashrc.default).

It obeys the following general rules (roughly sketched in Bash after the list):

  • it won’t copy symlinks but their content;
  • if we only have a local file, save it in the repository;
  • if we have a local file and a repository copy, and if there is a difference, update the repository;
  • if we only have a repository copy and no local file, create the local file with a warning.
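
For illustration, here is a rough Bash sketch of that per-item logic (update_item is a hypothetical helper, not the actual update.sh, which does more checks and also handles $item.default):

# sketch: synchronize one item between $HOME and its repository copy for this host,
# to be run from within the clone of the “rc” repository
update_item() {
    local item="$1"
    local local_copy="$HOME/.$item"
    local repo_copy="$item.$HOSTNAME"

    if [ -e "$local_copy" ] && [ ! -e "$repo_copy" ]; then
        # only a local file: save it in the repository (copy symlink content, not the link)
        cp -rL "$local_copy" "$repo_copy"
    elif [ -e "$local_copy" ] && [ -e "$repo_copy" ]; then
        # both exist: refresh the repository copy only if they differ
        if ! diff -rq "$local_copy" "$repo_copy" >/dev/null; then
            rm -rf "$repo_copy" && cp -rL "$local_copy" "$repo_copy"
        fi
    elif [ -e "$repo_copy" ]; then
        # only a repository copy, no local file: create the local file with a warning
        echo "warning: creating $local_copy from the repository copy" >&2
        cp -r "$repo_copy" "$local_copy"
    fi
}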

Regarding $item.default:

  • $item.default will be used only if no $item.$HOSTNAME exists;
  • $item.default will never be updated automatically: if the local copy based on the default is modified, then a $item.$HOSTNAME will be created instead; if it is to be made the default, you’ll need to rename $item.$HOSTNAME to $item.default; alternatively, you could edit $item.default first and remove the local file at once;
  • similarly, $item.default will never overwrite a local file: to use it on other hosts after an update, the local file will need to be removed.

I admit this $item.default handling is a bit cumbersome, but updating these files presents risks (lockout, security, etc.).
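
For instance, to promote a host-specific bashrc to the shared default, rename it within the repository and let the other hosts pick it up; a sketch of the workflow:

# in the clone of the “rc” repository
git mv bashrc.$HOSTNAME bashrc.default
git commit -m "make this host's bashrc the default"
git push
# on each other host that should now follow the default: remove the local file
# (and any bashrc.$HOSTNAME of that host in the repository, since it takes precedence)
# so the next update.sh run recreates ~/.bashrc from bashrc.default
rm ~/.bashrc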

If updaterc exists in the same directory, it will be sourced. It is a convenient way to change the $ITEMS variable without editing the script itself.

To use it, you just need to set up and clone some git repository and, within this repository:

wget https://gitlab.com/yeupou/rc/raw/master/update.sh
chmod +x update.sh
# optionally, create a custom list of items
echo 'ITEMS="bashrc config/awesome"' > updaterc
# run
./update.sh

The task can be automated with a cron job; add the following via crontab -e:

3 12 * * * ~/.rc/update.sh >/dev/null 2>/dev/null

(side note: that won’t work properly if one of your hosts is named “default”)

Using networked filesystems hosted by LXC containers with Samba

For more than a decade, I used NFS on my home server to share files. I did not consider using Samba for anything but providing Windows access to shares. NFSv3, then NFSv4, suited me, allowing a per-host/IP write access policy. The main drawback was the very crude handling of NFS server downtime: X sessions would end up half-frozen, requiring a restart to be usable once again.

However, I recently moved my servers to LXC (which I’ll probably document a bit later) and the NFS server on Debian, as you can guess from the nfs-kernel-server package’s name, is kernel-based: not only does having a server tied to the kernel within a container apparently defeat the purpose of LXC containers, but it does not seem to work reliably either. I managed to get it running, but it had to run on both the master host and within the container. Even then, depending on which started first, the shares could be unavailable to hosts.

I checked a few articles over the web (https://superuser.com/questions/515080/alternative-to-nfs-or-better-configuration-instable-network-simple-to-set-up, http://serverfault.com/questions/372151/nas-performance-nfs-vs-samba-vs-glusterfs, etc.) and it looks like, as of today, you can expect performance from Samba comparable to NFS. That could possibly be proven wrong if I were using NFS heavily, writing a lot through networked file systems, opening a large number of files simultaneously or moving big files around a lot, but I have really simple requirements: no latency when browsing directories, no latency when playing 720p/1080p videos, and that’s about it.

I already had a restricted write-access directory per user, via Samba, but I use it only on lame systems as a temporary area: on proper systems, I use SSH/scp/rsync/git to manipulate/save files.

Dropping NFS, I now have quite a simple setup; here are the relevant parts of my /etc/samba/smb.conf:

[global]

## Browsing/Identification ###

# What naming service and in what order should we use to resolve host names
# to IP addresses
 name resolve order = lmhosts host wins bcast


#### Networking ####

# The specific set of interfaces / networks to bind to
# This can be either the interface name or an IP address/netmask;
# interface names are normally preferred
 interfaces = eth0

# Only bind to the named interfaces and/or networks; you must use the
# 'interfaces' option above to use this.
# It is recommended that you enable this feature if your Samba machine is
# not protected by a firewall or is a firewall itself. However, this
# option cannot handle dynamic or non-broadcast interfaces correctly.
 bind interfaces only = true


#### File names ####

# do not mangle file names (e.g. those with characters forbidden on Windows)
mangled names = no

# charsets
dos charset = iso8859-15
unix charset = UTF8


####### Authentication #######

# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
 security = user

# Private network
 hosts allow = 192.168.1.


# You may wish to use password encryption. See the section on
# 'encrypt passwords' in the smb.conf(5) manpage before enabling.
 encrypt passwords = true

# If you are using encrypted passwords, Samba will need to know what
# password database type you are using. 
 passdb backend = tdbsam

obey pam restrictions = yes

guest account = nobody
 invalid users = root bin daemon adm sync shutdown halt mail news uucp operator www-data sshd Debian-exim debian-transmission
 map to guest = bad user

# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
 unix password sync = yes


#======================= Share Definitions =======================


realm = ...


[commun]
comment = Commun
path = /srv/common
browseable = yes
writable = yes
public = yes
guest ok = yes
valid users = @smbusers
force group = smbusers
create mode = 0660
directory mode = 0770
force create mode = 0660
force directory mode = 0770

[tmpthisuser]
comment = Données protégées
path = /srv/users/thisuser
browseable = yes
writable = yes
public = yes
valid users = thisuser
create mode = 0600
directory mode = 0700
force create mode = 0600
force directory mode = 0700
guest ok = no


I installed the package libpam-smbpass and edited /etc/pam.d/samba as follows:

@include common-auth
@include common-account
@include common-session-noninteractive
@include common-password

For this setup to work, every user allowed to connect needs:

  • to be a member of the group smbusers – including nobody (or whatever the guest account is);
  • to have a Unix password set;
  • to be known to Samba (smbpasswd -e thisuser, or option -a to add).

If you are not interested in per-user restricted areas, only the nobody account needs to be taken care of.
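
For example, assuming a user named thisuser, the whole setup boils down to commands like these (Debian-style tools; adapt names to your system):

# create the group if it does not exist yet, then add members (guest account included)
addgroup smbusers
adduser thisuser smbusers
adduser nobody smbusers
# make sure the user has a Unix password
passwd thisuser
# make the user known to Samba: -a adds the account, -e enables it
smbpasswd -a thisuser
smbpasswd -e thisuser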

And, obviously, file and directory ownership and modes must be set accordingly:

cd /srv/common
# (0770/drwxrwx---) GID : (nnnnn/smbusers)
find . -type d -print0 | xargs -0 chmod 770 -v
find . -type f -print0 | xargs -0 chmod 660 -v
cd /srv/users
# (0700/drwx------) UID : ( nnnn/ thisuser) GID : ( nnnn/ thisuser)
find . -type d -print0 | xargs -0 chmod 700 -v
find . -type f -print0 | xargs -0 chmod 600 -v
# main directories, in addition, need the setgid bit so future directories get proper modes
chmod 2770 /srv/common/*
chmod 2700 /srv/users/*

To access this transparently from GNU/Linux systems, just add to /etc/fstab:

//servername/commun /mountpoint cifs guest,uid=nobody,gid=users,iocharset=utf8 0 0

This assumes that any user entitled to access the files belongs to the users group. If not, update accordingly.
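
For the per-user share, guest access will obviously not do; a sketch of a possible /etc/fstab entry, assuming a hypothetical credentials file /etc/samba/cred-thisuser (username=/password= lines, readable by root only):

//servername/tmpthisuser /mountpoint cifs credentials=/etc/samba/cred-thisuser,uid=thisuser,iocharset=utf8 0 0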

With this setup, there is no longer any IP-based specific write access set but, over the years, I found out it was quite useless for my setup.

The only issue I have is with file names containing a colon (“:”). Due to MS Windows limitations, CIFS lists these files but accessing them is impossible. The easiest fix I found was to actually rename these files (not a problem due to the nature of the files served) through a cron job, /etc/cron.hourly/uncolon:

#!/bin/bash
# a permanent cifs based fix would be welcomed
find "/srv" -name '*:*' -exec rename 's/://g' {} +

but I’d be interested in better options.


Sharing graphs of multiple Munin (master) instances

Munin is a convenient monitoring tool. Even if it is getting old, it is easy to set up and to extend with custom scripts.

It works with the notion of a master munin process that grabs data from nodes (the devices within the network), stores it in Round-Robin Databases (RRD) and processes the data to generate static images and HTML pages. This sequence is split into several scripts: munin-update, munin-limits, munin-graph and munin-html.

It’s fine (overkill?) for a small local network, despite the fact that RRD is a bit I/O-consuming, to the point it may require using a caching daemon like rrdcached.

It’s a different story if you want to monitor several small networks connected through the internet at once. Why would you? First because it might be convenient to get graphs from different networks side by side. Also because if one network disappears from the internet, data from munin might actually be meaningful, provided you can still access it.


The problem is that munin updates are synchronous: any disconnect between master and node would cause the data to be inconsistent. It leads to many issues that munin-async can help with. But even if you are able to use munin-async, one of your servers will lack a munin master: the setup will work only when both are up.

So I’m actually much more interested in having one master munin process per network.

How to achieve that? It is not an option to share RRD files via NFS over the web. I’m also not a fan of the notion of having both master munin processes read through all the RRD files and generate graphs in parallel, re-generating exactly the same data with no value added.

I went for an alternative approach with a modified version of the munin-mergedb.pl script. We do not merge RRD trees. We simply synchronize the db files to merge and the generated graphs. So if there are graphs from another munin master process to include in the HTML output, they’ll be there. But each munin master process will go undisturbed by any other process’s unavailability and won’t have more RRDs to process or more graphs to produce.

Graphs and db files replication:

On both (master munin process) hosts, you need a user dedicated to replication, here called SYNCUSER, member of the munin group:

adduser SYNCUSER munin

This user needs SSH access from one host to the other (private/public key sharing, whatever).
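
A possible way to set that up, assuming SYNCUSER and DISTANTHOST as placeholders (key type and options to taste):

# as SYNCUSER, on the host that will initiate the connections
ssh-keygen -t ed25519
ssh-copy-id SYNCUSER@DISTANTHOST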

Directories setup:

mkdir -p /var/lib/munin-mergedb/
chown munin:munin -R /var/lib/munin-mergedb/
# the +s is very important so directory group ownership is preserved
chmod g+rws -R /var/lib/munin-mergedb/
chmod g+rws /var/lib/munin/
chmod g+rws -R /var/www/html/munin/

On one host (the one allowed to connect through SSH), synchronize the HTML files two-way with unison:

su - SYNCUSER --shell=/bin/bash

DISTANT_HOST=DISTANTHOST
DISTANT_PORT=22
LOCAL_HTML=/var/www/html/munin/DOMAIN
DISTANT_HTML=/var/www/html/munin/DOMAIN

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST


# step one, get directories
unison -batch -auto -ignore="Name *.html" -ignore="Name *.png" "$LOCAL_HTML" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML"
# step two, get directories img content 
cd "$LOCAL_HTML" && for DIR in *; do [ -d "$DIR" ] && unison -batch -auto -ignore="Name *.html" "$LOCAL_HTML/$DIR" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML/$DIR"; done

On the same host, synchronize the database files one-way with rsync:

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST

# push our db (one way action, easier with rsync)
rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/"
# get theirs (one way action, easier with rsync)
rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$DISTANT_HOST:$LOCAL_DB/" "$LOCAL_DISTANT_DB/"

If it works fine, set up /etc/cron.d/munin-sync:

# supposed to assist munin-mergedb.pl

DISTANT_HOST=DISTANTHOST
DISTANT_PORT=22

LOCAL_HTML=/var/www/html/munin/DOMAIN
DISTANT_HTML=/var/www/html/munin/DOMAIN

LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THISHOST
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANTHOST

# m h dom mon dow user command
# every 5 hours, update the directory list
01 */5 * * *  SYNCUSER unison -batch -auto -silent -log=false -ignore="Name *.html" -ignore="Name *.png" "$LOCAL_HTML" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML" 2>/dev/null

#  update content twice per hour
*/28 * * * *  SYNCUSER cd "$LOCAL_HTML" && for DIR in *; do [ -d "$DIR" ] && unison -batch -auto -silent -log=false -ignore="Name *.html" "$LOCAL_HTML/$DIR" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML/$DIR" 2>/dev/null; done && rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/" 2>/dev/null && rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$DISTANT_HOST:$LOCAL_DB/" "$LOCAL_DISTANT_DB/" 2>/dev/null

Updated scripts:

Once the data is there, you will need the munin-mergedb.pl script to handle it; use a munin-cron script like my munin-cron-plus.pl instead of munin-cron so it actually calls munin-mergedb.pl. Plus, you’ll need a fixed version of munin-graph so --host arguments are not blatantly ignored (lacking RRDs, it would fail to actually write graphs for the distant munin master process, but it would nonetheless delete the existing graphs).

(Where these files go depends on your munin installation packaging. I have the munin processes in /usr/local/share/munin and munin-cron-plus.pl in /usr/local/bin – it reflects the fact that the original similar files are either in /usr/share/munin or /usr/bin. Beware: if you change the name of any munin process, update the log rotation files, otherwise you may easily fill up a disk drive, since it is kind of noisy, especially when issues arise.)

For convenience, you can download these with my -utils-munin Debian/Devuan packages:

wget apt.rien.pl/stalag13-keyring.deb
dpkg -i stalag13-keyring.deb
apt-get update
apt-get install stalag13-utils-munin

Once everything is set up, you can test/debug it by typing:

su - munin --shell=/bin/bash

/usr/local/bin/munin-cron-plus.pl

What next?

Actually, I’d welcome improvements to munin-cron-plus.pl since it extracts --host information in the most barbaric way. I am sure it could be done cleanly using Munin::Master::Config or the like.

Then I’d welcome any insight about why munin-graph’s --host option does not work the way I’d like. Maybe I misunderstand its exact purpose. The help reads:

 --host <host>  Limit graphed hosts to <host>. Multiple --host options
                may be supplied.

To me, it really means that it should not do anything at all to any file of a host excluded this way. If it means something else, maybe this should be explained.

Fixing black screen during boot caused by LVDS-panel presence assumption by GMA 3650 drivers

On an Intel DN2800MT-based system, which comes with the Graphics Media Accelerator 3650 integrated graphics, the screen turns black/off during the boot process, exactly when the system switches to the framebuffer, if a VGA screen is connected (no problem so far with HDMI).

Passing nomodeset or any similar option is of no help.

You could not guess it: apparently the GMA 3600 kernel DRM driver always assumes there is an LVDS panel, as there would be on a laptop but probably not on a home server, and defaults to a 1920×1080 panel.

So you need to add to the grub kernel line:

video=LVDS-1:d

Or, in /etc/default/grub :

GRUB_CMDLINE_LINUX_DEFAULT="quiet video=LVDS-1:d"

And run update-grub afterwards.
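
To sum it up (a sketch, assuming a stock Debian-style grub setup):

# make sure /etc/default/grub contains:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet video=LVDS-1:d"
update-grub
# after reboot, the option should show up in the kernel command line
cat /proc/cmdline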

Avoiding GPG issues while submitting to popularity-contest on Devuan

For some reason, on Devuan, popularity-contest submissions fail with:

gpg: 4383FF7B81EEE66F: skipped: public key not found
gpg: /var/log/popularity-contest.new: encryption failed: public key not found

The Debian Popularity Contest being described as an attempt to map the usage of Debian packages, I think it useful that it also gets stats from disgruntled Debian users forced to use a fork of the same general scope.

I do not think the data transmitted in this context is really sensitive. So the simplest hack is just to turn off encryption by adding to /etc/popularity-contest.conf:

ENCRYPT="no"
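
To check that submissions now go through, you can run the job by hand (the path below assumes the cron-based setup shipped by the popularity-contest package):

/etc/cron.daily/popularity-contest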

Preventing file names with colons (“:”) from being garbled by Samba

In some cases, Samba garbles file names, for backward compatibility with old Microsoft Windows systems that cannot handle long file names or file names with specific characters. The files are then shown in the form XXXXX~X.ext.

You can switch off this mechanism:

In /etc/samba/smb.conf, in [global], add:

mangle case = no
mangled names = no

Then simply restart Samba (invoke-rc.d samba restart).
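
Before restarting, the configuration can be sanity-checked with testparm, shipped with Samba:

# parse smb.conf, report errors and dump the effective configuration
testparm -s
invoke-rc.d samba restart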

Files will then be listed with their real names. I am not sure, however, that Microsoft Windows will allow you to open them.