
Managing an SSH public key ring with git

Using ssh-updatekeys, you can set up and maintain ~/.ssh/authorized_keys with specific sets on the fly.

You just have to put your public keys in a public git repository. The script will fetch the keys, either over git + SSH (for write access) or git + HTTPS (for read access).

It can handle different sets of keys (for instance, you may want to differentiate keys with and without passphrases). In the git repository, any directory whose name starts with set (set0, setA, setTest, etc.) will be treated as a set.
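The gist of it can be sketched like this (a minimal illustration, not the actual ssh-updatekeys.sh; the repository URL and set name in the usage comment are placeholders):

```shell
# Minimal sketch of the idea: fetch the keys repository and install
# one set into ~/.ssh/authorized_keys.
update_keys() {
    repo="$1" set="$2"
    tmp=$(mktemp -d) || return 1
    git clone --quiet --depth 1 "$repo" "$tmp/keys" || return 1
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    # concatenate every public key of the requested set
    cat "$tmp/keys/$set"/*.pub > ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    rm -rf "$tmp"
}

# usage (placeholder URL and set name):
# update_keys https://example.org/me/pubkeys.git set0
```

The real script does more (HTTPS/SSH selection, merging sets, sanity checks), but the fetch-then-concatenate idea is the same.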

(ssh-updatekeys.sh is part of my -utils package).

Importing CardDav (ownCloud) contacts into (SquirrelMail) .abook

I’m still using SquirrelMail, even though it looks a bit old. It’s robust and just works – and when I’m using a webmail, that’s mandatory.

SquirrelMail does not use CardDav but some sort of .abook format (which I hope is the same abook format that mutt uses).
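For reference, the .abook file is plain text with pipe-separated fields; assuming the usual field order (nickname, first name, last name, e-mail, extra info), an entry looks like:

```
john|John|Doe|john.doe@example.org|friend from work
```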

I just wrote carddav2abook.pl, a wrapper that requires a ~/.carddav2abookrc containing the following:

carddav = https://HOST/remote.php/carddav/addressbooks/USER/contacts_shared_by_USER?export
user = USER
password = PASSWORD
abook = /var/lib/squirrelmail/data/USER.abook
wget_args = --no-check-certificate


As you may have noticed, I’m using a specific export account that has been given only read access to this file. Otherwise the CardDav URL would not include the _shared_by_USER part.

I configured it to write the .abook directly into SquirrelMail’s data directory. Obviously, that means you need to adjust read/write access for the relevant user (or use www-data, but I would not recommend storing a password in an rcfile readable by this user).

Once it works, just set up a cronjob (with 2>/dev/null, since the Perl vCard module tends to print some garbage).
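The crontab entry itself could look like this (the script path is an assumption; adjust to wherever you installed it):

```
# refresh the SquirrelMail abook every hour, silencing vCard noise
0 * * * *  /usr/local/bin/carddav2abook.pl 2>/dev/null
```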

(carddav2abook.pl is part of my -utils-webmail package).

Fixing privileges to mount USB Keys with polkit

After switching to Devuan, Dolphin suddenly started complaining that it could not mount a USB key. Since Devuan went back to polkit to get rid of systemd (though polkit is probably part of the problem too), the fix is to add a /etc/polkit-1/localauthority/50-local.d/auto-mount.pkla with:

[Allow Automount]
Identity=unix-group:plugdev
Action=org.freedesktop.udisks2.filesystem-mount
ResultAny=yes
ResultInactive=yes
ResultActive=yes

And make sure users belong to plugdev.
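A quick way to check membership from a script (alice is a placeholder; actually adding the user requires root and only takes effect at the next login):

```shell
# Check that a given user belongs to plugdev.
in_plugdev() {
    id -nG "$1" 2>/dev/null | grep -qw plugdev
}

# if not, add it as root: adduser alice plugdev
in_plugdev alice || echo "alice is not in plugdev"
```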

Providing different DNS records (spoofed or not) depending on the client with Bind9

I made some major changes to my local server’s Bind9 setup. At first, I was caching apt and Steam depots on this server using dnsspoof from dsniff. After a few upgrades, dnsspoof stopped doing anything: it was up, on the proper device, noticing requests relevant to be spoofed, but the end clients were still getting the real DNS records, not the spoofed ones.

So, I eventually decided to use Bind9 directly, already running as a caching server, to do the spoofing.

Good, except that nginx, running on the same server as Bind9, then required a resolver other than Bind9 in order to get the real DNS records, since Bind9 was replying spoofed crap.

Bind9 is fully featured, and I found that the easiest way to get it to give tailored replies depending on the client is to use views. But using views implies that every zone is included in a view. You cannot just add a “view” for a given purpose and leave the rest of your setup untouched.
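A views setup can be sketched as follows (a hand-written illustration, not the configuration shipped in my packages; the ACL and zone names are examples). Views are matched in order, so the server itself must hit a non-spoofed view before LAN clients hit the spoofed one:

```
// /etc/bind/named.conf (sketch)
acl "lan" { 192.168.1.0/24; };

view "real" {
    // nginx and anything else on the server gets genuine records
    match-clients { localhost; };
    recursion yes;
};

view "spoofed" {
    match-clients { "lan"; };
    recursion yes;
    // override only the zones being cached
    zone "packages.devuan.org" {
        type master;
        file "/etc/bind/db.packages.devuan.org";  // A record -> cache server
    };
};
```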

A setup that should work more or less out of the box is provided with my packages -utils-cache-steam and -utils-cache-apt.

Using this, you must edit your /etc/bind/named.conf so it no longer directly includes zone definition files but instead includes /etc/bind/named.conf.views, which in turn will include the relevant zones. Clients are set in /etc/bind/named.conf.acl; by default, 192.168.1.1, 10.0.0.1 and 10.0.1.0 are handled as the server host (the two latter being used in my silent low energy consumption home server proposed setup).

It includes /etc/bind/named.conf.cache.sh, which will (re-)generate the zone definition files (named.conf.cache…) depending on the services you are actually caching.

This could probably be improved (it is annoying to juggle the differences between 192.168, 10.0.0 and 10.0.1…) but it works fine. You can test it by pinging packages.devuan.org from the server (loopback) and from any client.

Moving away from systemd with Devuan

A while ago, I was encouraging people to give systemd a try. Well, now I know better and have decided to get away from this tool that clearly wants to replace many parts of my system at once. There are many articles about systemd, why it’s good and why it’s not. I kept an open mind on the topic and tried systemd on many boxes. Some stuff just stopped working, or did not work as I expected it to. Maybe I’m clueless, but I’m not alone. The point is that with systemd, I’m able to do less and it takes me more time.

Now Devuan is installable, so I have already installed it on two of my boxes. So far, no problem. I wonder how Devuan will cope with bug reports and the like in the long run.

The process is as simple as:

wget http://packages.devuan.org/devuan/pool/main/d/devuan-baseconf/devuan-baseconf_0.6.4%2bdevuan3_all.deb
# select ascii (= testing)
dpkg -i devuan-baseconf_0.6.4+devuan3_all.deb
cd /etc/apt/sources.list.d/
# comment out the Debian sources
nano sources.list
apt-get install devuan-keyring
apt-get update && apt-get upgrade
apt-get --purge remove systemd systemd-shim
dpkg --list | grep systemd
apt-get --purge remove libsystemd-journal0 libsystemd-login0
apt-get --purge autoremove
debfoster

Sharing uid to cope with inconsistent user and group names

One day you set up some service, for instance this spam slayer setup. Later, a situation arises where the distribution package uses one user account (for instance debian-spamd) while you set things up to use another one (for instance Debian-exim).

The easiest fix is to give them a common uid:

usermod --non-unique --uid 101 debian-spamd

(101 being the user id of the Debian-exim account).

Obviously, it might be better to review the setup so it really uses two separate accounts. But it’s up to you to decide whether it’s worth changing.

You may also need to add --gid.

Removing invalid/incomplete multibyte or wide characters in filenames

Restoring an old backup from the pre-UTF-8 era, I found out many files had names with characters that were unreadable, or only partly readable depending on the software.

I tried my urlize script first, but unac (which it depends on) failed with:

unac_string: Invalid or incomplete multibyte or wide character

The easiest way to get rid of these is simply to use iconv, for instance, in a directory containing such files:

for file in *; do mv "$file" "$(echo "$file" | iconv -f utf8 -c -t ascii//IGNORE)"; done
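A slightly more defensive variant (my own sketch, not part of urlize) skips names that are already clean, refuses empty results, and never clobbers an existing file:

```shell
# Rename every file in the current directory to an ASCII-only name.
fix_names() {
    for file in *; do
        new=$(printf '%s\n' "$file" | iconv -f utf8 -c -t ascii//IGNORE)
        [ "$new" = "$file" ] && continue  # already clean, nothing to do
        [ -n "$new" ] || continue        # refuse an empty target name
        [ -e "$new" ] || mv -- "$file" "$new"
    done
}
```

Run it from inside the directory holding the problematic files.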
