Converting video files to H.265/HEVC, no washed out colors and all streams mapped, with ffmpeg

I had a few 1080p video files using the AV1 codec. Not sure why, whether it is a player issue or a hardware issue, but my (slightly aging) laptop was having a hard time playing these, while playing H.265/HEVC 2160p videos with no effort at all. The following command converts all mkv files in a folder to H.265/HEVC, without washing out colors and keeping all streams:

for av in *.mkv; do ffmpeg -i "$av" -c:v libx265 -color_trc smpte2084 -color_primaries bt2020 -crf 22 -map 0 -map_metadata 0 -map_chapters 0 -c:a copy -c:s copy "${av%.*}"-x265.mkv ; done
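For the output name, `${av%.*}` is plain shell parameter expansion; a quick illustration:

```shell
# ${av%.*} strips the shortest trailing ".*" (the extension)
# before the "-x265.mkv" suffix is appended
av="XX1 AV1 Opus [AV1D].mkv"
echo "${av%.*}-x265.mkv"
# prints: XX1 AV1 Opus [AV1D]-x265.mkv
```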

In my case, it results in files of comparable size (sometimes slightly larger) but, and that was the point, these play on the laptop with no noticeable CPU usage:

1,2G 'XX1 AV1 Opus [AV1D].mkv'
1,1G 'XX1 AV1 Opus [AV1D]-x265.mkv'
830M 'XX2 AV1 Opus [AV1D].mkv'
951M 'XX2 AV1 Opus [AV1D]-x265.mkv'

Generating static galleries with nanogallery2

As written earlier, I now use nanogallery2 to share pictures, due to Nextcloud's removal of the earlier Gallery app and the fact that the replacement app, called Photos, does not match my needs.

The gallery is copied directly from the cloud LXC container, with rsync and ssh, to another server that only serves static files via HTTPS – with access restricted through the usual basic authentication. It means that I am not giving the HTTPS server any access to the cloud infrastructure.

nanogallery2 has associated “providers” to easily set up a gallery (directly from a local directory or distant storage). But that implies on-the-fly index and thumbnail generation by a PHP CGI server, while I prefer the server to serve only static files.

So I made a small Perl script to build a single index plus thumbnails, using subdirectories as distinct galleries:

# Copyright (c) 2020 Mathieu Roy <>
#   This program is free software; you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation; either version 2 of the License, or
#   (at your option) any later version.
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#   You should have received a copy of the GNU General Public License
#   along with this program; if not, write to the Free Software
#   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307
#   USA

use strict "vars";
use File::Find;
use File::Basename;
use Image::ExifTool;
use Image::Magick;
use Fcntl ':flock';
use CGI qw(:standard escapeHTML);

my $debug = 1;
my $path = $ARGV[0];
my $topdirstring = "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ";   # hopefully no such directory exists
my @topimages;
my %subdirsimages;
my %comment;
my %gps;
my %model;
my %focalength;
my %flash;
my %exposure;
my %iso;
my $images_types = "png|gif|jpg|jpeg";
my @mandatory = ("");      # URLs of the nanogallery2 files to fetch

my @mandatoryfont = ("");  # URLs of the font files to fetch

# silently forbid concurrent runs
open(LOCK, "< $0") or die "Failed to ask lock. Exit";
flock(LOCK, LOCK_EX | LOCK_NB) or exit;

sub thumbnail_dimensions {
    my ($x, $y) = @_;
    my $ratio = $x / $y;
    my $width = 200;
    my $height = int($width / $ratio);
    return $width, $height;
}

sub thumbnail_name {
    my ($name, $path) = fileparse($_[0]);
    return "$path.thumbnail.$name";
}
sub wanted {
    # skip directories and non-images in general
    return if -d $_;
    return unless $_ =~ /\.($images_types)$/i;
    # skip thumbnails
    return if $_ =~ /^\.thumbnail\./i;

    # create thumbnail if missing
    unless (-e thumbnail_name($File::Find::name)) {
	my ($image, $error, $x, $y, $size, $format, $width, $height);
	$image = Image::Magick->new;
	$error = $image->Read($File::Find::name);
	if (!$error) {
	    ($x, $y, $size, $format) = $image->Ping($File::Find::name);
	    ($width, $height) = thumbnail_dimensions($x, $y);
	    $image->Scale(width=>$width, height=>$height);
	    $error = $image->Write(thumbnail_name($File::Find::name));
	}
    }

    # identify URL, path removed
    my $url = substr($File::Find::name, length($path)+1);

    # name will be based on file creation (using comment would require utf8 checks)
    # according to exif data, not real filesystem info
    my $exifTool = new Image::ExifTool;
    my %exifTool_options = (DateFormat => '%x');
    my $info = $exifTool->ImageInfo($File::Find::name, ("CreateDate","GPSLatitude","GPSLongitude","GPSDateTime", "Make", "Model", "Software", "FocalLength", "Flash", "Exposure", "ISO"), \%exifTool_options);
    $comment{$url} = $info->{"CreateDate"};
    $comment{$url} = $info->{"GPSDateTime"} if $info->{"GPSDateTime"};
    if ($info->{"GPSLatitude"} and $info->{"GPSLongitude"}) {
	$gps{$url} = $info->{"GPSLatitude"}." ".$info->{"GPSLongitude"};
	$gps{$url} =~ s/\sdeg/°/g;
    }

    $model{$url} = $info->{"Make"}." " if $info->{"Make"};
    $model{$url} .= $info->{"Model"}." " if $info->{"Model"};
    $model{$url} .= "(".$info->{"Software"}.")" if $info->{"Software"};

    $focalength{$url} = $info->{"FocalLength"} if $info->{"FocalLength"};
    $flash{$url} = $info->{"Flash"} if $info->{"Flash"};
    $exposure{$url} = $info->{"Exposure"} if $info->{"Exposure"};
    $iso{$url} = $info->{"ISO"} if $info->{"ISO"};

    # top directory image
    if ($File::Find::dir eq $path) {
	push(@{$subdirsimages{$topdirstring}}, $url);
	return;
    }

    # other images
    # identify subdir by removing the path and keeping only the resulting top directory
    my ($subdir,) = split("/", substr($File::Find::dir, length($path)+1));
    push(@{$subdirsimages{$subdir}}, $url);
}

### RUN

chdir($path) or die "Unable to enter $path, exiting";

# list images in top dir and subdirectories
find(\&wanted, $path);

# create the output
# first check if mandatory files are there
foreach my $file (@mandatory) {
    next if -e "$path/".basename($file);
    system("/usr/bin/wget", $file);
}
mkdir("$path/font") unless -e "$path/font";
foreach my $file (@mandatoryfont) {
    next if -e "$path/font/".basename($file);
    system("/usr/bin/wget", "-P", "$path/font", $file);
}
# create index
open(INDEX, "> $path/index.html");
print INDEX '<!DOCTYPE html
	PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
  <head>
    <meta name="generator" content="" />
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <!-- jQuery -->
    <script type="text/javascript"  src=""></script>
    <!-- nanogallery2 -->
    <link href="nanogallery2.min.css" rel="stylesheet" type="text/css" />
    <script  type="text/javascript" src="jquery.nanogallery2.min.js"></script>
    <!-- other -->
    <link type="text/css" rel="stylesheet" href="style.css" />
  </head>
  <body>
';

my $galleriescount = 0;

for (reverse(sort(keys(%subdirsimages)))) {
    # top dir gallery has no specific ID or title
    if ($galleriescount > 0) {
	print INDEX '    <h1>'.$_."</h1>\n";
	print INDEX '    <div id="nanogallery2'.$_;
    } else {
	print INDEX '    <div id="nanogallery2';
    }

    # add gallery setup
    print INDEX '" data-nanogallery2 = \'{
      "viewerToolbar":   {
        "display":    true,
        "standard":   "label, infoButton",
        "minimized":  "label, infoButton"
      },
      "viewerTools":     {
        "topLeft":    "pageCounter, playPauseButton",
        "topRight":   "downloadButton, fullscreenButton, closeButton"
      },
      "thumbnailWidth": "auto",
      "galleryDisplayMode": "pagination",
      "galleryMaxRows": 3,
      "thumbnailHoverEffect2": "borderLighter"
    }\'>'."\n";
    # add image list
    for (sort(@{$subdirsimages{$_}})) {
	my $extra;
	$extra .= ' data-ngexiflocation="'.escapeHTML($gps{$_}).'"' if $gps{$_};
	$extra .= ' data-ngexifmodel="'.escapeHTML($model{$_}).'"' if $model{$_};
	$extra .= ' data-ngexiffocallength="'.escapeHTML($focalength{$_}).'"' if $focalength{$_};
	$extra .= ' data-ngexifflash="'.escapeHTML($flash{$_}).'"' if $flash{$_};
	$extra .= ' data-ngexifexposure="'.escapeHTML($exposure{$_}).'"' if $exposure{$_};
	$extra .= ' data-ngexifiso="'.escapeHTML($iso{$_}).'"' if $iso{$_};

	# if no comment set, check if the filename might be YYMMDD_
	$comment{$_} = "$3/$2/20$1" if (!$comment{$_} and /(\d{2})(\d{2})(\d{2})_[^\\]*$/);
	print INDEX '      <a href="'.$_.'" data-ngThumb="'.thumbnail_name($_).'"'.$extra.'>'.escapeHTML($comment{$_})."</a>\n";
    }
    print INDEX "    </div>\n";
    $galleriescount++;
}

# finish index
print INDEX '  </body>
</html>
';
close(INDEX);

# EOF

To run it:

chmod +x /usr/local/bin/
/usr/local/bin/ /var/www/html/pictures/

I have the following crontab for the dedicated user:

*/15 * * * *     export LC_ALL=fr_FR.UTF-8 && /usr/local/bin/ /var/www/html/pictures/thistopic

Fetching mails with lieer/notmuch instead of fetchmail

Google is gradually making traditional IMAPS access impossible, in its usual opaque way. It is claimed to be a security plus though, if your data is already there, it means that you already accept that it can be, and is, spied on; so whether the extra fuss is justified is not so obvious.

Nonetheless, I have a few old secondary boxes on gmail that I’d still like to extract mails from, so as not to miss anything, without bothering to connect with a web browser. But the usual fetchmail setup is no longer reliable.

The easiest alternative is to use lieer, notmuch and procmail together.

apt install links notmuch lieer procmail

# set up as regular user
su enduser

mkdir -p ~/mail/$boxname
notmuch new
cd ~/mail/$boxname
gmi init $

# at this moment, you'll get a URL to connect to
# with a web browser
# if it started links instead, exit cleanly (no control-C)
# to get the URL
# that will then redirect to a localhost:8080 URL,
# to no effect if you are on a distant server;
# in that case, just run, in another terminal:
links "localhost:8080/..."

The setup should now be ok. You can check and toy a bit with it:

gmi sync
notmuch new
notmuch search tag:unread
notmuch show --format=mbox thread:000000000000009c

Then you should mark all previous messages as read:

for thread in `notmuch search --output=threads tag:unread`; do echo "$thread" && notmuch tag -unread "$thread" ; done
gmi push

Finally, we set up the mail forward (in my case, the fetching is done in a dedicated container, so mail is forwarded to ‘user’ by SMTP to the destination host, but procmail allows any setup) and the fetchmail script: each new unread mail is forwarded and then marked as read:

echo ':0
! user' > ~/.procmailrc

echo '#!/bin/bash

# no concurrent run
if pidof -o %PPID -x "">/dev/null; then
        exit 1
fi

for box in $BOXES; do
    cd ~/mail/$box/
    # get data
    gmi pull --quiet
    notmuch new --quiet >/dev/null
    for thread in `notmuch search --output=threads tag:unread`; do
	# send unread through procmail
	notmuch show --format=mbox "$thread" | procmail
	# mark as read
	notmuch tag -unread "$thread"
    done
    # send data
    gmi push --quiet
done

# EOF' > ~/
chmod +x ~/

Then it is enough to call the script (with BOXES="boxname" properly set) and to include it in a cronjob, for instance via crontab -e.

notmuch does not remove mails.

No-fuss setting user-specific locales (for instance for XFCE with lightdm or slim)

End of 2018: you’d think that, by now, locales setup should no longer be a concern. But still, in the case of a user-specific configuration mismatching the system locale (granted, that must not be so common), I got various odd results. Like lightdm not setting anything no matter what you select on the login window. Or, worse, a half-assed setup, with LANG being set and then unset, or LANGUAGE not being set but still expected by some apps, with a desktop offering no option to configure it, like XFCE.

After a few tests, it turns out that the user’s .xsessionrc works perfectly, independently of the desktop environment or login manager:

echo "export LANG=fr_FR.UTF-8
export LANGUAGE=fr_FR.UTF-8" >> ~/.xsessionrc

with French (fr_FR) selected here.

Checking mails/addressbook/calendars with IMAPS (Dovecot) + DAV (ownCloud)

As a followup to my article Replicating IMAPs (dovecot) mails folders and sharing (through ownCloud) contacts (kmail, roundcube, etc), I’d like to point out that, these days, I have almost completely dropped Kmail (I only use it on a laptop, mostly because I do not use the laptop frequently enough to bother) and switched to Thunderbird.

Using Thunderbird enables me to use cool add-ons like S3.Google Translator (note that Kmail has a similar functionality) and it works decently with the Lightning and Inverse SOGo Connector modules for proper CardDAV and CalDAV handling. I went away from Kmail due to the akonadi issues still existing after so many years, and the fact that I was still forced to run ‘qdbus org.kde.kded /modules/networkstatus setNetworkStatus ntrack 4’ after suspend for it to notice the network is on. In general, I do not think KDE people are going in a direction that makes sense for me, and Kmail was almost the last piece of KDE I was still using (since they more or less killed Konqueror themselves). I still enjoy Dolphin though, especially for the grouped results and the filter bar.

Regarding Roundcube, CardDAV is nicely handled by RCMCardDAV, even though it requires a bit of work to properly deal with dependencies.

Sharing Firefox bookmarks through your Next/ownCloud/whatever ?

Recent changes in Firefox make bookmark and password cloud apps a pain to set up, with obligatory fiddling in about:config and configuration removed on upgrade, or worse.

An alternative would be to use Firefox Sync. But I am not using only Firefox, and I do not like the notion of a solution tied to one browser. Plus, installing Firefox Sync on your own infrastructure is, last time I checked, neither properly documented nor made to use an existing cloud authentication.

Password-wise, I switched over to KeePassXC. I am not documenting my setup for now because it is still experimental and I know password managers are subject to hostile hacks. So I would not encourage people to use a half-baked setup.

Bookmark-wise, I tried a few things. You can fiddle with places.sqlite, but it changes so much that any sync of this file to a cloud is bound to generate a lot of useless traffic in the best scenario, conflicts otherwise.

However, Firefox automatically saves backups of bookmarks as .json (seemingly compressed with lz4, though the liblz4-tool package in Devuan/Debian is not helpful in decompressing them) in .mozilla/firefox/random.default/bookmarkbackups/ and this directory can easily be synced.
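These backups are in fact in Mozilla’s non-standard “mozLz4” container: a plain lz4 block stream prefixed with an 8-byte magic and the decompressed size, which is why stock lz4 tools reject them. A quick look at the magic (the demo file name is made up):

```shell
# write the 8-byte mozLz4 magic ("mozLz40" plus a NUL byte) into a
# fake demo file, then read it back minus the trailing NUL
printf 'mozLz40\0' > demo.jsonlz4
head -c 7 demo.jsonlz4
# prints: mozLz40
```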

Then, on another client, when you open the bookmarks window, you are presented with the option to load such backups, which tells how many entries are within each backup. The process is half automated – far from perfect but much less broken than anything else I tried so far.

I am sure it would be possible to improve this into a fully automated solution (adding new entries is easy to handle, noticing removals a bit less; it would require some database). I’d be interested in any alternative.

Pick a font able to properly render a string composed of Unicode characters with Perl

In the case of automated watermarking with randomly picked fonts within a Perl script, it is quite annoying to stumble on fonts missing many non-basic unicode characters (accents, etc.). In French, you’ll likely miss the ê or ü, or even é or à. In Polish, while the ł is often provided, you’ll likely miss ź.

The Perl module Font::FreeType is quite convenient in this regard. The sample code here will try to find, within the @fonts list, a font able to render the $string. It picks fonts randomly, one by one, and checks every character of the string against the characters provided by the font. It stops at the first one that can actually fully render the string:

use Font::FreeType;
use utf8; # must occur before any string definition!
use strict;

my @image_tags = ("~ł ajàüd&é)=ù\$;«~źmn");
my @fonts = ("/usr/share/fonts/truetype/ttf-bitstream-vera/Vera.ttf", "/usr/share/fonts/truetype/zeppelin.ttf", "/usr/share/fonts/truetype/Barrio-Regular.ttf");
my %fonts_characters;
my $watermark_font;

# we want a random font: but we also want a font that can print every character
# (not obvious with utf8)
# loop until we find a suitable one (all chars are valid, so the chars counter reached 0) or,
# worse case scenario, until we checked them all (means more suitable fonts should be added)
my $chars_to_check = length("#".$image_tags[0]);
my $fonts_to_check = scalar(@fonts);
my %fonts_checked;
while ($chars_to_check > 0 and $fonts_to_check > 0) {

 # pick a random font
 $watermark_font = $fonts[rand @fonts];
 # if this font was already probed, pick another one
 next if $fonts_checked{$watermark_font};
 $fonts_checked{$watermark_font} = 1; 

 # always reset the chars counter each time we try a font
 $chars_to_check = length("#".$image_tags[0]);
 print "Selected font $watermark_font (to check: $fonts_to_check)\n";
 # if not yet done, build the list of available chars in this font
 unless ($fonts_characters{$watermark_font}) {
     Font::FreeType->new->face($watermark_font)->foreach_char(
	 sub {
	     my $char_chr = chr($_->char_code);
	     my $char_code = $_->char_code;
	     $fonts_characters{$watermark_font}{$char_chr} = $char_code;
	 });
     print "Slurped $watermark_font chars\n";
 }
 # then check if every character of the watermark exists in this font
 for (split //, "#".$image_tags[0]) {
     print "Check $_\n";
     # break out if a char is missing
     last unless $fonts_characters{$watermark_font}{$_};
     # otherwise decrement the counter of chars to check: if we reach 0,
     # they are all valid and we get out of the font picking loop
     $chars_to_check--;
     print "Chars still to check $chars_to_check\n";
 }
 # we also record there is one less font to check
 $fonts_to_check--;
}
print "FONT PICKED $watermark_font\n";

This code is actually included in my script (hence the variable names).

Obviously, if no font is suitable, it will take the last one tested. It won’t go as far as comparing which one is the most suitable since, in the context of this script, if no font can fully render a tag, the only sensible course is to add more (unicode-capable) fonts to the fonts/ directory.

Using networked filesystems hosted by LXC containers with Samba

For more than a decade, I used NFS on my home server to share files; I did not consider using Samba for anything but providing Windows access to shares. NFSv3 then NFSv4 suited me, allowing a per-host/IP write access policy. The only main drawback was the very crude handling of NFS server downtime: X sessions would end up half-frozen, requiring a restart to be usable once again.

However, I recently moved my servers to LXC (which I’ll probably document a bit later) and the NFS server on Debian, as you can guess from the nfs-kernel-server package name, is kernel-based: not only does it apparently defeat the purpose of LXC containers to have, within a container, a server tied to the kernel, but it does not seem to work reliably either. I managed to get it running, but it had to run on both the master host and within the container. Even then, depending on which started first, the shares could be unavailable to hosts.

I checked a few articles over the web (, etc.) and it looks like, as of today, you can expect decent performance from Samba, on par with NFS. That could possibly be proven wrong if I were using NFS massively, writing a lot through networked file systems, opening a big number of files simultaneously, or moving big files around a lot; but I have really simple requirements: no latency when browsing directories, no latency when playing 720p/1080p videos, and that’s about it.

I already had a restricted write access directory per user, via Samba, but I use it only on lame systems as a temporary area: on proper systems, I use SSH/scp/rsync/git to manipulate/save files.

Dropping NFS, I now have quite a simple setup; here are the relevant parts of my /etc/samba/smb.conf:


## Browsing/Identification ###

# What naming service and in what order should we use to resolve host names
# to IP addresses
 name resolve order = lmhosts host wins bcast

#### Networking ####

# The specific set of interfaces / networks to bind to
# This can be either the interface name or an IP address/netmask;
# interface names are normally preferred
 interfaces = eth0

# Only bind to the named interfaces and/or networks; you must use the
# 'interfaces' option above to use this.
# It is recommended that you enable this feature if your Samba machine is
# not protected by a firewall or is a firewall itself. However, this
# option cannot handle dynamic or non-broadcast interfaces correctly.
 bind interfaces only = true

#### File names ####

# remove characters forbidden on Windows
mangled names = no

# charsets
dos charset = iso8859-15
unix charset = UTF8

####### Authentication #######

# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
 security = user

# Private network
 hosts allow = 192.168.1.

# You may wish to use password encryption. See the section on
# 'encrypt passwords' in the smb.conf(5) manpage before enabling.
 encrypt passwords = true

# If you are using encrypted passwords, Samba will need to know what
# password database type you are using. 
 passdb backend = tdbsam

obey pam restrictions = yes

guest account = nobody
 invalid users = root bin daemon adm sync shutdown halt mail news uucp operator www-data sshd Debian-exim debian-transmission
 map to guest = bad user

# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
 unix password sync = yes

#======================= Share Definitions =======================

realm = ...

[commun]
comment = Commun
path = /srv/common
browseable = yes
writable = yes
public = yes
guest ok = yes
valid users = @smbusers
force group = smbusers
create mode = 0660
directory mode = 0770
force create mode = 0660
force directory mode = 0770

# per-user share (section name assumed)
[thisuser]
comment = Données protégées
path = /srv/users/thisuser
browseable = yes
writable = yes
public = yes
valid users = thisuser
create mode = 0600
directory mode = 0700
force create mode = 0600
force directory mode = 0700
guest ok = no


I installed the package libpam-smbpass and edited /etc/pam.d/samba as follows:

@include common-auth
@include common-account
@include common-session-noninteractive
@include common-password

For this setup to work, you need every user allowed to connect:

  • to be a member of the group smbusers – including nobody (or whatever the guest account is);
  • to have a unix password set;
  • to be known to samba (smbpasswd -a thisuser to add it, smbpasswd -e to enable it).

If you are not interested in per-user restricted areas, only the nobody account will need to be taken care of.

And, obviously, file and directory ownership and modes must be set accordingly:

cd /srv/common
# (0770/drwxrwx---) GID : (nnnnn/smbusers)
find . -type d -print0 | xargs -0 chmod 770 -v
find . -type f -print0 | xargs -0 chmod 660 -v
cd /srv/users
# (0700/drwx------) UID : ( nnnn/ thisuser) GID : ( nnnn/ thisuser)
find . -type d -print0 | xargs -0 chmod 700 -v
find . -type f -print0 | xargs -0 chmod 600 -v
# main directories, in addition, need the setgid bit so future subdirectories get proper modes
chmod 2770 /srv/common/*
chmod 2700 /srv/users/*
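The leading 2 in these octal modes is the setgid bit, which makes new subdirectories inherit the parent’s group. A quick sanity check of the mode itself, in a scratch directory (no Samba needed):

```shell
# set and read back mode 2770 (setgid + rwx for user and group)
dir=$(mktemp -d)
chmod 2770 "$dir"
stat -c '%a' "$dir"
# prints: 2770
```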

To access this transparently on GNU/Linux systems, just add to /etc/fstab:

//servername/commun /mountpoint cifs guest,uid=nobody,gid=users,iocharset=utf8 0 0

This assumes that any user entitled to access the files belongs to the users group. If not, update accordingly.

With this setup, there is no longer any IP-based write access policy but, over the years, I found out it was quite useless for my setup anyway.

The only issue I have is with files containing a colon (“:”). Due to MS Windows limitations, CIFS lists these files but access to them is impossible. The easiest fix I found was to actually rename these files (not a problem given the nature of the files served) through a cronjob /etc/cron.hourly/uncolon:

#!/bin/sh
# a permanent cifs-based fix would be welcomed
find "/srv" -name '*:*' -exec rename 's/://g' {} +

but I’d be interested in better options.
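If the Perl rename(1) is not available, the same substitution can be done with plain shell parameter expansion; a sketch in a scratch directory (the sample file name is invented):

```shell
# ${f//:/} deletes every colon; -depth handles files before their parents
dir=$(mktemp -d)
touch "$dir/a:b.mkv"
find "$dir" -depth -name '*:*' | while IFS= read -r f; do
    mv -- "$f" "${f//:/}"
done
ls "$dir"
# prints: ab.mkv
```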



Add proportional label/watermark to images with ImageMagick

There is nothing exceptional here, there are many documented ways to achieve this.

Still, what follows is the most efficient way I found to do so on any image within a directory, after testing quite a few: I wanted to be able to add a small tag on top, its size being proportional to the image (which excluded many solutions based on -pointsize), on static images as well as animated ones.

rm -f *marked*

for pic in *.jpg *.png *.gif; do
   ext="${pic##*.}"
   # identify is messed up with gif (gives width per frame), use convert
   # instead
   width=$(convert "$pic" -print "%w" /dev/null)
   height=$(expr 5 \* $width \/ 100)

   # first build a watermark so we can easily manipulate its size afterwards
   convert -size ${width}x${height} -gravity NorthEast -stroke "#454545" -fill "#c8c8c8" -background transparent -strokewidth 1 -font LiberationSerif-Bold.ttf label:"$pic" "${pic//.$ext}-mark.png"
   # then add it
   convert "$pic" -coalesce -gravity NorthEast null: "${pic//.$ext}-mark.png" -layers composite -layers optimize "${pic//.$ext}-marked.$ext"

   rm "${pic//.$ext}-mark.png"
done

Within a perl script, it would be something like (beware, most variables were set earlier in the script):

    # build watermark (for now minimal checks, assume files are regular)
    my ($watermark_h, $watermark) = tempfile(SUFFIX => ".png",
					     UNLINK => $unlink_temp);
    my $watermark_width = $$image_info{"ImageWidth"};
    my $watermark_height = $watermark_proportion * $watermark_width / 100;
    # (convert calls reconstructed; $watermark_text, $original and $marked
    # are assumed to be set earlier in the script)
    system("convert",
	   "-size", $watermark_width."x".$watermark_height,
	   "-gravity", "NorthEast",
	   "-stroke", "#454545",
	   "-fill", "#c8c8c8",
	   "-background", "transparent",
	   "-strokewidth", "1",
	   "-font", $watermark_font,
	   "label:".$watermark_text,
	   $watermark);

    # add watermark
    system("convert", $original, "-coalesce",
	   "-gravity", "NorthEast",
	   "null:", $watermark,
	   "-layers", "composite",
	   "-layers", "optimize",
	   $marked);
Yeah, so that means that my script managing a tumblr post queue locally with #Tags was updated to add these #Tags as a watermark/label. Note that it won’t alter the original file, only the one posted.

New options can be added in ~/.tumblrrc:

# no watermark/label

# specific font

Note that it requires ImageMagick to be installed.

Using the same soundcard among users with PulseAudio not in system mode

Sound on GNU/Linux has never been convenient. Right now, the de facto standard is PulseAudio: yes, made by the same people that brought us the nightmare that is systemd. When it works, it is better than plain ALSA (Advanced Linux Sound Architecture). When it doesn’t, you’re in for a headache.

Anyway, I had this situation where I wanted the user whatever to be able to use the soundcard. But the soundcard was blocked, reserved by the PulseAudio instance started by my regular user account.

The first option is to make PulseAudio work as a system daemon, the UNIX-style option. Quite obviously, that would be too easy to implement for these systemd people. So they implemented the option while advising not to use it. I did not care about the advice, though, so I tried. And then I understood why they said they would not be accountable for problems when using it: because it is utter trash, unreliable, giving out endless error messages and, in the end, not working at all.

So the system mode is a no-go, in the short run as well as in the long run.

The alternate option is to expose PulseAudio through the loopback network device. To do so, in /etc/pulse/ add the TCP module with

load-module module-native-protocol-tcp auth-ip-acl=

Obviously, by default tcpwrapper will refuse access, so you also have to add the relevant counterpart in /etc/hosts.allow:


From then on, after restarting PulseAudio, you should be able to access it from any user (in the audio group).
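On the client side, other accounts can point their PulseAudio clients at that TCP socket via the PULSE_SERVER environment variable (the loopback address here is an assumption, to match whatever the auth-ip-acl allows):

```shell
# any pulse client (paplay, mpv, …) started with this variable set
# will talk to the shared daemon over TCP instead of the unix socket
export PULSE_SERVER=tcp:
echo "$PULSE_SERVER"
# prints: tcp:
```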

Update: some comments on reddit made me think there has been a misunderstanding about the scope of this post. It is not to describe the inner workings of audio on common GNU/Linux systems with PulseAudio. The following link does, and almost perfectly explains why I did not bother getting specific on the topic: