Setting up LXC containers to run with ISC DHCPd and BIND instead of dnsmasq, along with domain name spoofing/caching

By default, lxc-net sets up containers to work with dnsmasq, which provides both DNS and DHCP services: name resolution and IP assignment.

The recommended setup of lxc-net includes an /etc/lxc/dnsmasq.conf that only states “dhcp-hostsfile=…” and the said hosts file as /etc/lxc/dnsmasq-hosts.conf, with one “hostname,IP” line per host.

It works fine and there is no real reason to use anything else, though lxc-net clearly lacks a bit of modularity: it is hardcoded to work with dnsmasq, for instance.

Except that on my main server, I already have ISC DHCPd serving IPs to the local area network and BIND 9 doing both name resolution caching and name resolution for said local area network. Running dnsmasq alongside BIND 9 and ISC DHCPd is not only a bit of overkill, it requires additional configuration to bind them to specific interfaces to avoid conflicts.

dnsmasq shutdown

We could simply do a killall dnsmasq and comment out the part in /usr/lib/x86_64-linux-gnu/lxc/lxc-net where it gets started. For now, we’ll just prevent it from messing with interfaces through /etc/lxc/dnsmasq.conf.
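Something along these lines should do (my reconstruction of the idea, using standard dnsmasq directives: port=0 disables its DNS part, no-dhcp-interface keeps its DHCP off the bridge):

# no DNS service at all
port=0
# no DHCP service on the LXC bridge
no-dhcp-interface=lxcbr0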


Initial setup

This article assumes you already have BIND and ISC DHCPd set up for the local area network (otherwise, as said, in most use cases dnsmasq will be just fine).

If you do not have a preexisting setup but want, nonetheless, to switch to BIND 9 and ISC DHCPd, you could start with the bind setup provided in my setting up a silent/low energy consumption home server article.

That article includes dynamic client name updates. The only thing to pay attention to is that it used for the local area network whereas, in the following, will be used for the LXC bridge network while will be dedicated to the local area network.

DNS setup

I adjusted my preexisting setup (bind9 files part of my -utils-cache-spoof debian package, which I suggest you look at directly to get their current exhaustive content) based on bind9’s notion of ACLs (access control lists), which depend on which network clients belong to, and, subsequently, on bind9’s notion of “views”, which configure which zones are provided to these clients according to the ACLs.

The following will seem like a lot but, if you grab my debian -utils-cache-spoof package, it is actually not that much.

Since the LXC bridge here is using the network, I have in named.conf.acl:


acl loopback {
    // the server itself;

acl lan {
    // the cache host IP should not be part of regular lan ACL
    !;
    // private IPv4 address spaces;;;

acl lannocache {
   // counterpart of earlier statement: cache host needs proper unspoofed name resolution;

Note that the .88 container IP ( here) is dedicated to caching (apt/steam, as in my previous setup with dsniff as spoofer and my other setup using bind9 instead, but outside of any LXC host/container context) so it needs to be excluded from the general ACL.

These ACLs are in turn used in named.conf.views:

// clients are set in named.conf.acl
include "/etc/bind/named.conf.acl";

// loopback view, for the server itself
view "loopback" {
     match-clients { loopback; };
     include "/etc/bind/named.conf.default-zones";
     include "/etc/bind/named.conf.local";
     include "/etc/bind/named.conf.spoof";

// otherwise local network area
view "lan" {
 match-clients { lan; };
 include "/etc/bind/named.conf.default-zones";
 include "/etc/bind/named.conf.local";
 include "/etc/bind/named.conf.cache";
 include "/etc/bind/named.conf.spoof";

// local network area without cache, for hosts that will get unspoofed name resolution
// (needs to be set up one by one in named.conf.acl)
view "lannocache" {
 match-clients { lannocache; };
 include "/etc/bind/named.conf.default-zones";
 include "/etc/bind/named.conf.local";
 include "/etc/bind/named.conf.spoof";


Obviously, if there were no notion of caching (and name spoofing), the setup would be even more straightforward: a single view would be enough. Nonetheless, this example shows an easy way to treat hosts differently depending on whether they are LXC containers or regular LAN clients.

About the zones included (or not) in views, all files being in /etc/bind: named.conf.default-zones ships with the bind9 package, named.conf.local carries the local domain and reverse zones described below, and named.conf.cache plus named.conf.spoof are generated (cache spoofing and ads/spam filtering, respectively).

So basically, you need to edit /etc/bind/named.conf.local to something like:

// to store A/CNAME records for DOMAIN.EXT
zone "DOMAIN.EXT" {
 type master;
 notify no;
 file "/etc/bind/db.DOMAIN.EXT";
 allow-update { key ddns; };

// (we use for regular LAN)
// to store PTR records (IP to name) for regular LAN
zone "" {
 type master;
 notify no;
 file "/etc/bind/db.192.168.1";
 allow-update { key ddns; };

// (we use for LXC bridge)
// to store PTR records for LXC bridge
zone "" {
 type master;
 notify no;
 file "/etc/bind/db.10.0.0";
 allow-update { key ddns; };

You also require the relevant db.* files: for instance a spoof db pointing to loopback to filter ads/spam sources, db.cache pointing to the cache container .88 (possibly also db.cacheBASEIP) and local db files such as db.DOMAIN.EXT:

$TTL 86400 ; 1 day
@         IN SOA  server.DOMAIN.EXT. root.DOMAIN.EXT. (
                        2823 ; serial
                        28800 ; refresh (8 hours)
                        7200 ; retry (2 hours)
                        604800 ; expire (1 week)
                        10800 ; minimum (3 hours)
                        )
          NS     server.DOMAIN.EXT.
          MX     10 server.DOMAIN.EXT.
server    A
; the rest will be filled by ddns

Likewise, you should have db.192.168.1 and db.10.0.0 (obviously with 1.168.192 replaced by 0.0.10) as:

$TTL 86400 ; 1 day
@          IN SOA  server.DOMAIN.EXT. root.DOMAIN.EXT. (
                       2803 ; serial
                       28800 ; refresh (8 hours)
                       7200 ; retry (2 hours)
                       604800 ; expire (1 week)
                       10800 ; minimum (3 hours)
                       )
           NS      server.DOMAIN.EXT.
1          PTR     server.DOMAIN.EXT.
; the rest will be filled by ddns too
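At this point, bind9’s own checking tools can confirm that everything parses, for instance:

named-checkconf /etc/bind/named.conf
named-checkzone DOMAIN.EXT /etc/bind/db.DOMAIN.EXT
named-checkzone /etc/bind/db.192.168.1
named-checkzone /etc/bind/db.10.0.0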

And then you must run the scripts that generate the named.conf.cache* and spoof files. You’ll probably need to edit the generating scripts’ variables in /etc/bind according to what you are actually caching.

BIND gets updates from ISC DHCPd whenever a new client gets a lease; this is configured in named.conf.dhcp (not packaged):

include "/etc/bind/ddns.key";

controls {
 inet allow { localhost; } keys { ddns; };

The ddns key was generated as documented in my setting up a silent/low energy consumption home server article as well as in Debian docs:

# dnssec-keygen -a HMAC-MD5 -b 128 -r /dev/urandom -n USER ddns

Out of the generated Kddns.*.private file, you take the content of the “Key:” statement and put it in /etc/bind/ddns.key:

key ddns {
 algorithm HMAC-MD5;
 secret "CONTENT_OF_THE_KEY_STATEMENT";

So this setup implies that your named.conf looks like:

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.dhcp";
include "/etc/bind/named.conf.views";

Besides, my /etc/bind/named.conf.options is generated by /etc/dhcp/dhclient-exit-hooks.d/bind so it includes proper forwarders and listen-on exceptions.

That should cover it for BIND.

ISC DHCPd setup

In my case, I still want the IPs of LXC containers to be fixed. The syntax of /etc/lxc/dnsmasq-hosts.conf was one “hostname,IP” per line, which is more convenient than the ISC DHCPd syntax “host hostname { hardware ethernet MAC ADDRESS; fixed-address IP; }”.

I decided to keep the same /etc/lxc/dnsmasq-hosts.conf, symlinked to /etc/lxc/hosts.conf, which will be used by a script (not packaged for now; called /etc/lxc/ here) to generate /etc/dhcp/dhcpd_lxc-hosts.conf:

#!/bin/sh
# /etc/lxc/

HOSTS=/etc/lxc/hosts.conf # similar to dnsmasq-hosts.conf: host,IP
DESTINATION=/etc/dhcp/dhcpd_lxc-hosts.conf
LXC_PATH=`lxc-config lxc.lxcpath`

# start from scratch
> $DESTINATION

cd "$LXC_PATH" || exit 1
for container in *; do
 if [ ! -d "$container" ]; then continue; fi
 if [ ! -e "$container/config" ]; then continue ; fi
 echo "host lxc-$container {" >> $DESTINATION
 # the hwaddr config key name depends on the LXC version (lxc.net.0.hwaddr on LXC 3+)
 echo " hardware ethernet "`grep lxc.network.hwaddr "$container/config" | cut -f 2 -d "="`";" >> $DESTINATION
 echo " fixed-address "`grep "^$container," "$HOSTS" | cut -f 2 -d ","`";" >> $DESTINATION
 echo "}" >> $DESTINATION
done
This primitive script will spit out a proper ISC DHCPd hosts file. You have to run it each time you create a new container. Once done, we simply edit /etc/dhcp/dhcpd.conf:

# The ddns-updates-style parameter controls whether or not the server will
# attempt to do a DNS update when a lease is confirmed. We default to the
# behavior of the version 2 packages ('none', since DHCP v2 didn't
# have support for DDNS.)
ddns-updates on;
ddns-update-style interim;
ddns-domainname "DOMAIN.EXT";
ddns-rev-domainname "";
ignore client-updates; # no touching the FQDN
include "/etc/dhcp/ddns.key";

# option definitions common to all supported networks...
option domain-name "DOMAIN.EXT";
option domain-search "DOMAIN.EXT", "ANOTHERDOMAIN.EXT";
option domain-name-servers;
option routers; # adjust: your LAN gateway

default-lease-time 600;
max-lease-time 6000;
update-static-leases on;

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;

# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;

# LAN clients
subnet netmask {

 # dynamic IP depends on whether the client MAC address is known
 pool {
   range;  # known clients (ranges assumed, adjust)
   deny unknown-clients;
 }
 pool {
   range;  # anyone else
   allow unknown-clients;
 }

 # iPXE / boot on lan
 if exists user-class and option user-class = "iPXE" {
   filename "ipxe-boot";
 } else {
   filename "undionly.kpxe";
 }
}

# LXC clients
subnet netmask {
 # use the subnet-specific router
 option routers;
 # no pool, all IP are fixed here
 # force lease time to be at least weekly
 min-lease-time 604800;
 max-lease-time 604800;
 # no boot on lan either
}

# zones
zone DOMAIN.EXT. {
 primary; # updates go to the local BIND
 key ddns;
}
zone {
 primary;
 key ddns;
}
zone {
 primary;
 key ddns;
}

# LAN known clients 
 host trendnetusb { hardware ethernet 00:50:b6:08:xx:xx; }
 host ugreenusb { hardware ethernet 00:0e:c6:fa:xx:xx; }

# LXC host
include "/etc/dhcp/dhcpd_lxc-hosts.conf";

That’s all. Obviously, if you want your LXC containers to get completely dynamically assigned IPs, you do not even need this whole host setup. You just set a pool { } with a range of IPs (and remove the specific lease time), as sketched below.
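For instance, something like this instead of the fixed-hosts setup (a sketch; adjust the range to your bridge):

subnet netmask {
 option routers;
 pool {
   range;
 }
}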

The cache LXC container

I won’t get into much detail: my -utils-cache-apt and -utils-cache-steam debian packages should work out of the box on an LXC container, providing the necessary nginx cache-apt and cache-steam configs.

If you use resolvconf and ISC DHCP clients on LXC containers, the resolvconf to nginx resolver config script will set up /etc/nginx/conf.d/resolver.conf accordingly.

If you use udhcpc, this resolvconf script will be ignored, but the default /etc/nginx/conf.d/resolver.conf includes, in comments, proposed changes to /etc/udhcpc/default.script to generate /etc/nginx/conf.d/resolver.conf accordingly.

Otherwise, you need to hand-configure /etc/nginx/conf.d/resolver.conf:

## (set resolver to something else if your local interface gets
## domain names spoofed, for Google resolver for example)
#resolver ipv6=off; # without lxc
resolver ipv6=off;   # within lxc


I have had this setup for a while and noticed the following:

  • with the ISC DHCP client within the LXC containers, I get the “bad udp checksums in N packets” issue; the iptables -A POSTROUTING -t mangle -p udp --dport 67 -j CHECKSUM --checksum-fill rule set up by lxc-net does not help; the solution I picked is to use udhcpc within all LXC containers, which does not trigger the problem, with the obvious drawback that the cache container must use the edited /etc/udhcpc/default.script option, since resolvconf will have no effect;
  • if ISC DHCPd and Bind9, on the LXC host, are started before or at the same time as lxc and lxc-net, they may not listen on the LXC bridge interface, which possibly does not exist yet when they start; as a result, while everything may seem properly up, LXC containers will fail to get an IP assigned until you restart ISC DHCPd; this does not occur after adding lxc lxc-net to the Should-Start: part of the ISC DHCPd and Bind9 init.d scripts, as shown below.
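For reference, the LSB header of /etc/init.d/isc-dhcp-server then looks roughly like this (an excerpt; exact dependencies vary with the package version, only the lxc lxc-net addition to Should-Start: matters here):

### BEGIN INIT INFO
# Provides:          isc-dhcp-server
# Required-Start:    $remote_fs $network $syslog
# Required-Stop:     $remote_fs $network $syslog
# Should-Start:      $local_fs $named lxc lxc-net
# Should-Stop:       $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: DHCP server
### END INIT INFO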

Avoiding dnsmasq interference

If you are satisfied and do not require dnsmasq anymore, I suggest removing any dnsmasq package and adding a symlink so that the dnsmasq command produces no error (when called by /usr/lib/x86_64-linux-gnu/lxc/lxc-net for instance):

ln -s /bin/true /usr/bin/dnsmasq

Revision control and distribution of home configuration files with Bash and git

For years, I have managed different copies of home configuration files over different hosts with some revision control; but however better modern systems like git are in comparison to old CVS, it would still be quite impractical to put your whole home directory within one single repository:

  • for obvious reasons, there are only a few files that you can actually move around carelessly and put on gitlab for instance; but these files are actually nice to have there, so you can retrieve them whenever and wherever you want;
  • even if you could/would make the rest of your home directory public, most of the configuration files cannot adjust to each host they are run on; you can obviously adjust a ~/.bashrc according to $HOSTNAME, but it gets a bit more annoying for, say, ~/.Xdefaults;

I am quite sure most people using many different hosts have all their own way to deal with that. There are too many use cases for one solution to be practical for everybody.

I already made public a small script to distribute SSH public keys, which I had been using for quite a while before. Next is the script I am using now to distribute home configuration files among hosts: it needs to be added within a git repository (in my case, a gitlab “rc” repository) and, from there, based on a pre-decided list of files or directories, it will:

  • keep a copy of each file/directory per hostname (ex: bashrc.$HOSTNAME, config/awesome.$HOSTNAME);
  • allow a default to be set for such a file/directory, by renaming $item.$HOSTNAME to $item.default (ex: bashrc.default).

It obeys the following general rules:

  • it won’t copy symlinks but their content;
  • if we only have a local file, save it in the repository;
  • if we have a local file and a repository copy that differ, update the repository;
  • if we only have a repository copy and no local file, create the local file with a warning.

Regarding $item.default:

  • $item.default will be used only if no $item.$HOSTNAME exists;
  • $item.default will never be updated automatically: if the local copy based on the default is modified, then a $item.$HOSTNAME will be created instead; if it is to be made the default, you’ll need to rename $item.$HOSTNAME to $item.default; alternatively, you could edit $item.default first and remove the local file at once;
  • similarly, $item.default will never overwrite an existing local file: to use it on other hosts after an update, the local file will need to be removed.

I admit this $item.default handling is a bit cumbersome, but updating these files presents risks (lockout, security, etc.).
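Distilled, the logic looks like this (a hypothetical minimal sketch of the script, leaving aside the git add/commit/push part and plenty of corner cases):

#!/bin/bash
# sketch of the update logic, run from within the clone
[ -e updaterc ] && source updaterc
ITEMS=${ITEMS:-"bashrc config/awesome"}

for item in $ITEMS; do
    target="$HOME/.$item"
    repo="$item.$HOSTNAME"
    # fall back on the default copy only if no per-host copy exists
    [ ! -e "$repo" ] && [ -e "$item.default" ] && repo="$item.default"

    if [ -e "$target" ] && [ ! -e "$repo" ]; then
        # only a local file: save it in the repository (-L: store content, not symlinks)
        cp -rL "$target" "$item.$HOSTNAME"
    elif [ -e "$target" ] && ! diff -rq "$target" "$repo" >/dev/null 2>&1; then
        # local differs from the repository: update a per-host copy, never $item.default
        cp -rL "$target" "$item.$HOSTNAME"
    elif [ ! -e "$target" ] && [ -e "$repo" ]; then
        # only a repository copy: create the local file with a warning
        echo "warning: creating $target from $repo" >&2
        cp -rL "$repo" "$target"
    fi
done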

If updaterc exists in the same directory, it will be sourced. It is a convenient way to change the $ITEMS variable without editing the script itself.

To use it, you just need to set up and clone some git repository and, within this repository (the script being called here):

chmod +x
# optionally create a custom list of items
echo 'ITEMS="bashrc config/awesome"' > updaterc
# run

The task can be automated by a cronjob; add the following with crontab -e:

3 12 * * * ~/.rc/ >/dev/null 2>/dev/null

(side note: that won’t work properly if one of your hosts is named “default”)

Build a simple kitchen terminal out of an old laptop screen and Raspberry Pi

On some occasions, it is practical to have a terminal in the kitchen, mainly to check on recipes. While a phone screen is not that great, a tablet would do. But I do not have any tablet and I am not that fond of the systems readily available on tablets. I do, however, have a few old laptops around, plus a Raspberry Pi B+.


The following RasPi.TV video explains it all:

Quite straightforward: you dismantle the laptop and identify your screen. For my Dell Latitude C640, I got a Samsung LTN141X8-L02 14.1″ screen, for which I easily found a controller board kit on eBay for 21.5 €.

Here’s the back of the said screen, with the original inverter board still attached. The kit will include another one.

Once acquired, there is not much to think about: everything just has to be plugged where it belongs according to the seller’s docs:


Obviously, you also need to buy an HDMI cable and a power adapter (12V, 4A).


Obviously, as it is no tablet, it requires peripherals. I opted for a slim USB wireless keyboard with trackpad and some USB-powered stereo speakers. These devices will be powered by the Raspberry Pi (a phone charger can be plugged into the keyboard to recharge it).

Finally, the power adapter and the Raspberry Pi are plugged into a power socket with 5V USB ports, which is used to power the whole thing on and off.

Afterwards, I put the screen within a cheap photo frame and fixed the rest on some board.

That frame looks too fragile, though; I would recommend building a proper one instead.


1/ Raspbian desktop

I first tried some default Raspbian. The Epiphany web browser is so bad that you cannot even set a default webpage without editing the .desktop files. And once that is done, it crashes on the standard mediawiki page layout. Raspbian also fails to properly open videos (OMX spouts error messages, even with lots of memory allocated). Not convincing.

2/ Kodi media player

Afterwards, I went for LibreELEC along with Kodi. Surprisingly, it loads movies with no problem, the interface is quite neat in general, and the control from a distant web browser (port 8080 by default) is a plus. As a media player, it would be nice.


But it is not perfect: Kodi does not provide any proper web browser, even a feature-lacking one. It only provides some cheesy sort-of text web browser: “sort of” because it is no lynx/links/elinks, just a strange graphical interface with low HTML layout capabilities (but, kudos, it does not crash on mediawiki, yay!). Nonetheless, that is quite a blocker issue for me. Even a media player, in my opinion, should have an integrated web browser. It would not be much of a challenge to reuse gecko/khtml or whatever to provide one.


3/ (tiger) VNC on top of Raspbian

So I went back to Raspbian. I found out that netsurf works fine to browse mediawiki, so just that satisfies the first requirement.

Instead of expecting to finely set up Raspbian for video websites and the like, I decided it might just be smarter to really think of this as a terminal and, as such, to show some window from another computer’s session.

On a Devuan desktop, it is enough to get tigervnc-scraping-server and generate a hosts file (for IP-based access control):

mkdir .vnc
echo "+IP_OF_YOUR_RASPBIAN" > .vnc/hosts

then to start it whenever you want to share your screen:

x0vncserver -HostsFile=$HOME/.vnc/hosts -SecurityTypes=None

Windows version is configured in a similar fashion.

Raspbian provides a VNC viewer graphical interface that will allow you to connect, and you’ll immediately notice that TigerVNC is damned efficient, playing YouTube videos and the like with no problem.

Ok, but VNC, while much more convenient than RDP to set up, does not care about sound forwarding.

I gave a few tries to PulseAudio RTP capabilities: it fails with errors like [alsa-sink-bcm2835 ALSA] module-rtp-recv.c: Sample rates too different, not adjusting (44100 vs. 90522), and when I tried to document myself about it, I found that this PulseAudio feature was bugged, flooding the network with UDP packets, a bug found in 2009 and still existing in 2017. Gosh, a feature bugged for nearly a decade: back to why I try to keep away from systemd and anything made by the same crowd.

I ended up streaming audio with vlc,

cvlc -vvv pulse://`pactl list | grep "Monitor Source" | cut --delimiter ":" -f 2 | tr -d '[:blank:]'` --sout "#transcode{acodec=mp3,ab=128,channels=2}:standard{access=http,dst=:9999/pc.mp3}" &

simply played on the Raspbian with:

mpg123 http://hostname:9999/pc.mp3

It has been summarized in a script to be run on the distant host side. I considered streaming both audio and video with vlc, but it is convenient to be able to move around with VNC. This will require further testing.
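Said script boils down to something like this (a sketch; the IP and port being the examples used above):

#!/bin/bash
# stream the desktop audio as mp3 over http and share the screen over VNC
MONITOR=`pactl list | grep "Monitor Source" | cut --delimiter ":" -f 2 | tr -d '[:blank:]'`
cvlc pulse://"$MONITOR" --sout "#transcode{acodec=mp3,ab=128,channels=2}:standard{access=http,dst=:9999/pc.mp3}" &
VLC_PID=$!
x0vncserver -HostsFile=$HOME/.vnc/hosts -SecurityTypes=None
# stop the audio stream once the VNC server exits
kill $VLC_PID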

Sharing graphs of multiple Munin (master) instances

Munin is a convenient monitoring tool. Even if it is getting old, it is easy to set up and to supplement with custom scripts.

It works with the notion of a master munin process that grabs data from nodes (the devices within the network), stores it in round-robin databases (RRD) and processes the data to generate static images and HTML pages. These sequences are split into several scripts: munin-update, munin-limits, munin-graph, munin-html.

It’s fine (overkill?) for a small local network, despite the fact that RRD is a bit I/O-consuming, to the point that it may be required to use a caching daemon like rrdcached.

It’s a different story if you want to monitor several small networks connected through the internet at once. Why would you? First because it might be convenient to get graphs from different networks side by side. Also because if one network disappears from the internet, data from munin might actually be meaningful, provided you can still access it.


Problem is, munin updates are synchronous: any disconnect between the two networks would cause the data to be inconsistent. It leads to many issues that munin-async can help with. But even though you might be able to use munin-async, one of your servers will lack a munin master: the setup will work only when both are up.

So I’m actually much more interested in having one master munin process per network.

How to achieve that? It is not an option to share RRDs via NFS over the web. I’m also no fan of the notion of having both master munin processes read through all the RRDs and generate graphs in parallel, re-generating exactly the same data with no added value.

I went for an alternative approach with a modified version of the munin-mergedb script: we do not merge RRD trees, we simply synchronize the db files to merge along with the generated graphs. So if there are graphs from another munin master process to include in the HTML output, they’ll be there. But each munin master process will go undisturbed by any other process’s unavailability and won’t have more RRDs to process or more graphs to produce.

Graphs and db files replication:

On both (master munin process) hosts, you need a user dedicated to replication, here called SYNCUSER, added to the munin group:

adduser SYNCUSER munin

This user needs ssh access from one host to the other (private/public key sharing, whatever).
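For instance, with standard OpenSSH tooling:

su - SYNCUSER --shell=/bin/bash
ssh-keygen -t ed25519
ssh-copy-id -p DISTANT_PORT SYNCUSER@DISTANT_HOST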

Directories setup:

mkdir -p /var/lib/munin-mergedb/
chown munin:munin -R /var/lib/munin-mergedb/
# the +s is very important so directory group ownership is preserved
chmod g+rws -R /var/lib/munin-mergedb/
chmod g+rws /var/lib/munin/
chmod g+rws -R /var/www/html/munin/

On one host (the one allowed to connect through ssh), synchronize HTML files two-way with unison:

su - SYNCUSER --shell=/bin/bash

# adjust to your setup (paths and host assumed here)
LOCAL_HTML=/var/www/html/munin
DISTANT_HOST=other.master.example.org
DISTANT_PORT=22
DISTANT_HTML=/var/www/html/munin

# step one, get directories
unison -batch -auto -ignore="Name *.html" -ignore="Name *.png" "$LOCAL_HTML" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML"
# step two, get directories img content
cd "$LOCAL_HTML" && for DIR in *; do [ -d "$DIR" ] && unison -batch -auto -ignore="Name *.html" "$LOCAL_HTML/$DIR" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML/$DIR"; done

On one host (the same), synchronize database files one-way with rsync:

# still as SYNCUSER, with the same variables, plus (paths assumed):
LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THIS_HOSTNAME
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANT_HOSTNAME

# push our db (one way action, easier with rsync)
rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/"
# get theirs (one way action, easier with rsync)
rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$DISTANT_HOST:$LOCAL_DB/" "$LOCAL_DISTANT_DB/"

If it works fine, set up /etc/cron.d/munin-sync:

# /etc/cron.d/munin-sync: supposed to assist munin-cron

# same variables as before (values assumed, adjust)
LOCAL_HTML=/var/www/html/munin
DISTANT_HOST=other.master.example.org
DISTANT_PORT=22
DISTANT_HTML=/var/www/html/munin
LOCAL_DB=/var/lib/munin
DISTANT_LOCAL_DB=/var/lib/munin-mergedb/THIS_HOSTNAME
LOCAL_DISTANT_DB=/var/lib/munin-mergedb/DISTANT_HOSTNAME

# m h dom mon dow user command
# every 5 hours, update dir list
01 */5 * * *  SYNCUSER unison -batch -auto -silent -log=false -ignore="Name *.html" -ignore="Name *.png" "$LOCAL_HTML" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML" 2>/dev/null

# update content twice per hour
*/28 * * * *  SYNCUSER cd "$LOCAL_HTML" && for DIR in *; do [ -d "$DIR" ] && unison -batch -auto -silent -log=false -ignore="Name *.html" "$LOCAL_HTML/$DIR" "ssh://$DISTANT_HOST:$DISTANT_PORT/$DISTANT_HTML/$DIR" 2>/dev/null; done && rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$LOCAL_DB/" "$DISTANT_HOST:$DISTANT_LOCAL_DB/" 2>/dev/null && rsync -a --include='datafile*' --include='limits*' --exclude='*' -e "ssh -p $DISTANT_PORT" "$DISTANT_HOST:$LOCAL_DB/" "$LOCAL_DISTANT_DB/" 2>/dev/null

Updated scripts:

Once the data is there, you will need the munin-mergedb script to handle it, and a munin-cron script like mine instead of the stock munin-cron, so that the mergedb-aware scripts actually get called. Plus you’ll need a fixed version of munin-graph so --host arguments are not blatantly ignored (lacking RRDs, it would fail to actually write graphs for the distant munin master process, but it would nonetheless delete existing graphs).

(Where these files go depends on your munin installation packaging. I have the munin processes in /usr/local/share/munin and in /usr/local/bin; it reflects the fact that the original similar files are either in /usr/share/munin or /usr/bin. Beware: if you change the name of any munin process, update the log rotation files, otherwise you may easily fill up a disk drive, since munin is kind of noisy, especially when issues arise.)

For convenience, you can download these with my -utils-munin debian/devuan packages:

dpkg -i
apt-get update
apt-get install stalag13-utils-munin

Once everything is set up, you can test/debug it by typing:

su - munin --shell=/bin/bash


What next?

Actually, I’d welcome improvements, since it extracts --host information in the most barbaric way. I am sure it can be done cleanly using Munin::Master::Config or else.

Then I’d welcome any insight about why munin-graph’s --host option does not work the way I’d like it to. Maybe I misunderstand its exact purpose. The help reads:

 --host <host>  Limit graphed hosts to <host>. Multiple --host options
                may be supplied.

To me, it really means that it should not do anything at all to any files of hosts excluded this way. If it means something else, maybe this should be explained.

Avoiding GPG issues while submitting to popularity-contest on Devuan

For some reason, on Devuan, popularity-contest submissions fail with:

gpg: 4383FF7B81EEE66F: skipped: public key not found
gpg: /var/log/ encryption failed: public key not found

The Debian Popularity Contest being described as an attempt to map the usage of Debian packages, I think it useful that it also gets stats from disgruntled Debian users forced to use a fork of the same general scope.

I do not think the data transmitted in this context is really sensitive. So the simplest hack is just to switch off encryption by adding to /etc/popularity-contest.conf:

ENCRYPT="no"
Preventing filenames with colon “:” from being garbled by Samba

In some cases, Samba garbles file names, as backward compatibility with old Microsoft Windows systems that cannot handle long filenames or filenames with specific characters. They are then shown in the form XXXXX~X.ext.

You can switch off this mechanism:

In /etc/samba/smb.conf, in [Global], add:

mangle case = no
mangled names = no

Then simply restart Samba (invoke-rc.d samba restart).
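You can check that the options were taken into account with samba’s own testparm, which prints non-default values:

testparm -s 2>/dev/null | grep mangle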

Files will then be listed with their real names. I am not sure, however, that Microsoft Windows will allow you to open them.