Cloning the installed packages list across LXC containers with apt-clone

apt-clone is quite convenient for keeping several LXC containers running with the same set of installed packages.
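For reference, the basic apt-clone workflow on a single host looks like this (a minimal sketch, the target filename being arbitrary):

# snapshot the list of installed packages (writes /tmp/myhost.apt-clone.tar.gz)
apt-clone clone /tmp/myhost

# replay that snapshot on another host
apt-clone restore /tmp/myhost.apt-clone.tar.gz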

Here is a short bash function that runs apt-clone on a list of containers to synchronize them all:

function lxc-clone {
    MKTEMP=`mktemp --dry-run`

    guests=($(lxc-ls --active))

    # first get clones for each
    for guest in "${guests[@]}"; do
        echo -e "[${shell_datecolor}$(date +%H:%M:%S)${shell_clear} ${shell_containercolor}$guest:${shell_clear} ${shell_promptcolor}#${shell_clear} ${shell_invert}apt-clone clone $@${shell_clear}]"
        lxc-attach -n "$guest" -- apt-clone clone "$MKTEMP.$guest"
        cp -v `lxc-config lxc.lxcpath`/"$guest"/rootfs"$MKTEMP.$guest".apt-clone.tar.gz "$MKTEMP.$guest".apt-clone.tar.gz
    done

    # then do a restore of all in each
    for guest in "${guests[@]}"; do
        echo -e "[${shell_datecolor}$(date +%H:%M:%S)${shell_clear} ${shell_containercolor}$guest:${shell_clear} ${shell_promptcolor}#${shell_clear} ${shell_invert}apt-clone restore $@${shell_clear}]"
        for guestwithin in "${guests[@]}"; do
            echo "=> ...$guestwithin"
            cp -v "$MKTEMP.$guestwithin".apt-clone.tar.gz `lxc-config lxc.lxcpath`/"$guest"/rootfs"$MKTEMP.$guestwithin".apt-clone.tar.gz
            lxc-attach -n "$guest" -- apt-clone restore "$MKTEMP.$guestwithin".apt-clone.tar.gz
            rm -fv `lxc-config lxc.lxcpath`/"$guest"/rootfs"$MKTEMP.$guestwithin".apt-clone.tar.gz
        done
    done

    rm -f "$MKTEMP".*.apt-clone.tar.gz
}

The variable $guests selects which LXC containers to work on. Here, it takes all active containers.
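If you would rather synchronize only a subset of containers, that line can be replaced with an explicit list (container names below are hypothetical):

    guests=(test1 test2 test3)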

(The color variables are set in stalag13-00-shell.sh but aren't required.)

Setting up LXC containers with mapped GID/UID

The output of ps aux on an LXC host is quite messy! But that can be improved, with the benefit of having each LXC container use a specific user namespace: a process is then « unprivileged for operations outside the user namespace but with root privileges inside the namespace ». Easier to check on and likely to be more secure.

A reply to the question « what is an unprivileged LXC container » provides a working howto. The following is a proposal to implement it even more easily.

For each LXC container, you need to pick a UID/GID range. For instance, for container test1, let's pick 100000 65536. It means that root in test1 will actually be 100000 on the main host, user 1001 in test1 will be 101001 on the main host, and so on.

So you must add the map on the main host:

 usermod --add-subuids 100000-165535 root
 usermod --add-subgids 100000-165535 root
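
You can double-check that the ranges were recorded in /etc/subuid and /etc/subgid:

 grep root /etc/subuid /etc/subgid
 # /etc/subuid:root:100000:65536
 # /etc/subgid:root:100000:65536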

Then you must configure the relevant LXC container configuration file, whose location varies according to your lxc.lxcpath.
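You can locate it through lxc-config (a minimal sketch, test1 being our example container):

 CONFIG="`lxc-config lxc.lxcpath`/test1/config"
 echo "$CONFIG"   # e.g. /var/lib/lxc/test1/config

Then add in that file: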

# require userns.conf associated to the distribution used
lxc.include = /usr/share/lxc/config/debian.userns.conf

# specific user map
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

Then you need to update file ownership according to the new mapping. The original poster proposed a few shell commands, but that would only be enough to start the container: files within the container would get inappropriate ownership. Most likely, files that belong to root/0 on the host would show up as owned by nobody/65534. For proper ownership by root/0 within the LXC container, they need to belong to 100000 on the host.

Here comes my increase-uid-gid.pl script: it takes as arguments your LXC container name (or alternatively a path, useful for mounts residing outside of it) and the value to increase UIDs/GIDs by. In our case, it'll be 100000:

# shutting down the container before touching it
lxc-stop --name test1 

# obtain the script
cd
wget https://gitlab.com/yeupou/stalag13/raw/master/usr/local/bin/increase-uid-gid.pl
chmod +x ~/increase-uid-gid.pl

# chown files +100000
~/increase-uid-gid.pl --lxc=test1 --increase=100000

# start the container
lxc-start --name test1

That's all. Obviously, you should check that every daemon is still functioning properly. If not, either a file ownership change was missed (happened once to a container with transmission-daemon) or its mode was not properly set beforehand (happened once to a container with exim4 that was not setuid, which led to a failure with procmail_pipe).

Next container test2? Edit `lxc-config lxc.lxcpath`/test2/config:

# require userns.conf associated to the distribution used
lxc.include = /usr/share/lxc/config/debian.userns.conf

# specific user map
lxc.id_map = u 0 200000 65536
lxc.id_map = g 0 200000 65536

Then run:

lxc-stop --name test2
usermod --add-subuids 200000-265535 root
usermod --add-subgids 200000-265535 root
~/increase-uid-gid.pl --lxc=test2 --increase=200000
lxc-start  --name test2

I tested the script on 16 LXC containers with no problem so far.

If you need to deal with extra mounted directories (lxc.mount.entry=…), use the --path option.
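For instance (the extra directory below is hypothetical):

~/increase-uid-gid.pl --path=/srv/test1-data --increase=100000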

If you need to deal with a container that was already mapped (for instance already 100000 65536 but you would like it to be 300000 65536), you'll need to raise the --limit that is by default equal to the increase value: that would be --increase=200000 --limit=300000. This limit exists so you can re-run the script on the same container with no risk of files getting out of range.
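For that already-mapped scenario, the whole sequence would look like this (a sketch reusing test1, after updating its lxc.id_map lines to 300000 accordingly):

lxc-stop --name test1
usermod --add-subuids 300000-365535 root
usermod --add-subgids 300000-365535 root
~/increase-uid-gid.pl --lxc=test1 --increase=200000 --limit=300000
lxc-start --name test1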

For the record, here follows the script as it is today (but it is always best to get the latest version from gitlab, because I won't keep this page updated with bugfixes/improvements):

#!/usr/bin/perl

use strict;
use File::Find;
use Getopt::Long;

### options
my ($getopt, $help, $path, $lxc, $increase, $limit);
eval {
    $getopt = GetOptions("help" => \$help,
			 "lxc=s" => \$lxc,
	                 "path=s" => \$path,
	                 "increase=i" => \$increase,
	                 "limit=i" => \$limit);
};

if ($help or
    !$increase or
    (!$path and !$lxc)) {
    # increase is mandatory
    # either path or lxc also
    # print help if missing
        print STDERR "
  Usage: $0 [OPTIONS] --lxc=name --increase=100000
             or
         $0 [OPTIONS] --path=/directory/ --increase=100000

Will increase all files UID/GID by the value set.

      --lxc=name    LXC container name, will be used to determine path
      --path=/dir   No LXC assumption, just work on a given path
    
      --increase=n  How much to increment
      --limit=n     Increase limit, by default equal to increase

Useful for instance when you add to a LXC container such config:
  lxc.id_map = u 0 100000 65536
  lxc.id_map = g 0 100000 65536

And the host system having the relevant range set: 
  usermod --add-subuids 100000-165535 root
  usermod --add-subgids 100000-165535 root

It would update UID/GID within rootfs to match the proper range. Note that
additional configured mount must also be updated accordingly, using --path 
for instance.

By default, limit is set to increase value so you can run it several time on 
the same container, the increase will be effective only once. You can set the
limit to something else, for instance if you want to increase by 100000 a 
container already within the 100000-165536 range, you would have to 
use --increase=100000 --limit=200000.

This script is primitive: it should work in most cases, but if some service fails
to work after the LXC container restart, it is probably because one or several
files were missed.

Author: yeupou\@gnu.org
       https://yeupou.wordpress.com/
";
	exit;
}

# limit set to increase by default
$limit = $increase unless $limit;

# if lxc set, use it to define path
if ($lxc) {
    my $lxcpath = `lxc-config lxc.lxcpath`;
    chomp($lxcpath);
    $path = "$lxcpath/$lxc/rootfs";
}

# in any case, path must be given and found
die "path $path: not found, exit" unless -e $path;
print "path: $path\n";


### run
find(\&wanted, $path);

# if lxc, check main container config
if ($lxc) {
    my $lxcpath = `lxc-config lxc.lxcpath`;
    chomp($lxcpath);
    
    # https://unix.stackexchange.com/questions/177030/what-is-an-unprivileged-lxc-container
    # directory for the container
    chown(0,0, "$lxcpath/$lxc");
    chmod(0775, "$lxcpath/$lxc");
    # container config
    chown(0,0, "$lxcpath/$lxc/config");
    chmod(0644, "$lxcpath/$lxc/config");
    # container rootfs - chown will be done during the wanted()
    chmod(0775, "$lxcpath/$lxc/rootfs");
}


exit;

sub wanted {
    print $File::Find::name;
    
    # find out current UID/GID
    my $originaluid = (lstat $File::Find::name)[4];
    my $newuid = $originaluid;
    my $originalgid = (lstat $File::Find::name)[5];
    my $newgid = $originalgid;
    
    # increment, but only if we are below the limit (by default equal to the
    # increase, so already shifted files are left untouched)
    $newuid += $increase if ($originaluid < $limit);
    $newgid += $increase if ($originalgid < $limit);

    # update if there is at least one change
    if ($originaluid ne $newuid or
	$originalgid ne $newgid) {
	chown($newuid, $newgid, $File::Find::name);
	print " set to UID:$newuid GID:$newgid\n";
    } else {
	print " kept to UID:$originaluid GID:$originalgid\n";
    }
      
}

# EOF

Setting up LXC containers to run with ISC DHCPd and BIND instead of dnsmasq, along with domain name spoofing/caching

By default, lxc-net sets up containers to work along with dnsmasq, which provides both DNS and DHCP services: name resolution and IP assignment.

The recommended setup of lxc-net includes an /etc/lxc/dnsmasq.conf that only states "dhcp-hostsfile=…", plus the said hosts file, /etc/lxc/dnsmasq-hosts.conf, with one "hostname,IP" line per host.

It works fine and there is no real reason to use anything else. Though it is obvious that lxc-net lacks a bit of modularity, being clearly tied, hardcoded even, to dnsmasq.

Except that on my main server, I already have ISC DHCPd serving IPs to the local area network and BIND 9 doing not only name resolution caching but also name resolution for said local area network. Not only is having dnsmasq in addition to BIND 9 and ISC DHCPd a bit overkill, it also requires additional config to bind them to specific interfaces to avoid conflicts.

dnsmasq shutdown

We could simply do a killall dnsmasq and comment out the part in /usr/lib/x86_64-linux-gnu/lxc/lxc-net where it gets started. For now, we'll just prevent it from messing with interfaces, setting /etc/lxc/dnsmasq.conf to:

interface=lxcbr0
no-dhcp-interface=lxcbr0

Initial setup

This article assumes you already have BIND and ISC DHCPd set up for local area network (otherwise, as said, in most use cases, dnsmasq will be just fine).

If you do not have a preexisting setup but want, nonetheless, to switch to BIND 9 and ISC DHCPd, you could start with the bind setup provided in my setting up a silent/low energy consumption home server article.

That article includes dynamic client name updates. The only thing to pay attention to is that it uses 10.0.0.0/24 for the local area network whereas, in the present article, 10.0.0.0/24 is used for the LXC bridge network while 192.168.1.0/24 is dedicated to the local area network.

DNS setup

I adjusted my preexisting setup (bind9 files part of my -utils-cache-spoof debian package, which I suggest you look at directly to get their current exhaustive content) based on bind9's notion of ACLs (access control lists), which sort clients depending on which network they belong to, and, subsequently, on bind9's notion of "views", which define which zones are served to these clients according to the ACLs.

The following may seem like a lot but, if you grab my debian -utils-cache-spoof package, it is actually not that much.

Since the LXC bridge here uses the 10.0.0.0/24 network, I have in named.conf.acl:

[...]

acl lan {
    // the cache host IP should not be part of regular lan ACL
    !10.0.0.88;
    // private IPv4 address spaces
    10.0.0.0/8;
    172.16.0.0/12;
    192.168.0.0/16;
};

acl lannocache {
   // counterpart of earlier statement: cache host needs proper unspoofed name resolution
   10.0.0.88;
};

Note that the .88 container IP is dedicated to caching (apt/steam, as in my previous setup with dsniff as spoofer and another setup of mine using bind9 instead, but outside of any LXC host/container context), so it needs to be excluded from the general 10.0.0.0/8 ACL.

These ACLs are in turn used in named.conf.views (Update: with the latest versions of Bind9, we cannot include twice a file that has an allow-update statement within, hence the .local and .local_ref distinction):

// clients are set in named.conf.acl
include "/etc/bind/named.conf.acl";

// loopback view, for the server itself
view "loopback" {
 match-clients { loopback; };
 include "/etc/bind/named.conf.default-zones";
 include "/etc/bind/named.conf.local";
 include "/etc/bind/named.conf.ads";
};

// otherwise local network area
view "lan" {
 match-clients { lan; };
 include "/etc/bind/named.conf.default-zones";
 include "/etc/bind/named.conf.local_ref";
 include "/etc/bind/named.conf.cache";
 include "/etc/bind/named.conf.ads";
};

// local network area without cache, for host that will get unspoofed name resolution
// (needs to be set up one by one in named.conf.acl)
view "lannocache" {
 match-clients { lannocache; };
 include "/etc/bind/named.conf.default-zones";
 include "/etc/bind/named.conf.local_ref";
 include "/etc/bind/named.conf.ads";
};


[...]

Obviously, if there were no notion of caching (and name spoofing), the setup would be even more straightforward: a single view would be enough. Nonetheless, this example shows an easy way to treat hosts differently depending on whether they are LXC containers or regular LAN clients.

About the zones included (or not) in views (all files being in /etc/bind):

  • named.conf.default-zones is standard ;
  • named.conf.local is almost standard, it is where you define your local domains/network ;
  • Update: named.conf.*_ref is required with recent versions of Bind9 to be able to reuse the content of a named.conf.* in which a zone file is defined and can be updated (allow-update): you will need the in-view feature to refer to the view that previously defined it, since trying another include would sprout writeable file ‘…’ already in use ;
  • named.conf.cacheBASEIP contains the list of spoofed domains, the ones we want to cache, generated by named.conf.cache-rebuild.sh, BASEIP being optional;
  • named.conf.ads contains an ads servers blacklist generated by update-bind-ads-block.pl ;

So basically, you need to edit /etc/bind/named.conf.local to something like:

// to store A/CNAME records for DOMAIN.EXT
zone "DOMAIN.EXT" {
 type master;
 notify no;
 file "/etc/bind/db.DOMAIN.EXT";
 allow-update { key ddns; };
};

// (we use 192.168.1.0/24 for regular LAN)
// to store PTR records (IP to name) for regular LAN 
zone "1.168.192.in-addr.arpa" {
 type master;
 notify no;
 file "/etc/bind/db.192.168.1";
 allow-update { key ddns; };
};

// (we use 10.0.0.0/24 for LXC bridge)
// to store PTR records for LXC bridge)
zone "0.0.10.in-addr.arpa" {
 type master;
 notify no;
 file "/etc/bind/db.10.0.0";
 allow-update { key ddns; };
};

Update: since a recent Bind9 update, to be able to reuse these zones in another view, you'll need to edit /etc/bind/named.conf.local_ref to something like:

// simple reference to previously defined zones for view loopback in named.conf.local
zone "DOMAIN.EXT" { in-view "loopback"; }; 
zone "1.168.192.in-addr.arpa" { in-view "loopback"; }; 
zone "0.0.10.in-addr.arpa" { in-view "loopback"; };

You also require the relevant db. files: for instance db.ads pointing to loopback to filter ads/spam sources, db.cache pointing to the cache container .88 (possibly also db.cacheBASEIP) and local db. files such as db.DOMAIN.EXT:

$ORIGIN .
$TTL 86400 ; 1 day
DOMAIN.EXT IN SOA server.DOMAIN.EXT. root.DOMAIN.EXT. (
                        2823 ; serial
                        28800 ; refresh (8 hours)
                        7200 ; retry (2 hours)
                        604800 ; expire (1 week)
                        10800 ; minimum (3 hours)
                        )
          NS     server.DOMAIN.EXT.
          MX     10 server.DOMAIN.EXT.
$ORIGIN DOMAIN.EXT.
server    A      192.168.1.1
; the rest will be filled by ddns

Likewise, you should have db.192.168.1 and db.10.0.0 (obviously with 1.168.192 replaced by 0.0.10) as:

$ORIGIN .
$TTL 86400 ; 1 day
1.168.192.in-addr.arpa IN SOA server.DOMAIN.EXT. root.DOMAIN.EXT. (
                       2803 ; serial
                       28800 ; refresh (8 hours)
                       7200 ; retry (2 hours)
                       604800 ; expire (1 week)
                       10800 ; minimum (3 hours)
                       )
           NS      server.DOMAIN.EXT.
$ORIGIN 1.168.192.in-addr.arpa.
1          PTR     server.DOMAIN.EXT.
; the rest will be filled by ddns too

And then you must run the scripts to generate named.conf.cacheBASEIP and named.conf.ads. You'll probably need to edit the /etc/bind/named.conf.cache-rebuild.sh variables according to what you are actually caching.

BIND gets updates from ISC DHCPd whenever a new client gets a lease; this is configured in named.conf.dhcp (not packaged):

include "/etc/bind/ddns.key";

controls {
 inet 127.0.0.1 allow { localhost; } keys { ddns; };
};

The ddns key was generated as documented in my setting up a silent/low energy consumption home server article as well as in Debian docs:

# dnssec-keygen -a HMAC-MD5 -b 128 -r /dev/urandom -n USER ddns

Out of the generated Kddns.*.private file, you take the content of the "Key:" statement and put it in /etc/bind/ddns.key:

key ddns {
 algorithm HMAC-MD5;
 secret "CONTENTOFTHEKEY";
};
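
To pull the secret out of the .private file, something like this does the job (a sketch, assuming the key files were generated in the current directory and named after the ddns key):

# print the base64 secret to put in /etc/bind/ddns.key
grep "^Key:" Kddns.*.private | cut -d ' ' -f 2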

So this setup implies that your named.conf looks like:

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.dhcp";
include "/etc/bind/named.conf.views";

Besides, my /etc/bind/named.conf.options is generated by /etc/dhcp/dhclient-exit-hooks.d/bind so it includes the proper forwarders and listen-on exception.

That should cover it for BIND.

ISC DHCPd setup

In my case, I still want the IPs of LXC containers to be fixed. The /etc/lxc/dnsmasq-hosts.conf syntax, one "hostname,IP" per line, is more convenient than the ISC DHCPd syntax "host hostname { hardware ethernet MAC ADDRESS; fixed-address IP; }".

I decided to keep the same /etc/lxc/dnsmasq-hosts.conf, symlinked to /etc/lxc/hosts.conf, which is used by the /etc/lxc/dhcpd-hosts.rebuild.sh script (not packaged for now) to generate /etc/dhcp/dhcpd_lxc-hosts.conf:

#!/bin/bash
# /etc/lxc/dhcpd-hosts.rebuild.sh

HOSTS=/etc/lxc/hosts.conf # similar to dnsmasq-hosts.conf: host,IP
DESTINATION=/etc/dhcp/dhcpd_lxc-hosts.conf
LXC_PATH=`lxc-config lxc.lxcpath`
cd $LXC_PATH

echo > $DESTINATION
for container in *; do
 if [ ! -d "$container" ]; then continue; fi
 if [ ! -e "$container/config" ]; then continue ; fi
 echo "host lxc-$container {" >> $DESTINATION
 echo " hardware ethernet "`cat "$container/config" | grep lxc.network.hwaddr | cut -f 2 -d "="`";" >> $DESTINATI
ON
 echo " fixed-address "`cat "$HOSTS" | grep "$container" | cut -f 2 -d ","`";" >> $DESTINATION
 echo "}" >> $DESTINATION 
done
# EOF
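
Given a hypothetical /etc/lxc/hosts.conf line such as test1,10.0.0.11 (and a made-up MAC address in the container config), it would produce in /etc/dhcp/dhcpd_lxc-hosts.conf an entry like:

host lxc-test1 {
 hardware ethernet 00:16:3e:xx:xx:xx;
 fixed-address 10.0.0.11;
}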

This primitive script will sprout out a proper ISC DHCPd host file. You have to run it each time you create a new container. Once done, we simply edit /etc/dhcp/dhcpd.conf:

# The ddns-updates-style parameter controls whether or not the server will
# attempt to do a DNS update when a lease is confirmed. We default to the
# behavior of the version 2 packages ('none', since DHCP v2 didn't
# have support for DDNS.)
ddns-updates on;
ddns-update-style interim;
ddns-domainname "DOMAIN.EXT";
ddns-rev-domainname "in-addr.arpa.";
ignore client-updates; # no touching the FQDN
include "/etc/dhcp/ddns.key";

# option definitions common to all supported networks...
option domain-name "DOMAIN.EXT";
option domain-search "DOMAIN.EXT", "ANOTHERDOMAIN.EXT";
option domain-name-servers 192.168.1.1;
option routers 192.168.1.1;

default-lease-time 600;
max-lease-time 6000;
update-static-leases on;

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
authoritative;

# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;

# LAN clients
subnet 192.168.1.0 netmask 255.255.255.0 {

 # dynamic IP depends whether the client MAC address is known
 pool {
   range 192.168.1.20 192.168.1.99;
   deny unknown-clients;
 }
 pool {
   range 192.168.1.100 192.168.1.250;
   allow unknown-clients; 
 }

 # iPXE / boot on lan
 if exists user-class and option user-class = "iPXE" {
   filename "ipxe-boot";
 } else {
   filename "undionly.kpxe";
 }
 next-server 192.168.1.1;
}

# LXC clients
subnet 10.0.0.0 netmask 255.255.255.0 {
 # use the subnet-specific router
 option routers 10.0.0.1;
 # no pool, all IP are fixed here
 # force lease time to be at least weekly
 min-lease-time 604800;
 max-lease-time 604800;
 # no boot on lan either
}

# zones
zone DOMAIN.EXT. {
 primary 127.0.0.1;
 key ddns;
}
zone 1.168.192.in-addr.arpa. {
 primary 127.0.0.1;
 key ddns;
}
zone 0.0.10.in-addr.arpa. {
 primary 127.0.0.1;
 key ddns;
}


# LAN known clients 
 host trendnetusb { hardware ethernet 00:50:b6:08:xx:xx; }
 host ugreenusb { hardware ethernet 00:0e:c6:fa:xx:xx; }

# LXC host
include "/etc/dhcp/dhcpd_lxc-hosts.conf";

That's all. Obviously, if you want your LXC containers to get completely dynamically assigned IPs, you do not even need this whole host setup: you just set a pool { } with a range of IPs (and remove the specific lease time).
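A fully dynamic variant of the LXC subnet declaration could look like this (a sketch, the range being arbitrary):

# LXC clients, dynamic IPs
subnet 10.0.0.0 netmask 255.255.255.0 {
 option routers 10.0.0.1;
 pool {
   range 10.0.0.100 10.0.0.200;
 }
}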

The cache LXC container

I won't go into much detail: my -utils-cache-apt and -utils-cache-steam debian packages should work out of the box on a LXC container, providing both the necessary nginx cache-apt and cache-steam config.

If you use resolvconf and ISC DHCP clients on LXC containers, the resolvconf to nginx resolver config script will set up /etc/nginx/conf.d/resolver.conf accordingly.

If you use udhcpc, this resolvconf script will be ignored  but the default /etc/nginx/conf.d/resolver.conf includes, in comments, proposed changes to /etc/udhcpc/default.script to generate  /etc/nginx/conf.d/resolver.conf accordingly.

Otherwise, you need to hand-configure /etc/nginx/conf.d/resolver.conf:

## (set resolver to something else if your local interface got
## domain names spoofed, 8.8.8.8 for Google resolver for example.
#resolver 127.0.0.1 ipv6=off; # without lxc
resolver 10.0.0.1 ipv6=off;   # within lxc

Troubleshooting

I have had this setup for a while and noticed the following:

  • with the ISC DHCP client within the LXC containers, I get the bad udp checksums in N packets issue; the iptables -A POSTROUTING -t mangle -p udp --dport 67 -j CHECKSUM --checksum-fill rule set up by lxc-net is helpless; the solution I picked is to use udhcpc within all LXC containers, which does not trigger the problem, with the obvious drawback that the cache container must use the edited /etc/udhcpc/default.script option since resolvconf will have no effect;
  • if ISC DHCPd and Bind9, on the LXC host, are started before or at the same time as lxc and lxc-net, they may not listen on the LXC bridge interface, which may be missing at their start time; as a result, while everything could seem properly up, LXC containers would fail to get an IP assigned until you restart ISC DHCPd; this does not occur after adding lxc lxc-net to the Should-Start: part of the ISC DHCPd and Bind9 init.d scripts (see the sketch after this list).
  • Update: with recent Bind9 versions (notably since Debian 9.0), if you have a zone defined twice with a file that can be updated, it won't start and the logs will state something like writeable file ‘…’ already in use. The workaround, using in-view, is described earlier. Granted, it kills a bit of the interest of using views and leads to an ugly, confusing setup.
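
Regarding the Should-Start: tweak mentioned above, it amounts to extending the LSB header of /etc/init.d/isc-dhcp-server and /etc/init.d/bind9 along these lines (a sketch, the other header lines being left untouched):

### BEGIN INIT INFO
# [...]
# Should-Start:    <existing entries> lxc lxc-net
# [...]
### END INIT INFO

Depending on the init system, the boot ordering may need to be refreshed afterwards (insserv, or systemctl daemon-reload under systemd).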

Avoiding dnsmasq interference

If you are satisfied and do not require dnsmasq anymore, I suggest removing any dnsmasq package and adding a symlink so the dnsmasq command produces no error (when called by /usr/lib/x86_64-linux-gnu/lxc/lxc-net for instance):

ln -s /bin/true /usr/bin/dnsmasq

Using networked filesystems hosted by LXC containers with Samba

For more than a decade, I used NFS on my home server to share files. I did not consider using Samba for anything but providing Windows access to shares. NFSv3 then NFSv4 suited me, allowing per-host/IP write access policies. The main drawback was the very crude handling of NFS server downtime: X sessions would be half-frozen, requiring a restart to be usable once again.

However, I recently moved my servers to LXC (which I'll probably document a bit later) and the NFS server on Debian, as you can guess from the nfs-kernel-server package name, is kernel-based: not only does having a server tied to the kernel inside a container apparently defeat the purpose of LXC containers, it also does not seem to really work reliably. I managed to get it running, but it had to run on both the master host and within the container. Even then, depending on which started first, the shares could end up unavailable to hosts.

I checked a few articles over the web (https://superuser.com/questions/515080/alternative-to-nfs-or-better-configuration-instable-network-simple-to-set-up, http://serverfault.com/questions/372151/nas-performance-nfs-vs-samba-vs-glusterfs etc) and it looks like, as of today, you can expect performance from Samba as decent as from NFS. That could possibly be proven wrong if I were using NFS massively, writing a lot through networked file systems, opening a big number of files simultaneously or moving big files around a lot, but I have really simple requirements: no latency when browsing directories, no latency when playing 720p/1080p videos, and that's about it.

I already had a restricted write access directory per user, via Samba, but I use it only on lame systems as a temporary area: on proper systems, I use SSH/scp/rsync/git to manipulate/save files.

Dropping NFS, I now have quite a simple setup; here are the relevant parts of my /etc/samba/smb.conf:

[global]

## Browsing/Identification ###

# What naming service and in what order should we use to resolve host names
# to IP addresses
 name resolve order = lmhosts host wins bcast


#### Networking ####

# The specific set of interfaces / networks to bind to
# This can be either the interface name or an IP address/netmask;
# interface names are normally preferred
 interfaces = eth0

# Only bind to the named interfaces and/or networks; you must use the
# 'interfaces' option above to use this.
# It is recommended that you enable this feature if your Samba machine is
# not protected by a firewall or is a firewall itself. However, this
# option cannot handle dynamic or non-broadcast interfaces correctly.
 bind interfaces only = true


#### File names ####

# remove characters forbidden on Windows
mangled names = no

# charsets
dos charset = iso8859-15
unix charset = UTF8


####### Authentication #######

# "security = user" is always a good idea. This will require a Unix account
# in this server for every user accessing the server. See
# /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
# in the samba-doc package for details.
 security = user

# Private network
 hosts allow = 192.168.1.


# You may wish to use password encryption. See the section on
# 'encrypt passwords' in the smb.conf(5) manpage before enabling.
 encrypt passwords = true

# If you are using encrypted passwords, Samba will need to know what
# password database type you are using. 
 passdb backend = tdbsam

obey pam restrictions = yes

guest account = nobody
 invalid users = root bin daemon adm sync shutdown halt mail news uucp operator www-data sshd Debian-exim debian-transmission
 map to guest = bad user

# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
 unix password sync = yes


#======================= Share Definitions =======================


realm = ...


[commun]
comment = Commun
path = /srv/common
browseable = yes
writable = yes
public = yes
guest ok = yes
valid users = @smbusers
force group = smbusers
create mode = 0660
directory mode = 0770
force create mode = 0660
force directory mode = 0770

[tmpthisuser]
comment = Données protégées
path = /srv/users/thisuser
browseable = yes
writable = yes
public = yes
valid users = thisuser
create mode = 0600
directory mode = 0700
force create mode = 0600
force directory mode = 0700
guest ok = no

 

I installed the package libpam-smbpass and edited /etc/pam.d/samba as follows:

@include common-auth
@include common-account
@include common-session-noninteractive
@include common-password

For this setup to work, you need every user allowed to connect:

  • to be a member of the group smbusers – including nobody (or whatever the guest account is) ;
  • to have a unix password set ;
  • to be known to samba (smbpasswd -e thisuser or option -a).

If you are not interested in a per-user restricted access area, only the nobody account will need to be taken care of.
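
For a hypothetical user thisuser, that boils down to (a sketch, using the group and guest account names from the config above):

adduser thisuser smbusers
adduser nobody smbusers
passwd thisuser        # make sure a unix password is set
smbpasswd -a thisuser  # add the user to the samba password database
smbpasswd -e thisuser  # and enable it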

And, obviously, files and directories ownership and modes must be set accordingly:

cd /srv/common
# target: (0770/drwxrwx---) GID : (nnnnn/smbusers)
chgrp -R smbusers .
find . -type d -print0 | xargs -0 chmod 770 -v
find . -type f -print0 | xargs -0 chmod 660 -v
cd /srv/users
# target: (0700/drwx------) UID : ( nnnn/ thisuser) GID : ( nnnn/ thisuser)
chown -R thisuser:thisuser thisuser   # repeat for each user directory
find . -type d -print0 | xargs -0 chmod 700 -v
find . -type f -print0 | xargs -0 chmod 600 -v
# main directories, in addition, need the setgid bit so future directories inherit the proper group
chmod 2770 /srv/common/*
chmod 2700 /srv/users/*

To access this transparently over GNU/Linux systems, just add in /etc/fstab:

//servername/commun /mountpoint cifs guest,uid=nobody,gid=users,iocharset=utf8 0 0

This assumes that any user entitled to access the files belongs to the users group. If not, update accordingly.
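
For the per-user share, a credentials-based entry is a possibility (hypothetical paths; the credentials file holds username= and password= lines):

//servername/tmpthisuser /mountpoint cifs credentials=/home/thisuser/.smbcredentials,uid=thisuser,iocharset=utf8 0 0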

With this setup, there is no longer any IP-based specific write access set up but, over the years, I found out it was quite useless for my setup.

The only issue I have is with files containing a colon (":"). Due to MS Windows limitations, CIFS lists these files but access to them is impossible. The easiest fix I found was to actually rename these files (not a problem due to the nature of the files served) through a cronjob, /etc/cron.hourly/uncolon:

#!/bin/bash
# a permanent cifs based fix would be welcomed
# (rename here is the perl rename, which accepts a s/// expression)
find "/srv" -name '*:*' -exec rename 's/://g' {} +

but I’d be interested in better options.