Setting up a silent/low energy consumption home server (DHCP/DNS/SMB/UPnP)

Most users are probably fine with their ISP modem/box, which may even provide a hard disk. But having one’s own home server gives full control over the process, and it’s not something utterly frivolous: no real limit on storage space (except budget), a finely tuned firewall, etc. In the past, it came at the expense of silence, energy consumption and space, but no longer, as described here.

Hardware setup:

The hardware is the following:
– board (APU): Intel DN2800MT
– RAM: 2 x 2 GB PC8500 DDR3 SO-DIMM
– Hard drive: Western Digital WD Green 3.5″ – SATA III 6 Gb/s – 2 TB (Caviar)
– Secondary ethernet: ST1000SMPEX (Mini PCI-E)
– Wifi: TP-Link TL-WDN4800 (PCI-E)
+ a laptop adapter (16V, 4A)
+ a small case

The APU itself has a thermal design power (TDP) below 10 W. The hard drive is of the “Green” type (lower RPM than usual, etc.). It’s important to note that the RAM is of the SO-DIMM type (usually for laptops), PC8500 (the max frequency supported by this board/CPU), and that a laptop power charger/adapter is necessary instead of a regular power supply unit. Any case designed for the mini-ITX form factor will do. Low energy consumption, silent and small.

I was actually looking at Sapphire Mini xxxx hardware at first, but it’s quite a pain to get it shipped. So I went instead for the Intel Atom based hardware, despite its obvious drawbacks: it supports SATA II instead of III, it maxes out at 4 GB of SO-DIMM RAM, and it’s known to be poorly supported on the target system, which is Debian GNU/Linux. I actually don’t care much about GPU support, 4 GB is more than enough for a home server and SATA II is acceptable, so it should be fine anyway.

(Obviously, you should plug a switch or hub into the secondary ethernet port, otherwise you’ll only be able to connect one box over ethernet)

Software setup:

Picking software:

Most obvious choice: we’ll run Debian stable on it, that is Wheezy, the about-to-be-frozen-and-released one. The stable model in itself makes this distro the best choice for a server: it is stable and kept secure over time.

It’s supposed to work with a heterogeneous network: GNU/Linux, MS Windows, over ethernet or wireless. So we’ll want:
– OpenSSH as secure shell, for the administrator
– any dhcpd server to provide IPs on the fly
– Samba for networked filesystems – and only Samba, as we want each box to keep its original setup and not get anything specific
– Bind to act as DNS cache and manage the domain
– Nginx as HTTP server to provide basic sysinfo (phpsysinfo) and basic sysadmin (mostly: resetting Samba passwords and watching connected wireless devices)
– transmission-daemon plus my script to provide a networked BitTorrent client
– minidlna to make files available to non-computer networked devices

Start with Debian netinst base install:

Obviously we’ll want some swap space. 2 GB should be more than enough. Then we’ll want three ext4 filesystems: one for user data, one for the system, one for a system copy, as fallback. If we had two different disks, the system copy would obviously go on the second one.
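Concretely, on a single 2 TB disk, the layout could look like this (a sketch only: aside from /dev/sda7, which the backup section later formats, the partition numbers, sizes and labels shown here are illustrative assumptions):

```
# swap + three ext4 filesystems on one 2 TB disk (sizes are examples)
# /dev/sda1    2 GB   swap
# /dev/sda5   20 GB   ext4   /        (system,      e.g. LABEL=wd2Tdebian64)
# /dev/sda6   rest    ext4   /srv     (user data)
# /dev/sda7   20 GB   ext4   (backup) (system copy, e.g. LABEL=wd2Tdebian64bak)
```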

We’ll start the basic Debian installation with that in mind: we’ll just install the Debian base system plus OpenSSH anyway.





Setting up basic functionalities/networking after reboot:

First, we’ll install some useful utilities:

apt-get install lm-sensors hddtemp cpufrequtils debfoster etckeeper \
  localepurge ethtool emacs23-nox ntp wget

Regarding sensors, you should configure hddtemp to run as a listening daemon, and run:


At this point, network devices should be known to the system. We have quite usual hardware, so the correct modules should already be loaded. lspci should return:

01:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
02:00.0 Network controller: Atheros Communications Inc. AR9300 Wireless LAN adaptor (rev 01)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

Edit the NAME strings in /etc/udev/rules.d/70-persistent-net.rules so that eth0 is the internet device and eth1 and wlan1 the intranet ones, for clarity’s sake. You may unload and reload the modules of these devices for them to get their definitive names.
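For reference, the entries in 70-persistent-net.rules look like the following (the MAC address below is a made-up example, not from this setup):

```
# assumed example: pin one card to the eth1 name (hypothetical MAC address)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
```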

We’ll use hostapd to provide Wifi access.

apt-get install hostapd




## base

## wifi mode

## access with WPA PSK

# hw address filter (relaxed, as it is not real security)

touch /etc/hostapd/hostapd.deny

(this enables WPA2 access; if you also want WPA1, you must set wpa=3 and uncomment wpa_pairwise)
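Since the body of hostapd.conf did not survive above (only its comments), here is a minimal sketch matching those comments; the ssid, channel and passphrase are placeholders to adapt:

```
## base
interface=wlan1
driver=nl80211

## wifi mode
ssid=mynetworkname
hw_mode=g
channel=6

## access with WPA PSK
wpa=2
wpa_passphrase=ChangeMePlease
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
#wpa_pairwise=TKIP

# hw address filter (relaxed, as it is not real security)
macaddr_acl=0
deny_mac_file=/etc/hostapd/hostapd.deny
```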

Then we’ll configure the network, defining a different subnet for wired and wireless connectivity. Some tutorials on the web propose bridging the wireless to the wired. We won’t do that: we actually want to be able to easily distinguish the source of any request. Regarding security, the safe bet is to assume that wireless is always on the verge of getting cracked, so it must be kept confined.
Edit /etc/network/interfaces:

# internet
auto eth0
allow-hotplug eth0
iface eth0 inet dhcp

# intranet (wired)
auto eth1
iface eth1 inet static

# intranet (wireless)
auto wlan1
iface wlan1 inet static

We need a working dhcp daemon, able to dynamically register new boxes:

apt-get install isc-dhcp-server

In /etc/default/isc-dhcp-server:

INTERFACES="eth1 wlan1"

In /etc/dhcp/dhcpd.conf:

option domain-name "mynetworkname.ici";
option domain-name-servers;
option routers;

log-facility local7;

# wired
subnet netmask {
}

# wireless
subnet netmask {
	option routers;
}
(it’s best to add, as fallback, the default DNS servers provided by your ISP to the domain-name-servers option, as shown in /etc/resolv.conf)
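As the addresses were stripped from the snippet above, here is what a complete dhcpd.conf of this shape could look like, using made-up RFC 1918 subnets ( for wired, for wireless, the server holding .1 on each):

```
option domain-name "mynetworkname.ici";
option domain-name-servers;

log-facility local7;

# wired
subnet netmask {

# wireless
subnet netmask {
	option routers;
```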

The dhcp client must be tuned a bit, /etc/dhcp/dhclient.conf:

prepend domain-name-servers;
supersede domain-name "mynetworkname.ici";

We obviously need IP forwarding; edit /etc/sysctl.conf:

net.ipv4.ip_forward=1
and also immediately doing a:

echo 1 > /proc/sys/net/ipv4/ip_forward

We also need iptables:

apt-get install iptables-persistent
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
/etc/init.d/iptables-persistent save

(I actually reused a Perl script that also does some nice firewalling, instead of simply doing this)

ifup eth1
ifup wlan1
invoke-rc.d hostapd restart
invoke-rc.d isc-dhcp-server restart

At this point, you should be able to log in over SSH from a distant box.

Provide local (dynamic) domain name server:

apt-get install bind9

Normally you would set up forwarders with your ISP’s DNS (as in /etc/resolv.conf) in /etc/bind/named.conf.options. But don’t bother: /etc/bind/named.conf.options will be automatically generated by a script installed in a later step. Instead, remove it so the script will make sure it is set up properly on its first run:

rm -f /etc/bind/named.conf.options

You need to create zones (named as you like) in /etc/bind/named.conf.local:

zone "mynetworkname.ici" {
	type master;
	notify no;
	file "/etc/bind/db.mynetworkname.ici";
	allow-update { key dhcpupdate; };
};

zone "" {
	type master;
	notify no;
	file "/etc/bind/db.10.0.0";
	allow-update { key dhcpupdate; };
};

cd /etc/bind && cp db.local db.mynetworkname.ici


$TTL    64800
@       IN      SOA     gate.mynetworkname.ici. root.mynetworkname.ici. (
                        2        ; Serial
                        604800   ; Refresh
                        86400    ; Retry
                        2419200  ; Expire
                        604800 ) ; Negative Cache TTL

                        IN      NS      nano.mynetworkname.ici.
mynetworkname.ici.      IN      A
mynetworkname.ici.      IN      MX      10
nano                    IN      A
gate                    IN      CNAME   nano

cp db.255 db.10.0


; BIND reverse data file
@       IN      SOA     nano.mynetworkname.ici. root.mynetworkname.ici. (
                        1        ; Serial
                        604800   ; Refresh
                        8600     ; Retry
                        2419200  ; Expire
                        604800 ) ; Negative Cache TTL
        IN      NS      nano.mynetworkname.ici.
1.0     IN      PTR     nano.mynetworkname.ici.

Now we add support for dynamic updates:

cd /etc/dhcp
dnssec-keygen -a hmac-md5 -b 256 -n USER dhcpupdate


key dhcpupdate {
	algorithm hmac-md5;
};

(the secret being the last string of the .key file we’ve just generated)
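To avoid copy/paste mistakes, the secret can also be pulled out of the generated file with a one-liner; this assumes the usual “Key: &lt;base64&gt;” line of the dnssec-keygen v1.x private-key file format:

```shell
# print the base64 secret from a dnssec-keygen .private file;
# it sits after the "Key:" tag in the v1.x private-key format
extract_secret() {
    awk '/^Key:/ { print $2 }' "$1"
}

# usage (the +157+12345 id is an example):
# extract_secret /etc/dhcp/Kdhcpupdate.+157+12345.private
```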


ddns-domainname "mynetworkname.ici";
ddns-rev-domainname "";
ddns-update-style interim;
ignore client-updates;
update-static-leases on;

key dhcpupdate {
	algorithm hmac-md5;
};

zone mynetworkname.ici. {
	key dhcpupdate;
};

zone {
	key dhcpupdate;
};

Restrict read access to files containing the secret key and restart all:

chmod o-r /etc/bind/named.conf.local
chmod o-r /etc/dhcp/dhcpd.conf
rm /etc/dhcp/Kdhcpupdate.*.key /etc/dhcp/Kdhcpupdate.*.private

invoke-rc.d isc-dhcp-server restart
invoke-rc.d bind9 restart

Put user data in place:

User data will go in /srv. So we’ll add a few symlinks, after mounting the partition.

mkdir /srv/home /srv/common
rm -r /home && ln -s /srv/home /home

We then add default dirs:

mkdir /srv/common/torrents /srv/common/download /srv/common/musique /srv/common/films /srv/common/temp
cd /srv/common && chmod a+w * -R

We’ll also make sure any new user gets a ~/samba directory.

mkdir /etc/skel/samba

Make it accessible over Samba:

Users will access files with Samba: anonymous read+write access in common, each user alone in their ~/samba (we don’t allow direct access to ~/ to prevent any tampering with directories like ~/.ssh)

apt-get install samba libpam-smbpass


interfaces = eth1 wlan1
bind interfaces only = yes
security = user
invalid users = root
unix password sync = yes
pam password change = yes
map to guest = bad user
# discard filename mangling backward compatibility, see
mangle case = no
mangled names = no

comment = Protected data
path = /srv/home/%S/samba
writable = yes

comment = Common
path = /srv/common
browseable = yes
public = yes
force group = users
force user = nobody
guest ok = yes
writable = yes

comment = USB key, etc.
path = /media
browseable = yes
public = yes
force group = users
force user = nobody
guest ok = yes
writable = yes
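The share section headers did not survive above; they presumably looked like the following (the share names are assumptions, except [homes], which is the special Samba section required for the %S per-user share to work):

```
path = /srv/home/%S/samba
path = /srv/common
path = /media
```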

We also want to use unix passwords for Samba instead of having two password databases.


@include common-password

Make it accessible with UPnP-AV/DLNA:

apt-get install minidlna



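The minidlna configuration itself did not survive above; here is a minimal /etc/minidlna.conf sketch, assuming the directories created earlier (the friendly_name is a free choice):

```
media_dir=A,/srv/common/musique
media_dir=V,/srv/common/films
media_dir=/srv/common/download
friendly_name=nano
inotify=yes
port=8200
```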
Once set up, we regenerate the database properly:

rm -f /var/lib/minidlna/files.db
invoke-rc.d minidlna restart

We add relevant iptables rules, where SRC is the IP of your DLNA client (you may want to alter this, for instance by using --src-range IP-IP instead of --src IP):

apt-get install iptables-persistent
iptables -A INPUT -i eth0 --src SRC -p udp --dport 1900 -j ACCEPT
iptables -A INPUT -i eth0 --src SRC -p tcp --dport 8200 -j ACCEPT
/etc/init.d/iptables-persistent save

Provide torrent client:

apt-get install transmission-daemon libtimedate-perl
invoke-rc.d transmission-daemon stop

mkdir /home/torrent
ln -s /srv/common/torrents /home/torrent/watch
usermod -d /home/torrent Debian-transmission

cd /usr/local/bin && wget && chmod +x
cd /etc/cron.d && wget
cd /etc/cron.weekly && wget

Edit /etc/cron.d/torrent (uncomment, check paths – you may want to use ~/watch/ instead of ~/watch if symlinks are involved, etc.).

Edit /etc/transmission-daemon/settings.json

"alt-speed-down": 120,
"alt-speed-enabled": false,
"alt-speed-up": 1,
"blocklist-enabled": true,
"download-dir": "/srv/common/download",
"message-level": 0,
"peer-port-random-on-start": true,
"port-forwarding-enabled": true,
"rpc-authentication-required": false,
invoke-rc.d transmission-daemon start

And log rotation /etc/logrotate.d/torrent:

/srv/common/torrents/log {
	rotate 2
	su debian-transmission users
}

Provide basic info and management:

The following provides reminders of upgrades to be performed.

apt-get install libapt-pkg-perl
cd /etc/cron.daily && wget && chmod +x apt-warn

phpsysinfo: basic system info

We’ll use phpsysinfo to provide an overview of the system, and a homemade script to allow remote administration.

apt-get install nginx phpsysinfo php5-cgi spawn-fcgi libfcgi-perl mysql-server libemail-sender-perl
cd /etc/init.d && wget && chmod +x php-fcgi && update-rc.d php-fcgi defaults
wget -O /usr/bin/ && wget -O /etc/init.d/perl-fcgi && chmod +x /usr/bin/ /etc/init.d/perl-fcgi && update-rc.d perl-fcgi defaults

mkdir /srv/www
ln -s /usr/share/phpsysinfo/ /srv/www/sysinfo


server {
	listen 80; ## listen for ipv4; this line is default and implied
	listen [::]:80 default_server ipv6only=on; ## listen for ipv6

	root /srv/www;
	index index.html index.htm index.php;
	autoindex on;
	server_name localhost nano nano.mynetworkname.ici;

	# restrict to local wired network
	deny all;

	# pass the scripts to the FastCGI server listening on
	location ~ ^/sysinfo/(.*)\.php$ {
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
		fastcgi_index index.php;
		include fastcgi_params;
	}

	location /sysadmin/ {
		fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
		include fastcgi_params;
	}
}


cgi.fix_pathinfo = 0;


define('PSI_ADD_PATHS', '/bin,/usr/bin,/sbin,/usr/sbin');
define('PSI_BYTE_FORMAT', 'auto_binary');
define('PSI_SENSOR_PROGRAM', 'LMSensors');
define('PSI_HDD_TEMP', 'tcp');
define('PSI_SHOW_MOUNT_OPTION', false);
define('PSI_HIDE_FS_TYPES', 'tmpfs,usbfs,devtmpfs');
define('PSI_HIDE_DISKS', '/dev/disk/by-uuid/8f7f616e-9140-4876-890a-cd6abfde837\
define('PSI_HIDE_NETWORK_INTERFACE', 'lo,mon.wlan0');
define('PSI_SHOW_NETWORK_INFOS', true);

sysadmin : admin unix/samba passwords and watch wifi connections

Then follows the specific sysadmin web interface:

apt-get install passwdqc liburi-encode-perl libdata-password-perl libdbd-mysql-perl libemail-send-perl
cd /srv/www
mkdir sysadmin

cd /srv/www/sysadmin && wget
cd /usr/local/bin && wget
chgrp www-data /srv/www/sysadmin/
chmod +x /srv/www/sysadmin/ /usr/local/bin/
chmod o-rwx /srv/www/sysadmin/ /usr/local/bin/
mysql -e "CREATE DATABASE sysadmin"
mysql -e "CREATE TABLE sambaclients (ip_address varchar(32) NOT NULL default '0', user_name text NOT NULL, PRIMARY KEY (ip_address))" sysadmin
mysql -e "CREATE TABLE wificlients (hw_address varchar(32) NOT NULL default '0', status varchar(32) NOT NULL default 'S', PRIMARY KEY (hw_address), ip_address varchar(32), hostname varchar(128))" sysadmin
mysql -e "CREATE USER 'www-data'@'localhost'"
mysql -e "SET PASSWORD FOR 'www-data'@'localhost' = PASSWORD('kdkadkda')"
mysql -e "GRANT ALL ON sysadmin.* TO 'www-data'@'localhost'"


my $db_password = "kdkadkda";


my $db_password = "kdkadkda";

It requires a cronjob to be set up in /etc/cron.d/sysadmin:

* * * * * root /usr/local/bin/
invoke-rc.d nginx restart
invoke-rc.d php-fcgi restart
invoke-rc.d perl-fcgi restart

Both http://nano/sysinfo and http://nano/sysadmin should work. The sysadmin script allows changing UNIX passwords on the fly, by sending random ones by mail. It means that anyone within the intranet could sniff them out. That obviously won’t do if your legit users aren’t trustworthy.

(note: the sysadmin interface is in French, but the strings can easily be translated into English. Adding gettext support would have been overkill here)

Create backup system:

With only one disk, having a redundant system is not optimal. But it’s still an okay failsafe.

The following assumes you gave a label to your root partition, something like wd2Tdebian64 here. Create a filesystem on the backup partition:

mkfs.ext4 -L wd2Tdebian64bak /dev/sda7
mkdir /mnt/sysclone
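For the backup script below to find the current root by label, /etc/fstab should mount / by LABEL=; for instance (the mount options here are the usual defaults, adapt as needed):

```
LABEL=wd2Tdebian64  /  ext4  errors=remount-ro  0  1
```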

Add /etc/cron.weekly/backup-system (based on

#!/bin/sh

if [ `hostname` != "nano" ]; then exit; fi

## system cloning
# (labels and mountpoint as created above)
sys=wd2Tdebian64
bak=wd2Tdebian64bak
mount=/mnt/sysclone
ignore="dev lost+found media proc run sys tmp srv"

# determine which partition is currently / by reading /etc/fstab
orig=`cat /etc/fstab | grep $sys | cut -f 1 | cut -f 2 -d = | sed 's/ //g'`
case $orig in
    $sys) dest=$bak ;;
    $bak) dest=$sys ;;
    *)
	echo "Unable to determine whether we are currently using $sys or $bak, we found $orig. Exiting!"
	exit 1
	;;
esac

# then proceed

# easy reminder of the last cloning run
date > /etc/.lastclone
echo "$orig > $dest" >> /etc/.lastclone
etckeeper commit "cloning system from $orig to $dest" >/dev/null 2>/dev/null

# mount clone system
if [ ! -d $mount ]; then exit; fi
mount -L $dest $mount

# set up ignore list
for dir in $ignore; do
    touch /$dir.ignore
done

# do copy
for dir in /*; do
    if [ -d $dir ]; then
	if [ ! -e $dir.ignore ]; then
	    # update if not set to be ignored
	    /usr/bin/rsync --archive --one-file-system --delete $dir $mount/
	else
	    # otherwise just make sure the directory actually exists
	    if [ ! -e $mount/$dir ]; then mkdir $mount/$dir; fi
	    rm $dir.ignore
	fi
    fi
done

# update filesystem data
sed -i s/^LABEL\=$orig/LABEL\=$dest/g $mount/etc/fstab

# make system bootable (use --force: gpt partition table)
/usr/sbin/grub-mkdevicemap 2>/dev/null
/usr/sbin/update-grub 2>/dev/null
/usr/sbin/grub-install --force `blkid -L $orig | tr -d [:digit:]` >/dev/null 2>/dev/null

# (sleep to avoid weird timeout after rsync)
sleep 10s

# then cleanup
umount $mount
fsck -a LABEL=$dest > /dev/null

## EOF

Final tuning: set mails, restrict SSH access, etc:

We activate exim4 for direct SMTP (and make sure the ISP does not block the relevant traffic) with the command:

dpkg-reconfigure exim4-config

Then we want some specific SSH access model. We already set up the sysadmin interface to change users’ passwords – both Samba and unix. But we actually have only one admin here. His own account will be the only one given SSH access. No direct root access. And he’ll be able to connect with a password only from the wired intranet (eth1). Otherwise, internet (eth0) or wireless intranet (wlan1) will require a pair of SSH keys. To achieve this, we’ll actually restrict SSH to members of the staff unix group (just in case, at some point, we want to add a second admin).

To achieve this easily, we’ll plug OpenSSH into xinetd.

We have a few terminals open on the server. We shut SSH down (open sessions won’t be affected) and prevent the init script from starting it again:

invoke-rc.d ssh stop
touch /etc/ssh/sshd_not_to_be_run

We change a bit the default configuration in /etc/ssh/sshd_config:

PermitRootLogin no
X11Forwarding no
AllowGroups staff
PasswordAuthentication no

We add the relevant user to the group:

adduser thisguy staff

Then we set up xinetd to run it:

apt-get install xinetd

Edit /etc/xinetd.d/ssh_intranet:

# To work, sshd must not run by itself, so /etc/ssh/sshd_not_to_be_run
# should exist

# only from local wired network
service ssh
{
	socket_type     = stream
	protocol        = tcp
	wait            = no
	user            = root
	bind            =
	only_from       =
	server          = /usr/sbin/sshd
	server_args     = -i -o PasswordAuthentication=yes
	log_on_success  = HOST USERID
}

# from local wireless network
service ssh
{
	socket_type     = stream
	protocol        = tcp
	wait            = no
	user            = root
	bind            =
	only_from       =
	server          = /usr/sbin/sshd
	server_args     = -i
	log_on_success  = HOST USERID
}


This sets up access only for the intranet interfaces (eth1 and wlan1, if you named them as recommended on this page). The internet interface IP is obtained with DHCP, so it would be a pain in the ass to keep it up to date, especially if we’re behind a dynamic IP. However, xinetd does not allow setting the interface by device name; it wants an IP. So we need to script this. And, at the same time, we’ll deal with the Bind DNS forwarders so it does proper caching. So we’ll add /etc/dhcp/dhclient-exit-hooks.d/xinetd-bind:


# (conffile paths restored from the rest of this page)
XINETD_CONFFILE=/etc/xinetd.d/ssh_internet
BIND_CONFFILE=/etc/bind/named.conf.options

# SSH over xinetd requires the IP to be hardcoded
if [ -n "$new_ip_address" ]; then

    # change only if we have a new ip and if this one mismatches the old
    if [ "$new_ip_address" != "$old_ip_address" ] ||
         [ ! -e $XINETD_CONFFILE ]; then

        echo "# DO NOT EDIT, automatically generated by $0
# (IP changed from $old_ip_address to $new_ip_address)
# `date`
service ssh
{
  socket_type     = stream
  protocol        = tcp
  wait            = no
  user            = root
  bind            = $new_ip_address
  server          = /usr/sbin/sshd
  server_args     = -i
  cps             = 30 10
  per_source      = 5
  log_on_success  = HOST USERID
}" > $XINETD_CONFFILE

        # now reload xinetd
        invoke-rc.d xinetd restart >/dev/null 2>&1
    fi
fi

# Bind DNS cache needs forwarders similar to the content of resolv.conf
if [ -n "$new_domain_name_servers" ]; then
    # change only if we have DNS
    if [ "$new_domain_name_servers" != "$old_domain_name_servers" ] ||
        [ ! -e $BIND_CONFFILE ]; then

        echo "// DO NOT EDIT, automatically generated by $0
// (IPs changed from $old_domain_name_servers to $new_domain_name_servers)
// `date`
options {
        directory \"/var/cache/bind\";

        // If there is a firewall between you and nameservers you want
        // to talk to, you may need to fix the firewall to allow multiple
        // ports to talk.  See

        // If your ISP provided one or more IP addresses for stable
        // nameservers, you probably want to use them as forwarders.
        // Uncomment the following block, and insert the addresses replacing
        // the all-0's placeholder.

        forward first;
        forwarders {" > $BIND_CONFFILE

        # add valid forwarders
        for server in $new_domain_name_servers; do
            # (verbose) skip local ips
            if [ ! -n "`ifconfig | grep ":$server "`" ]; then
                echo "                $server;" >> $BIND_CONFFILE
            else
                echo "                //SKIP THIS LOCAL IP! $server;" >> $BIND_CONFFILE
            fi
        done

        echo "        };

        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { any; };
};" >> $BIND_CONFFILE

        # now reload bind
        # (this may be useless because another script may do that already)
        invoke-rc.d bind9 restart >/dev/null 2>&1
    fi
fi

It should modify the conffiles and restart the daemons only if there is an actual change. You can test that it works properly by doing:

ifdown eth0 && ifup eth0

Then you can make a few SSH login test and see results in /var/log/auth.log.

At this point, you should realize that this perfectly working setup has an obvious drawback: if you’re wirelessly connected, `ssh nano` will, thanks to the DNS, actually do an ssh to the wired intranet IP. And per our xinetd rules, you’ll get kicked out, as we accept on this IP only clients from the same (wired) subnet. So you’d have to manually type the server’s wireless IP to be able to connect. We’ll add an iptables rule to fix this: whenever we try to connect over ssh to the wired IP from the wireless interface, we’ll redirect to the wireless IP, same port. So we’ll do:

iptables -t nat -A PREROUTING -p tcp -i wlan1 --destination --dport 22 -j DNAT --to
/etc/init.d/iptables-persistent save


Update 1: Yeah, just published and already patched. Ahem. I noticed that, on reboot, sometimes hostapd is not working as expected. Users can connect but never get an IP. The LSB init script of hostapd looks odd to me, since it actually makes it start before dhcpd. I modified /etc/init.d/hostapd so isc-dhcp-server $network gets into Required-Start, and then ran update-rc.d hostapd.

Update 2: /media was configured to be served over Samba but no automount was set for USB mass storage devices. Here it is, (not thoroughly tested as I don’t use such devices much), edit /etc/udev/rules.d/80-removable-usb.rules:

ACTION=="add", SUBSYSTEMS=="usb", KERNEL=="sd*", ENV{ID_FS_TYPE}!="", SYMLINK+="usb%k"
ACTION=="add", SUBSYSTEMS=="usb", KERNEL=="sd*", ENV{ID_FS_TYPE}!="", RUN+="/bin/mkdir /media/usb%k"
ACTION=="add", SUBSYSTEMS=="usb", KERNEL=="sd*", ENV{ID_FS_TYPE}=="vfat|ntfs", ENV{mount_extra_options}="dmask=0000,fmask=0111,"
ACTION=="add", SUBSYSTEMS=="usb", KERNEL=="sd*", ENV{ID_FS_TYPE}!="", RUN+="/bin/mount -t auto -s -o $env{mount_extra_options}noatime,nodiratime,noexec,nodev /dev/usb%k /media/usb%k", OPTIONS="last_rule"
ACTION=="remove", SUBSYSTEMS=="usb", KERNEL=="sd*", ENV{ID_FS_TYPE}!="", RUN+="/bin/umount /media/usb%k"
ACTION=="remove", SUBSYSTEMS=="usb", KERNEL=="sd*", ENV{ID_FS_TYPE}!="", RUN+="/bin/rmdir /media/usb%k", OPTIONS="last_rule"

Update 3: I added /srv to the list of directories to be ignored by the backup script, as it contains data.

Update 4: Now /etc/xinetd.d/ssh is split between ssh_intranet and ssh_internet, the latter being generated by a script in /etc/dhcp/dhclient-exit-hooks.d/. This saves us from hardcoding IPs by hand. Still, it implies hardcoded IPs in conffiles, so it must be kept in mind when doing a major software upgrade that may imply conffile syntax changes, etc.

Update 5: I noticed auto eth0 was missing in /etc/network/interfaces. I added it (and maybe Update 1 was related to that).

Update 6: I added sample firewall rules for minidlna.

Update 7: In case you have no static IP from your ISP, you may want to create a free account on no-ip and install a client:

apt-get install ddclient

And configure /etc/ddclient.conf
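A sketch of /etc/ddclient.conf for a no-ip account (all values below are placeholders; older ddclient versions talk to no-ip through the dyndns2 protocol):

```
protocol=dyndns2
use=web, web=checkip.dyndns.com, web-skip='IP Address'
server=dynupdate.no-ip.com
login=myaccount
password='mypassword'
myhostname.no-ip.org
```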

invoke-rc.d ddclient restart

Update 8: Debian packages are now provided, notably for managing torrents over Samba.

Update 9: Use the usual listen statement in /etc/nginx/sites-available/default

Update 10: Deactivate Samba’s backward-compatible filename mangling

Reminder, needs to be checked whenever the server is relocated:

(obviously you should not use any sample password provided in this page)

We avoided hardcoding IPs wherever we could, but it was not always possible. In case of an ISP/main network change, which usually implies IP changes, make sure the following are properly updated by the dhclient:

/etc/bind/named.conf.options: ISP DNS IPs as in /etc/resolv.conf
/etc/xinetd.d/ssh_internet: internet IP as provided by ifconfig

Disclaimer: this whole setup has been made to be maintainable by people who do not have much experience in computer system administration – but enough to log in via SSH without being completely lost in limbo. As such, you’ll probably notice I made some tradeoffs between security and ease of use, for instance by providing the Wifi passphrase in clear text on the web sysadmin page. Anyway, I think the most important pieces are rock solid and the secondary ones do not matter much (Wifi is insecure by design – by concept, I would even dare to say; using it is itself such an obvious tradeoff).

(this is still being tested; I may update this page soon. It’s likely I forgot to mention a few apt-get installs of Perl packages required by the scripts. Please mail me if you find any flaws or obvious issues with what is proposed here)

Minimalistic BitTorrent client over NFS/Samba: with Transmission 2.x

I previously released a script to use transmission (BitTorrent client) over NFS/Samba.

This script was written for transmission 1.x. I updated it to use transmission 2.x. It’s a hack more than anything else: it’s just a wrapper around transmission-remote, the official RPC client.

It works as before. You put $file.torrent in a watchdir, the script runs (cronjob), creates $file.trs (containing info about the download) and starts the download. Rename it to $file.trs- to pause it, remove the $file.trs to stop it. When the download is finished, you get a mail (if cron is properly set up).

Due to progress made by transmission devs, the install process is even simpler.

1) Set up watch and download dirs as before.

2) Install/Upgrade to transmission 2.x (packages cli and daemon).

3) [this makes sense only if you used the previous version] Debian now starts transmission with the debian-transmission user. Trying to keep using the torrent user causes the init script to fail and, in the long run, it’s best to use the user the Debian maintainers provide anyway. To easily switch to this new user, I removed the new debian-transmission entries in /etc/passwd and /etc/group and then replaced torrent by debian-transmission (except the /home path, obviously) in both these files (and also updated /etc/cron.d/torrent). Finally I ran chown debian-transmission.debian-transmission /var/lib/transmission-daemon /etc/transmission-daemon/settings.json.

4) Update the transmission-daemon config. Make sure the daemon is down beforehand, otherwise your changes won’t stay. So edit /etc/transmission-daemon/settings.json. I changed:

"blocklist-enabled": true,
"download-dir": "/home/torrent/download",
"message-level": 0,
"peer-port-random-on-start": true,
"port-forwarding-enabled": true,
"rpc-authentication-required": false,

5) Install the script, test it:

cd /usr/local/bin
chmod a+x
su debian-transmission
cat status

6) Set up cronjob and log rotation:

* * * * * debian-transmission cd ~/watch && /usr/local/bin/


/home/torrent/watch/log {
	rotate 2
}

Then you should be fine 🙂

Minimalistic BitTorrent client over NFS/Samba

Not quite AJAX

While current trends in the music/movie industry will surely encourage the development of a new generation of peer-to-peer software, the same way they made CD-burners cheap in less than a decade, I’m still quite happy with BitTorrent.

I used torrentflux for quite some time. Shipped with Debian, installed on my local home server, accessible to any box on the network over https; even if its interface is not exactly eye candy, it works. I just had to configure web browsers to access http://server/torrentflux/index.php?url_upload=$ each time they hit a .torrent file. But even if a web interface may be powerful and user-friendly, I resent torrentflux for making me click plenty of times (at least twice just to start a download), after logging in.

I took a look at rTorrent. It works by watching a directory for new .torrent files and loading them automatically. Wonderful. Sadly, you have to log in over SSH and then manually select, in a text user interface, which download you actually want to start.

I liked the idea of dragging’n’dropping .torrent in one directory. It can be done over NFS or Samba, with no additional login. I have those already set up on my server. Next step is to handle queue management with the same directory.

I came up with the idea of using a command line BitTorrent client through a script that would watch the damn NFS/Samba directory. It would :
– notice and register new .torrents dropped
– allow to forcefully pause/remove any designated torrent
– allow to forcefully pause all downloads
– warn by mail whenever a download is completed and unregister the relevant torrent

So I wrote such a script, handling the transmission daemon as shipped by Debian stable. It looks for files in a given directory, named after the following syntax:
– $file.torrent = torrent to be added
– $realfile.hash = torrent being processed (delete it to remove the torrent)
– $realfile.hash- = torrent paused
– $realfile.hash+ = torrent (supposedly) completed and already removed
– all- = pause all
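The watch-directory state machine above can be sketched as follows (a simplified illustration only, not the actual script: `register_torrent` is a placeholder for whatever hands the torrent to the daemon, and the real script renames to $realfile.hash, not $file.hash):

```shell
# Sketch of the watchdir scan: pick up new .torrent files, register them,
# then mark them as being processed by renaming them to .hash.
register_torrent() {
    # placeholder for the call that actually starts the download
    echo "registering $1" >> "$(dirname "$1")/log"
}

process_watchdir() {
    dir=$1
    for f in "$dir"/*.torrent; do
        [ -e "$f" ] || continue
        register_torrent "$f"
        mv "$f" "${f%.torrent}.hash"
    done
}
```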

Here’s the HOWTO:

apt-get install transmission-cli screen
adduser torrent
echo "torrent: youruser" >> /etc/aliases

su torrent
cd ~/
mkdir watch download

mkdir -p /server
ln -s /home/torrent /server

Obtain the uid/gid of torrent, necessary below:

cat /etc/passwd | grep torrent

Here I get 1003/1003.

Edit /etc/exports to set up NFS access (this assumes your NFS server is already set up), add:

# every box on the network get rw access to rtorrent
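The exports line itself was lost; it should look something like this (the subnet is a placeholder for your LAN):

```
/home/torrent,rw,sync,no_subtree_check)
```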

On each NFS client, add in /etc/fstab (you must create mount points):

server:/home/torrent/download /mnt/torrent/download nfs rw,nolock 0 0
server:/home/torrent/watch /mnt/torrent/watch nfs rw,nolock 0 0

Edit /etc/samba/smb.conf to set up Samba access (this assumes your Samba server is already set up), add:

path = /home/torrent/download
browseable = yes
public = yes
valid users = youruser
force user = torrent
force group = torrent
writable = yes

path = /home/torrent/watch
browseable = yes
public = yes
valid users = youruser
force user = torrent
force group = torrent
writable = yes

Restart NFS/Samba servers, mount networked file system on the clients.

Add a startup script for transmission-daemon, edit it if need be (daemon configuration is done here), fire it up:

cd /etc/init.d/
update-rc.d torrent defaults 80
/etc/init.d/torrent start

At any time, you can check the current daemon process with screen:

screen -r torrent

Add in /usr/local/bin or /usr/bin (anywhere in $PATH):

cd /usr/local/bin
chmod a+x

Check that it runs properly. Drag’n’drop any .torrent in /home/torrent/watch and run:

su torrent
cat status

If everything is ok, add in /etc/cron.d/torrent:

* * * * * torrent cd ~/watch && /usr/local/bin/

And /etc/logrotate.d/torrent:

/home/torrent/watch/log {
	rotate 2
}

You’re done.