Choosing between syslog-ng and rsyslog depending on the default logs setup

I do not remember why I started using rsyslog instead of syslog-ng. The syslog-ng people provide a comparison page while the rsyslog people point out how their software is rocket-fast. Why change at all?

From Debian changelog:

rsyslog (8.2210.0-3) unstable; urgency=medium

  * Stop splitting up mail.*
    This avoids having mail related messages duplicated in mail.log and
    mail.{info,warn,err}. (Closes: #508376)
  * Drop catch-all log files /var/log/{messages,debug}
    Avoid unnecessary duplication as those log messages end up in
    /var/log/syslog anyway. (Closes: #580552)
  * Stop splitting lpr facility into its own log file.
    The default printing system CUPS is not using this facility so its
    basically unused nowadays.
  * Stop splitting daemon facility into its own log file.
    The daemon facility is too vaguely defined to be really useful and since
    those log messages end up in /var/log/syslog anyway, stop duplicating
    them.
  * Split cron facility into its own log file /var/log/cron.log.
    The cron facility is widely used and limited enough in scope to have it
    split out separately. (Closes: #625483)
  [...]
 -- Michael Biebl <biebl@debian.org>  Sat, 29 Oct 2022 22:54:41 +0200

All the convenient log splitting that had been there for decades was simply deconfigured, no matter whether people had fail2ban or similar setups based on those files, or simply prefer parsing smaller logs rather than one big one.

A bug report about how mail.info and mail.log were identical led maintainer Michael Biebl to explain:

I copied the old sysklogd syslog.conf almost verbatim, to be as compatible as possible and not break any existing setups (like log checkers and stuff, which might look for those files.) So, for lenny, I definitely don’t want to change that. But it is something that could be looked into early during the squeeze cycle. Such a change though should probably be discussed on debian-devel first.

Was it discussed on debian-devel in the end? cron managed to keep its log for now. But a report initially complaining that messages of a certain severity seemed to be discarded by default turned into a request to remove duplicates. Maintainer Michael Biebl wrote:

I never understood why we had those separate files, and the current rsyslog.conf is basically a result of keeping what the old sysklogd shipped. I need to bring that up on the mailing list, guess.

So I guess it was discussed on the mailing list in the end. But was it discussed only for rsyslog, not syslog-ng? In any case, switching to syslog-ng spares me from having to change my setup and lets me keep split logs, which are still easier to read and parse than a massive catch-all /var/log/syslog.
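
For illustration, keeping the mail logs split out takes only a few lines of syslog-ng configuration – a minimal sketch, assuming the stock s_src source defined in Debian's /etc/syslog-ng/syslog-ng.conf:

# keep mail-related messages in their own file,
# on top of whatever default destinations are already configured
destination d_mail { file("/var/log/mail.log"); };
filter f_mail { facility(mail); };
log { source(s_src); filter(f_mail); destination(d_mail); };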

Replacing PowerDNS with Knot DNS and Knot Resolver+supervisor, with DNSSEC, DNS over TLS and domain name spoofing

I had been considering Knot DNS for a while. Switching to DNS over TLS for the resolver queries was the push I needed. It turns out that transposing my setup to Knot DNS is very easy and fast.

Knot Resolver does not provide init scripts and suggests using supervisor as an alternative to systemd's omnipresent features. I was wary of the idea, but it turns out that supervisor is very easy to put in place and I might use it more in the future, for instance to replace some xinetd services.

For the record, my setup is as follows: there is a local DNS server serving the local area network HERE.ici domain, and a resolver that caches requests. All client requests go to the resolver which, if it cannot answer, asks the relevant DNS server. Nothing too fancy, even if LANs are sometimes set up the other way around, with clients querying the local DNS server by default and that server asking the local resolver when it cannot answer.

Installation requires the following:

apt install knot/testing
apt install knot-resolver supervisor

DNS for the local area network

The Knot DNS server will not be queried directly by clients but only by the Knot Resolver and DHCPd. Edit /etc/knot/knot.conf by adding:

server:
    # meant to be called only on loopback
    # by knot-resolver and dhcpd on update
    listen: 127.0.1.1@53

acl:
  - id: update_acl
    # restrict by IP is enough, no need for a ddns key stored on the same host
    address: 127.0.0.1
    action: update

zone:
  - domain: HERE.ici
    dnssec-signing: on
    acl: update_acl

  - domain: 10.in-addr.arpa
    dnssec-signing: on
    acl: update_acl

Create the zones (edit serverhostname and HERE.ici according to your setup):

invoke-rc.d knot restart

knotc zone-begin HERE.ici
knotc zone-set HERE.ici @ 7200 SOA serverhostname hostmaster 1 86400 900 691200 3600
knotc zone-set HERE.ici serverhostname 3600 A 10.10.10.1
knotc zone-set HERE.ici @ 3600 NS serverhostname
knotc zone-set HERE.ici @ 3600 MX 10 mx.HERE.ici
knotc zone-set HERE.ici jeden 3600 CNAME serverhostname
knotc zone-commit HERE.ici

knotc zone-begin 10.in-addr.arpa
knotc zone-set 10.in-addr.arpa @ 7200 SOA serverhostname.HERE.ici. hostmaster.HERE.ici. 1 86400 900 691200 3600
knotc zone-set 10.in-addr.arpa 10.10.10.1 3600 PTR serverhostname
knotc zone-set 10.in-addr.arpa @ 3600 NS serverhostname.HERE.ici.
knotc zone-commit 10.in-addr.arpa

Zones will be updated by the DHCP server, in this case ISC dhcpd. Edit /etc/dhcp/dhcpd.conf accordingly:

# dynamic update
ddns-updates on;
ddns-update-style standard;
ignore client-updates; # restrict to domain name

# option definitions common to all supported networks...
option domain-name "HERE.ici";
option domain-search "HERE.ici";
# you can add extra name servers if you consider direct external queries
# acceptable in case the resolver is dead
option domain-name-servers 10.0.0.1;
option routers 10.0.0.1;
default-lease-time 600;
max-lease-time 6000;
update-static-leases on;
authoritative;

 [...]

zone HERE.ici. {
  primary 127.0.1.1;
}
zone 10.in-addr.arpa. {
  primary 127.0.1.1;
}

No dynamic update keys, everything goes through the loopback. You might want to erase the DHCPd leases (usually in /var/lib/dhcp/) so it does not get confused.
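
Once a client has obtained a lease, it is easy to check that the dynamic updates actually landed in the zones, for instance with knotc on the server:

knotc zone-read HERE.ici
knotc zone-read 10.in-addr.arpa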

DNS Resolver

The Knot Resolver will handle all client queries, contacting Internet DNS servers over TLS if need be and caching results. Edit /etc/knot-resolver/kresd.conf to contain:

-- Network interface configuration
-- (knot dns should be using 127.0.1.1)
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('127.0.0.1', 853, { kind = 'tls' })
net.listen('10.0.0.1', 53, { kind = 'dns' })
net.listen('10.0.0.1', 853, { kind = 'tls' })

-- drop privileges (check /var/lib/knot-resolver modes/owner)
user('knot-resolver', 'knot-resolver')

-- Load useful modules
modules = {
   'hints > iterate',  -- Load /etc/hosts and allow custom root hints
   'stats',            -- Track internal statistics
   'predict',          -- Prefetch expiring/frequent records
   'view',             -- required to limit access
}

-- Cache size
cache.size = 500 * MB

-- whitelist queries identified by subnet
view:addr('127.0.0.0/24', policy.all(policy.PASS))
view:addr('10.0.0.0/24', policy.all(policy.PASS))
-- drop everything that hasn't matched
view:addr('0.0.0.0/0', policy.all(policy.DROP))

-- Custom hints: local spoofed address and antispam/ads
hints.add_hosts("/etc/knot-resolver/redirect-spoof")
hints.add_hosts("/etc/knot-resolver/redirect-ads")

-- internal domain: use knot dns listening on loopback
internalDomains = policy.todnames({'HERE.ici', '10.in-addr.arpa'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.STUB({'127.0.1.1@53'}), internalDomains))

-- forward queries over TLS
policy.add(policy.all(policy.TLS_FORWARD({
	{'208.67.222.222', hostname='dns.opendns.com'},
	{'208.67.220.220', hostname='dns.opendns.com'},
	{'1.1.1.1', hostname='cloudflare-dns.com'},
	{'1.0.0.1', hostname='cloudflare-dns.com'},
})))

redirect-spoof and redirect-ads are files in /etc/hosts format: they allow domain spoofing or ads domain filtering. They conveniently replace the extra Lua script that my PowerDNS setup was using.

Update Feb 19 2023: Check the recent files on gitlab, I now use RPZ instead of hints/hosts files to block hostile domains. No real change in principle, but knot-resolver seems to handle very long lists better in this form.
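
For reference, the RPZ variant amounts to something like the following in kresd.conf – a sketch, with an example blocklist path:

-- block hostile domains listed in an RPZ zone file
policy.add(policy.rpz(policy.DENY, '/etc/knot-resolver/blocklist.rpz'))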

Finally, the resolver needs to be started by supervisord, with /etc/supervisor/conf.d/knot-resolver.conf as follows:

[program:knot-resolver]
command=/usr/sbin/kresd -c /etc/knot-resolver/kresd.conf --noninteractive
priority=0
autostart=true
autorestart=true
stdout_syslog=true
stderr_syslog=true
directory=/var/lib/knot-resolver

[program:knot-resolver-gc]
command=/usr/sbin/kres-cache-gc -c /var/lib/knot-resolver -d 120000
user=knot-resolver
autostart=true
autorestart=true
stdout_syslog=true
stderr_syslog=true
directory=/var/lib/knot-resolver

Restart the supervisor and check the logs; everything should be fine. You can then clean up:

rc-update add supervisor
rc-update add knot
apt --purge remove pdns-*

# check if there is still traffic on DNS port 53 on the public network interface (should be none)
tcpdump -ni eth0 -p port 53
# check if there is traffic on DNS over TLS port 853 (there should be some whenever a query falls outside of the cache and LAN)
tcpdump -ni eth0 -p port 853

(My default files are in my rien-host package; if you have on your network a mail server using DNS blacklists, which will inevitably end up blocked, you might want to install knot-resolver on this server too, in recursive mode.)
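
From a LAN client, kdig (in the knot-dnsutils package) gives a quick way to check both the local zone and the DNS over TLS listener, with the addresses used in the example setup above:

# local zone, answered via the resolver stub to knot
kdig @10.0.0.1 jeden.HERE.ici A
# plain DNS on port 53
kdig @10.0.0.1 debian.org A
# DNS over TLS on port 853
kdig +tls @10.0.0.1 debian.org A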

Banning IPs on two iptables chains with fail2ban

If you use LXC containers with IPv4, it is very likely you use NAT with iptables. I found no immediate way to get fail2ban with iptables/ipset to apply bans on both INPUT (for the LXC master) and FORWARD (for the LXC slaves).

In /etc/fail2ban/jail.local:

banaction = iptables-ipset-proto6-allports

(proto6 refers to ipset itself)

In /etc/fail2ban/action.d/iptables.local:

[Init]
blocktype=DROP
chain=INPUT
chain2=FORWARD

# brute-force add a FORWARD rule,
# leaving INPUT as default for the relevant tests
[Definition]
_ipt_add_rules = <_ipt_for_proto-iter>
              { %(_ipt_check_rule)s >/dev/null 2>&1; } || { <iptables> -I <chain> %(_ipt_chain_rule)s; <iptables> -I <chain2> %(_ipt_chain_rule)s; }
              <_ipt_for_proto-done>

_ipt_del_rules = <_ipt_for_proto-iter>
              <iptables> -D <chain> %(_ipt_chain_rule)s
              <iptables> -D <chain2> %(_ipt_chain_rule)s
              <_ipt_for_proto-done>

After restarting fail2ban, you should find the relevant rules running:

fail2ban-client stop ; fail2ban-client start
iptables-save  | grep match
-A INPUT -p tcp -m set --match-set f2b-ssh src -j DROP
-A INPUT -p tcp -m set --match-set f2b-saslauthd src -j DROP
-A INPUT -p tcp -m set --match-set f2b-banstring src -j DROP
-A INPUT -p tcp -m set --match-set f2b-xinetd-fail src -j DROP
-A FORWARD -p tcp -m set --match-set f2b-ssh src -j DROP
-A FORWARD -p tcp -m set --match-set f2b-saslauthd src -j DROP
-A FORWARD -p tcp -m set --match-set f2b-banstring src -j DROP
-A FORWARD -p tcp -m set --match-set f2b-xinetd-fail src -j DROP

I'd like to act at the PREROUTING level but found no faster way to do it. I'd welcome any suggestion to get to this result with fewer changes to the default fail2ban setup.
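
To check the double insertion without waiting for a real attacker, a test address can be banned by hand and looked up in the ipset (jail name taken from the f2b-ssh set above):

fail2ban-client set ssh banip 192.0.2.1
ipset list f2b-ssh | grep 192.0.2.1
fail2ban-client set ssh unbanip 192.0.2.1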

Switching from SpamAssassin+Bogofilter+Exim to Rspamd+Exim+Dovecot

For more than ten years, I used SpamAssassin and Bogofilter along with Exim to filter spam, plus SPF checks and greylisting directly within Exim.

Why change?

I must say that almost no spam reached me unflagged for years. Why change anything then?

First, I now have more users and the system was not really multiuser-aware. For instance, the Bayesian filter training cronjob had a single configured SPAMDIR, etc.

Second, my whole setup was based on using specific transports and routers in Exim to send mails first to Bogofilter, then to SpamAssassin. It means that filtering was done after SMTP time, when the mail had already been accepted. You filter but do not discourage or block spam sources.

Rspamd?

General             Rspamd                                   SpamAssassin              DSPAM
Written in          C/Lua                                    Perl                      C
Process model       event driven                             pre-forked pool           LDA and pre-forked
MTA integration     milter, LDA, custom                      milter, custom (Amavis)   LDA
Web interface       embedded                                 3rd party
Languages support   full, UTF-8 conversion/normalisation,    naïve (ASCII lowercase)   naïve
                    lemmatization
Scripting support   Lua API                                  Perl plugins
Licence             Apache 2                                 Apache 2                  GPL
Development status  very active                              active                    abandoned

Rspamd seems actively developed and easy to integrate not only with Exim, the SMTP server, but also with Dovecot, which I use as IMAPS server.

Instead of having:

Exim SMTP accept with greylist -> bogofilter -> spamassassin -> procmail -> dovecot 

The idea is to have:

Exim SMTP accept with greylist and rspamd -> dovecot with sieve filtering 

It rejects/discards spam earlier and makes filtering easier in a multiuser environment (sieve, unlike procmail, is not dangerous and can be managed by clients, if desirable).

My new setup is contained in my rien-mx package: the initial greylist system is still there.

Exim

What matters most is the acl_check_rcpt definition (already used in the previous version) and the new acl_check_data definition:

### acl/41_rien-check_data_spam
#################################
# based on https://rspamd.com/doc/integration.html
# -  using CHECK_DATA_LOCAL_ACL_FILE included in the acl_check_data instead of creating a new acl
# - and scan all the messages no matter the source:
#    because some might be forwarded by smarthost client, requiring scanning with no defer/reject

## process earlier scan

# find out if a (positive) spam level is already set
warn
  condition = ${if match{$h_X-Spam-Level:}{\N\*|\+\N}}
  set acl_m_spamlevel = $h_X-Spam-Level:
warn
  condition = ${if match{$h_X-Spam-Bar:}{\N\*|\+\N}}
  set acl_m_spamlevel = $h_X-Spam-Bar:
warn
  condition = ${if match{$h_X-Spam_Bar:}{\N\*|\+\N}}
  set acl_m_spamlevel = $h_X-Spam_Bar:

# discard high probability spam identified by earlier scanner
# (probably forwarded by a friendly server, since it is unlikely that a spam source would shoot
# itself in the foot, no point to generate bounces)
discard
  condition = ${if >={${strlen:$acl_m_spamlevel}}{15}}
  log_message = discard as high-probability spam announced

# at least make sure X-Spam-Status is set if relevant
warn
  condition = ${if and{{ !def:h_X-Spam-Status:}{ >={${strlen:$acl_m_spamlevel}}{6} }}}
  add_header = X-Spam-Status: Yes, earlier scan ($acl_m_spamlevel)

# accept content from relayed hosts with no spam check
# unless registered in final_from_hosts (they are outside the local network)
accept
  hosts = +relay_from_hosts
  !hosts = ${if exists{CONFDIR/final_from_hosts}\
		      {CONFDIR/final_from_hosts}\
		      {}}

# rename earlier reports and score
warn
  condition = ${if def:h_X-Spam-Report:}
  add_header = X-Spam-Report-Earlier: $h_X-Spam-Report:
warn
  condition = ${if def:h_X-Spam_Report:}
  add_header = X-Spam-Report-Earlier: $h_X-Spam_Report:
warn
  condition = ${if def:h_X-Spam-Score:}
  add_header = X-Spam-Score-Earlier: $h_X-Spam-Score:
warn
  condition = ${if def:h_X-Spam_Score:}
  add_header = X-Spam-Score-Earlier: $h_X-Spam_Score:


# scan the message with rspamd
warn spam = nobody:true
# This will set variables as follows:
# $spam_action is the action recommended by rspamd
# $spam_score is the message score (we unlikely need it)
# $spam_score_int is spam score multiplied by 10
# $spam_report lists symbols matched & protocol messages
# $spam_bar is a visual indicator of spam/ham level

# remove foreign headers except spam-status, because it is better to have it twice than not at all
warn
  remove_header = x-spam-bar : x-spam_bar : x-spam-score : x-spam_score : x-spam-report : x-spam_report : x-spam_score_int : x-spam_action : x-spam-level
  
# add spam-score and spam-report header
# (possible to add a condition to only add the headers when rspamd recommends it:
#   condition  = ${if eq{$spam_action}{add header}})
warn
  add_header = X-Spam-Score: $spam_score
  add_header = X-Spam-Report: $spam_report

# add x-spam-status header if message is not ham
# do not match when $spam_action is empty (e.g. when rspamd is not running)
warn
  ! condition  = ${if match{$spam_action}{^no action\$|^greylist\$|^\$}}
  add_header = X-Spam-Status: Yes

# add x-spam-bar header if score is positive
warn
  condition = ${if >{$spam_score_int}{0}}
  add_header = X-Spam-Bar: $spam_bar

## delay/discard/deny depending on the scan
  
# use greylisting with rspamd
# (unless coming from authenticated or relayed host)
defer message    = Please try again later
   condition  = ${if eq{$spam_action}{soft reject}}
   !hosts = ${if exists{CONFDIR/final_from_hosts}\
		       {CONFDIR/final_from_hosts}\
		       {}}
   !authenticated = *
   log_message  = greylist $sender_host_address according to soft reject spam filtering

# high probability spam get silently discarded if 
# coming from authenticated or relayed host
discard
   condition  = ${if eq{$spam_action}{reject}}
   hosts = ${if exists{CONFDIR/final_from_hosts}\
		       {CONFDIR/final_from_hosts}\
		       {}}
   log_message  = discard as high-probability spam from final from host

discard
   condition  = ${if eq{$spam_action}{reject}}
   authenticated = *
   log_message  = discard as high-probability spam from authenticated
   
# refuse high probability spam from other sources
deny  message    = Message discarded as high-probability spam
   condition  = ${if eq{$spam_action}{reject}}
   log_message	= reject mail from $sender_host_address as high-probability spam

These two ACLs will take care of sending messages through rspamd and accepting/rejecting/discarding mails.

A dovecot_lmtp transport is also necessary:

dovecot_lmtp:   
  debug_print = "T: dovecot_lmtp for $local_part@$domain"   
  driver = lmtp   
  socket = /var/run/dovecot/lmtp   
  #maximum number of deliveries per batch, default 1   
  batch_max = 200   
  # remove suffixes/prefixes   
  rcpt_include_affixes = false 
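
The transport does nothing until a router points local deliveries at it; a minimal sketch of such a router follows (names and placement are illustrative, the actual routers are in the rien-mx conf.d/router files):

dovecot_lmtp_delivery:
  debug_print = "R: dovecot_lmtp for $local_part@$domain"
  driver = accept
  domains = +local_domains
  transport = dovecot_lmtp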

There are also other internal files, especially in conf.d/main. If you want to follow my setup, you are encouraged to download at least the whole mx/etc/exim folder. Most files have comments, so it is easy to find out whether they are relevant or not. Or you can just copy/paste the relevant settings into etc/conf.d/main/10_localsettings, for instance:

# path of rspamd 
spamd_address = 127.0.0.1 11333 variant=rspamd 

# data acl definition 
CHECK_DATA_LOCAL_ACL_FILE =  /etc/exim4/conf.d/acl/41_rien-check_data_spam

# memcache traditional greylisting
GREY_MINUTES  = 0.4
GREY_TTL_DAYS = 25
# we greylist servers, so we keep it to the minimum required to cross-check with SPF
#   sender IP, sender domain
GREYLIST_ARGS = {${quote:$sender_host_address}}{${quote:$sender_address_domain}}{GREY_MINUTES}{GREY_TTL_DAYS}

Other files in exim4/conf.d/ are useful for other local features a bit outside the scope of this article (per-target email alias business, specific handling of friendly relays, SMTP forwarding to a specific authenticated SMTP server for specific domains when sending mails).

Dovecot

This assumes that dovecot already works (with all components installed). Nonetheless, you need to configure LMTP delivery by editing /etc/dovecot/conf.d/20-lmtp.conf as follows:

# to be added
lmtp_proxy = no
lmtp_save_to_detail_mailbox = no
lmtp_rcpt_check_quota = no
lmtp_add_received_header = no 

protocol lmtp {
  # Space separated list of plugins to load (default is global mail_plugins).
  mail_plugins = $mail_plugins
  # remove domain from user name
  auth_username_format = %n
}

You also need to edit /etc/dovecot/conf.d/90-sieve.conf:

# to be added

 # editheader is restricted to admin global sieve
 sieve_global_extensions = +editheader

 # run global sieve (sievec must be run manually every time they are updated)
 sieve_before = /etc/dovecot/sieve.d/

You also need to edit /etc/dovecot/conf.d/20-imap.conf:

protocol imap {   
  mail_plugins = $mail_plugins imap_sieve
}

You also need to edit /etc/dovecot/conf.d/90-plugin.conf:

plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms
  sieve_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment

  imapsieve_mailbox4_name = Spam
  imapsieve_mailbox4_causes = COPY APPEND
  imapsieve_mailbox4_before = file:/usr/local/lib/dovecot/report-spam.sieve

  imapsieve_mailbox5_name = *
  imapsieve_mailbox5_from = Spam
  imapsieve_mailbox5_causes = COPY
  imapsieve_mailbox5_before = file:/usr/local/lib/dovecot/report-ham.sieve

  imapsieve_mailbox3_name = Inbox
  imapsieve_mailbox3_causes = APPEND
  imapsieve_mailbox3_before = file:/usr/local/lib/dovecot/report-ham.sieve

  sieve_pipe_bin_dir = /usr/local/lib/dovecot/
}

You need custom scripts to train the spam filter: both shell wrappers and sieve scripts. /usr/local/lib/dovecot/report-ham.sieve:

require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.mailbox" "*" {
  set "mailbox" "${1}";
}

if string "${mailbox}" "Trash" {
  stop;
}

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

pipe :copy "sa-learn-ham.sh" [ "${username}" ];

/usr/local/lib/dovecot/report-spam.sieve:

require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

pipe :copy "sa-learn-spam.sh" [ "${username}" ];

/usr/local/lib/dovecot/sa-learn-ham.sh

#!/bin/sh
exec /usr/bin/rspamc learn_ham

/usr/local/lib/dovecot/sa-learn-spam.sh

#!/bin/sh 
exec /usr/bin/rspamc learn_spam

Then you need a /etc/dovecot/sieve.d similar to mine to hold all site-wide sieve scripts. Mine are shown as examples of what can easily be done with sieve. Regarding spam, they will only flag it; the end-user sieve filter does the rest:

#; -*-sieve-*-
require ["editheader", "regex", "imap4flags", "vnd.dovecot.pipe", "copy"];

# simple flagging for easy per-user sorting
# chained, so only a single X-Sieve-Mark is possible

## flag Spam
if anyof (
	  header :regex "X-Spam-Status" "^Yes",
	  header :regex "X-Spam-Flag" "^YES",
	  header :regex "X-Bogosity" "^Spam",
	  header :regex "X-Spam_action" "^reject")
{
  # flag for the mail client
  addflag "Junk";
  # header for further parsing
  addheader "X-Sieve-Mark" "Spam";
  # autolearn
  pipe :copy "sa-learn-spam.sh";
}
## sysadmin
elsif address :localpart ["from", "sender"] ["root", "netdata", "mailer-daemon"]
{
  addheader "X-Sieve-Mark" "Sysadmin";
} 
## social network
elsif address :domain :regex ["to", "from", "cc"] ["^twitter\.",
						   "^facebook\.",
						   "^youtube\.",
						   "^mastodon\.",
						   "instagram\."]
{
  addheader "X-Sieve-Mark" "SocialNetwork";
}
## computer related
elsif address :domain :regex ["to", "from", "cc"] ["debian\.",
						   "devuan\.",
						   "gnu\.",
						   "gitlab\.",
						   "github\."]
{
  addheader "X-Sieve-Mark" "Cpu";
}

Each time scripts are modified in this folder, sievec must be run by root (otherwise sieve scripts are compiled by the current user, who cannot write in /etc for obvious reasons):

sievec -D /etc/dovecot/sieve.d
sievec -D /usr/local/lib/dovecot

Finally, as an example of an end-user sieve script (to put in ~/.dovecot.sieve):

#; -*-sieve-*-
require ["fileinto", "regex", "vnd.dovecot.pipe", "copy"];

if header :is "X-Sieve-Mark" "Spam"
{
   # no care for undisclosed recipients potential false positive
  if address :contains ["to", "cc", "bcc"] ["undisclosed recipients", "undisclosed-recipients"]
		{
		  discard;
		  stop;
		}

  # otherwise just put in dedicated folder		
  fileinto "Spam";
  stop;
}

Rspamd

Rspamd was installed from the Devuan/Debian package (it is not clear to me why the Rspamd people discourage using these packages on their website, no context is given). It works out of the box.

I also installed clamav, razor and redis. Rspamd requires a lot of small tuning, check the folder /etc/rspamd.
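
As an example of that small tuning, and since redis is installed anyway, the Bayes classifier can be backed by it with a few lines in local.d – a sketch to adjust or drop depending on your layout:

# /etc/rspamd/local.d/classifier-bayes.conf
backend = "redis";
servers = "127.0.0.1";
autolearn = true;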

To get razor running, requests are passed through /etc/xinetd.d/razor:

service razor
{
#	disable		= yes
	type		= UNLISTED
	socket_type     = stream
	protocol	= tcp
	wait		= no
	user		= _rspamd
	bind            = 127.0.0.1
	only_from	= 127.0.0.1
	port		= 11342
	server		= /usr/local/bin/razord
}

along with the wrapper script /usr/local/bin/razord:

#!/bin/sh
/usr/bin/razor-check && echo -n "spam" || echo -n "ham"
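
A quick way to smoke-test it is to feed a stored message to the wrapper directly, then through the xinetd socket; it should answer spam or ham (the sample path is hypothetical, and -N is the openbsd netcat flag that closes the sending side so razor-check sees the end of input):

/usr/local/bin/razord < /tmp/sample.eml
nc -N 127.0.0.1 11342 < /tmp/sample.eml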

Pyzor is configured to work in the same way but so far it does not (not clear to me why – it seems to also be an IPv6 issue, see below).

I noticed issues with IPv6: so far my mail servers are still IPv4 only and Rspamd nonetheless sometimes tries to connect over IPv6. I solved the issue by commenting out ::1 localhost in /etc/hosts.

Results

So far it works as expected (except for the IPv4 vs IPv6 and pyzor issues). Rspamd required a bit more work than expected, but once it is going, it seems good.

Obviously, in the process, I lost the benefit of the well-trained Bogofilter, but I hope Rspamd's own Bayesian filters will kick in soon enough.

In my setup there are extra files related to replicating over multiple servers that I might cover in another article (replication of email, sieve user filters through Nextcloud, and a shared redis database via stunnel). The switch to Rspamd+Exim+Dovecot made this replication over multiple servers much better.

UPDATE: pipe :copy vs execute :pipe

Using pipe :copy in sieve scripts actually causes issues. Sieve pipe is a disposition-type action: it is intended to deliver the message, similarly to a fileinto or redirect command. As such, if the command returns a failure, the sieve filter stops. That is not desirable if we use rspamd with learn_condition (defined in statistic.conf) to avoid learning the same message multiple times, etc. It leads to errors like the following in the logs, with sieve scripts prematurely ended:

[dovecot log]
Apr  9 21:08:10 mx dovecot: lmtp(userx)<31807><11nYLZrZUWI/fAAA4k3FvQ>: program exec:/usr/local/lib/dovecot/sa-learn-spam.sh (31810): Terminated with non-zero exit code 1
Apr  9 21:08:10 mx dovecot: lmtp(userx)<31807><11nYLZrZUWI/fAAA4k3FvQ>: Error: sieve: failed to execute to program `sa-learn-spam.sh': refer to server log for more information.
Apr  9 21:08:10 mx dovecot: lmtp(userx)<31807><11nYLZrZUWI/fAAA4k3FvQ>: sieve: msgid=<20220409190707.0A81E808EB@xxxxxxxxxxxxxxxxxxx>: stored mail into mailbox 'INBOX'
Apr  9 21:08:10 mx dovecot: lmtp(userx)<31807><11nYLZrZUWI/fAAA4k3FvQ>: Error: sieve: Execution of script /etc/dovecot/sieve.d/20_rien-mark.sieve failed, but implicit keep was successful

[rspamd log]
2022-04-09 21:08:10 #4264(controller) <eb2984>; csession; rspamd_stat_classifier_is_skipped: learn condition for classifier bayes returned: already in class spam; probability 93.38%; skip classifier
2022-04-09 21:08:10 #4264(controller) <eb2984>; csession; rspamd_task_process: learn error: all learn conditions denied learning spam in default classifier

We get an implicit keep: an already known and identified spam is forcefully delivered to INBOX because learning failed, precisely since it was already known.

Using execute :pipe instead solves the issue and matches what we really want: the spam/ham learning process is an extra step, involved neither in the filtering nor in the delivery of the message. Its failure or success is irrelevant to the delivery process.

Using execute, the non-zero return code from the executed script will be logged too, but without any other effect, in particular without stopping further sieve processing:

[dovecot log]
Apr 10 15:12:09 mx dovecot: lmtp(userx)<3450><iqoWCKnXUmJ6DQAA4k3FvQ>: program exec:/usr/local/lib/dovecot/sa-learn-spam.sh (3451): Terminated with non-zero exit code 1
Apr 10 15:12:09 mx dovecot: lmtp(userx)<3450><iqoWCKnXUmJ6DQAA4k3FvQ>: sieve: msgid=<6252b1ee.1c69fb81.8a4e2.f847@xxxxxxxxxxxx>: fileinto action: stored mail into mailbox 'Spam'

[rspamd log]
2022-04-10 15:12:09 #9025(controller) <820d3e>; csession; rspamd_task_process: learn error: all learn conditions denied learning spam in default classifier

Check the dovecot related files for up-to-date examples/versions: /etc/dovecot and /usr/local/lib/dovecot.
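
For reference, the execute-based variant of report-spam.sieve looks like this – a sketch, assuming vnd.dovecot.execute is allowed alongside vnd.dovecot.pipe in the sieve extensions settings shown earlier:

require ["vnd.dovecot.execute", "imapsieve", "environment", "variables"];

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

execute :pipe "sa-learn-spam.sh" [ "${username}" ];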

Checking potential mail server issues with SASL/PAM and Rainloop

I recently encountered the two following issues that exist more or less out of the box.

SASL/PAM: beware of empty passwords

On my mail servers, authenticated SMTP exists for the domains' users. It works using sasl2-bin through PAM.

If you happen to have an empty password for some unix account on this server, like:

thisuser::17197:0:99999:7:::

You would expect the lack of password (as can be the case for the root account on an LXC container, after creation) to prevent login. It won't: a blank password will be accepted as a valid authenticated login.

In /etc/pam.d/common-auth you can find an explanation:

auth [success=1 default=ignore] pam_unix.so nullok

The nullok option effectively makes the server (at least on current Devuan/Debian stable) an open relay for thisuser.

Some people have already reported it and there are spambots around that specifically try root with a blank password (without beforehand making any other stupid attempt to misuse the server).

Here is a workaround: simply make sure no user has a blank password field in /etc/shadow:

usermod -p '!' thisuser
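
To spot any other account in the same situation, a quick audit of /etc/shadow does the trick:

# list accounts with an empty password field (second field)
awk -F: '$2 == "" { print $1 }' /etc/shadow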

Rainloop: unmaintained, existing security issues

If you use the webmail Rainloop, be aware that it seems to be no longer maintained, with several attempts by different people to warn the team about security issues left unanswered.

While we can hope that nothing tragic happened in the lives of the people behind the software to cause this odd silence, the best course of action for now seems to be switching to the SnappyMail fork, which includes security fixes and is actively developed.

Setting up a cheap 5.1 surround sound setup reusing hardware

Initial 4.0 setup

I had some sort of custom 4.0 (4 speakers, 0 subwoofer) surround setup, aka quadraphonic, built around a Sweex 7.1 external USB sound card connected to a laptop running Devuan GNU/Linux – video output going through HDMI to a projector.

This surround was composed of an old Denon PMA-715R amplifier (65 W per channel, 8Ω) coupled with Infinity Reference 1 mkII bookshelf loudspeakers (50W, 6Ω) for the two front speakers, and an Edifier M3280BT 2.1 sound system (8W per channel + 20W subwoofer) for the rear speakers.

Yes, it meant that the Edifier subwoofer was actually unused and it could have been used to make a 4.1 setup. But it made sense to keep the best hardware for the front.

It also meant that it was connected to the Sweex in 5.1 mode (the card is only designed to do 7.1/5.1/2.1, not 4.0), using only two plugs (2 rear, 2 front = 4.0) out of the 3 required (front/bass = 1.1, 4.0+1.1 = 5.1). So it also meant using the extra-stereo filter in SMPlayer, otherwise the sound would sometimes be incomplete, especially with movies carrying a complex 5.1 stream.

Not perfect but good enough for years.

5.1 setup

The gradual death of the old Denon PMA-715R amplifier changed that.

A full surround setup is expensive and would leave me with unused hardware, especially the Infinity loudspeakers.

I bought two Ruizhi 2x50W + 100W amps for 30 euros each (well, I could have bought the simpler Ruizhi 2x50W for 10 euros each, but I was not entirely decided about the final setup at that moment), 12V DC adapters to go along, and two Edifier P12 20W speakers.

I doubt these Ruizhi can actually do a proper 2x50W, but even if they only do half of that, it will be enough.

The setup is now: front left+right = Infinity 50W + Ruizhi 50W ; rear left+right = Edifier P12 20W + Ruizhi 50W ; center+bass : Edifier M3280BT 8Wx2+20W.

It would be possible to use bluetooth with all these (Ruizhi and M3280BT) but I kept it simple with cables.

Such a setup from scratch costs around 30 * 2 (Ruizhi) + 80 (P12) + 60 (the M3280BT is no longer sold but similar models can be found between 50 and 90 euros, like the Edifier M1370 or XB6BT) + 150 (passive 50/80W loudspeakers can be found from 100 to 200 euros) + cables =~ 400 euros for the whole. It is not entirely clear it is worth it: very cheap 5.1 setups, around 200 euros, would be as good, and models around 400 euros might be better.

Reusing existing hardware, as in my case, brings it down to 140 euros.

Preventing stored file deletion without assigning files to root or using chattr

To prevent accidental or malicious file deletion (for instance of a local collection of images, or a collection of movies on a Samba server), one option is to assign the directory to root or to use chattr to make these files immutable (which also requires root privileges).

That works. But any further modification would then require root privileges.

The proposed approach is, instead, to change the ownership of files that reached a certain age (one week, one month or one year) to a dedicated “read-only” user, so that regular users can still add new files to the collection directories but can no longer remove the old ones being safeguarded.

This is not opposed to backups or filesystem snapshots, it is a step to prevent data loss instead of curing it.

Say, on your video storage library on a Samba server, files added by guest users are forcibly assigned to nobody:smbusers. You would then create a dedicated nobody-ro user and configure in /etc/read-only-this.conf the library path to be handled by the read-only-this.pl script. Run by a daily cronjob, the read-only-this.pl script would reassign all files older than, say, one week to nobody-ro:smbusers, with no group write privilege. Directories would get the special T sticky bit so Samba guest users would still be able to add new files but not remove old ones.
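
The core of the idea fits in a couple of shell commands, shown here only as an illustration (hypothetical path and a one-week age; the real logic, including the mime-type checks, lives in read-only-this.pl):

# reassign week-old files to the read-only user and drop group write access
find /srv/video -type f -mtime +7 -exec chown nobody-ro:smbusers {} + -exec chmod g-w {} +
# sticky bit on directories: users may still add files but only owners may delete
find /srv/video -type d -exec chmod +t {} +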

It would be possible to actually allow nobody-ro to log in through Samba, a shell or whatever script, to enable file removal or directory reorganisation. But the video storage library is protected from mistakes by regular users and from malicious deletion by scripts running under regular user accounts.

(Note that the read-only-this.pl script cares only about video/image/audio/document mime types – the mime-type selection might later be added as a configuration option.)

It is included in the rien-common package.

Bye linuxfr.org

Today, someone posted a message on linuxfr wondering which users from 1999 are still active on the website (created in 1998 – but initially with another account database). 31, apparently. My own account was created in December 1999; my last post was in 2010. After 22 years, 11 of them without posting, while still checking from time to time whether there might be an article of interest to me, it is safe to assume this account is now purposeless.

(update: maybe the message posted on linuxfr made people think, the 1999 active accounts list is now down to 28)

Converting video files to H.265/HEVC, no washed out colors and all streams mapped, with ffmpeg

I had a few 1080p video files using the AV1 codec. Not sure why – whether it is a player issue or a hardware issue – but my (slightly aging) laptop was having a hard time playing these, while playing 2160p H.265/HEVC videos with no effort at all. The following command converts all mkv files in a folder to H.265/HEVC, without washing out colors, and keeps all streams:

for av in *.mkv; do ffmpeg -i "$av" -c:v libx265 -color_trc smpte2084 -color_primaries bt2020 -crf 22 -map 0 -map_metadata 0 -map_chapters 0 -c:a copy -c:s copy "${av%.*}"-x265.mkv ; done

In my case, it results in slightly larger files but, and that was the point, these play on the laptop with no noticeable CPU usage:

1,2G 'XX1 AV1 Opus [AV1D].mkv'
1,1G 'XX1 AV1 Opus [AV1D]-x265.mkv'
830M 'XX2 AV1 Opus [AV1D].mkv'
951M 'XX2 AV1 Opus [AV1D]-x265.mkv'
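
To double-check the result, ffprobe (shipped with ffmpeg) can confirm which video codec each converted file now uses:

for av in *-x265.mkv; do
  printf '%s: ' "$av"
  ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 "$av"
done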

Automatically sorting and moving videos from the download to the storage directory

In the spirit of the earlier scripts to clean up an ogg/mp3 collection (tags, filenames) with lltag, the following script is a proposal to automatically move, from a download directory to a storage directory, the video files that deserve to be kept. This is especially useful when both directories are on different physical drives: the move takes a while and typically implies heavy IO usage at the very moment you are actually using the computer that hosts the drives.

The idea is to put a simple mark on directories or files, in the download area, that should be moved to the storage area, assuming the storage area contains top directories used to sort files. The mark is arbitrary: ##Mark##, with Mark matching a top directory within the storage area.

For example, say we have in the download area the file Vabank II, czyli riposta (1985) DVDRip XviD AC3-BR.avi, associated with a subtitle file Vabank II, czyli riposta (1985) DVDRip XviD AC3-BR.en.srt, within a directory named Vabank II. It must go in the storage area top directory Action Espionnage.

The way the sort-download-area.pl script works requires only Vabank II or Vabank II, czyli riposta (1985) DVDRip XviD AC3-BR.avi to be renamed to include ##Action Espionnage##. And, obviously, to make it more practical, it can also be ##Action##, ##action##, ##Espionnage##, ##AE##, ##ActionEspionnage##, and other aliases, as long as they are not confusing regarding the other top directories of the storage area.

Then running sort-download-area.pl --download /mnt/download --storage /mnt/storage (assuming these are the relevant directories) will take care of moving the found video and text/subtitle files (based on mime types and filenames). The old directory will remain: the script won't take any risk of erasing any data by itself.

It can be run with the --debug option to do a dry run, to check if everything is in order, list possible marks, etc. If run as root, it will take care of changing mode and ownership to match the relevant storage area top directory.

Once a satisfying setup is in place (assuming the script is in /usr/local/bin), it is enough to add a /etc/cron.daily/sort-download-area like:

#!/bin/sh
/usr/local/bin/sort-download-area.pl  --download /mnt/download --storage /mnt/storage

Here is the current version of sort-download-area.pl (but you are advised to always take the latest gitlab version):

#!/usr/bin/perl

use strict "vars";
use Fcntl ':flock';
use POSIX qw(strftime);
use File::Find;
use File::Basename;
use File::Path qw(make_path);
use File::Copy qw(move);
use File::MimeInfo;
use File::Slurp qw(read_dir);
use Getopt::Long;
use Term::ANSIColor qw(:constants);

# config:
my $user = "nobody";
my $group = "nobody";
my ($download, $storage);
my $debug = 0;
my ($getopt, $help);

# get standard opts with getopt
eval {
    $getopt = GetOptions("debug" => \$debug,
			 "help" => \$help,
			 "download|d:s" => \$download,
			 "storagedir:s" => \$storage);
};

if ($help) {
    print STDERR <<EOF;
Usage: $0 [OPTIONS]

    -d DIR, --download DIR   (mandatory) path to the download/input area
    -s DIR, --storage DIR    (mandatory) path to the storage/output area

    --debug                  Dry-run debug test


Author: yeupou\@gnu.org
       https://yeupou.wordpress.com/

EOF
    exit(1);    
}

unless ($download and $storage) {
    die "Both --download INDIR and --storage OUTDIR must be provided.\nExiting";
}
unless (-d $download and -d $storage) {
    die "Both $download (--download) and $storage (--storage) must exists.\nExiting";
}

sub debug {
    return unless $debug;
    print $_[1] if $_[1]; 
    print $_[0];
    print RESET if $_[1];
    print "\n";
}

########################################################################
## run

# silently forbid concurrent runs
# (http://perl.plover.com/yak/flock/samples/slide006.html)
open(LOCK, "< $0") or die "Failed to ask lock. Exit";
flock(LOCK, LOCK_EX | LOCK_NB) or exit;


####
#### Find out current possible storage top-dirs
#### (with their respective uid/gid)
# value equal to the real-top dir
my %storage_topdir;
# uid/gid of the real-top dir
my %storage_topdir_uid;
my %storage_topdir_gid;
# keep a list of confusing marks
my %storage_topdir_confusingmark;


debug("\n\nStorage ($storage) top-dirs:\n", ON_CYAN);

for my $dir (read_dir($storage)) {
    next unless -d "$storage/$dir";
    next if ($dir =~ /^\./);   # skip hidden dirs

    # store top dir details
    $storage_topdir{$dir} = $dir;
    $storage_topdir_uid{$dir} = (lstat "$storage/$dir")[4];
    $storage_topdir_gid{$dir} = (lstat "$storage/$dir")[5];    
    debug("\t$storage_topdir{$dir}", GREEN);
    debug("\t($storage_topdir_uid{$dir}:$storage_topdir_gid{$dir})");

    # store also top dir useful aliases (end user might want to use shortcuts)
    # but no checks will be made in case of confusing aliases (ie two top dirs shortened in the same way)
    # for instance, Action Espionnage would also accept:
    #           action espionnage (lowercased)
    #		ActionEspionnage (removal of non-word chars)
    #		actionespionnage (lowercased removal of non-word chars)
    #		AE (only capital letters)
    #           ea (lowercase only capital letters)
    #           Action (single word apart)
    #           action (lowercased single word apart)
    #           Espionnage (single word apart)
    #           espionnage (lowercased single word apart)
    

    # alias as lowercased : WesteRn eq western
    my $alias = lc($dir);
    if ($alias ne $dir) {
	unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
	    debug("\t\t$alias (lowercased)");
	    $storage_topdir{$alias} = $dir;
	} else {
	    debug("\t\tlowercased alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
	    $storage_topdir_confusingmark{$alias} = 1;
	    delete($storage_topdir{$alias});
	}
    }
    # alias with any non-word characters removed
    $alias = $dir;
    $alias =~ s/[^[:alnum:]]//g;
    
    if ($alias ne $dir) {
	unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
	    debug("\t\t$alias (removal of non-word chars)");
	    $storage_topdir{$alias} = $dir;
	} else {
	    debug("\t\tremoval of non-word chars alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
	    $storage_topdir_confusingmark{$alias} = 1;
	    delete($storage_topdir{$alias});
	}
	
	# same lowercased
	$alias = lc($alias);
	if ($alias ne $dir) {
	    unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
		debug("\t\t$alias (lowercased removal of non-word chars)");
		$storage_topdir{$alias} = $dir;
	    } else {
		debug("\t\tlowercased removal of non-word chars alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
		$storage_topdir_confusingmark{$alias} = 1;
		delete($storage_topdir{$alias});
	    }
	}
    }
    # alternatively, only keep the capitalized letters
    $alias = $dir;
    $alias =~ s/[^[:alnum:]]//g;
    $alias =~ s/[^[:upper:]]//g;
    if ($alias ne $dir) {
	unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
	    debug("\t\t$alias (only capital letters)");
	    $storage_topdir{$alias} = $dir;
	} else {
	    debug("\t\tonly capital letter alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
	    $storage_topdir_confusingmark{$alias} = 1;
	    delete($storage_topdir{$alias});
	}
	# same lowercased     
	$alias = lc($alias);
	unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
	    debug("\t\t$alias (lowercased only capital letter alias)");
	    $storage_topdir{$alias} = $dir;
	} else {
	    debug("\t\tlowercased only capital letter alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
	    $storage_topdir_confusingmark{$alias} = 1;
	    delete($storage_topdir{$alias});
	}
    }
    # finally, if several words compose a string, try to register each
    # (this is where it is most likely to find confusing aliases)
    if (split(" ", $dir) > 1) {
	foreach my $word (split(" ", $dir)) {
	    $alias = $word;
	    unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
		debug("\t\t$alias (single word apart)");
		$storage_topdir{$alias} = $dir;
	    } else {
		debug("\t\tsingle word apart alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
		$storage_topdir_confusingmark{$alias} = 1;
		delete($storage_topdir{$alias});
	    }
	    $alias = lc($alias);
	    unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
		debug("\t\t$alias (lowercased single word apart)");
		$storage_topdir{$alias} = $dir;
	    } else {
		debug("\t\tlowercased single word apart alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
		$storage_topdir_confusingmark{$alias} = 1;
		delete($storage_topdir{$alias});
	    }
	}
    }
        
}


####
#### Find out any file or directory that we should be moving
#### (do not start moving files unless we checked everything)

# build an hash of files to move
# (with a secondary hash to keep track of the storage topdir) 
my %tomove;
my %tomove_topdir;


debug("\n\nDownload ($download) files:\n", ON_CYAN);

sub wanted {
    # $File::Find::dir is the current directory name,
    # $_ is the current filename within that directory
    # $File::Find::name is the complete pathname to the file.

    # check if we have a ##STRING## inside
    my $mark;
    $mark = $1 if $File::Find::name =~ m/##(.*)##/;

    # none found, skipping
    return unless $mark;

    # string refers to non-existent directory, skipping
    unless ($storage_topdir{$mark}) {
	debug("Mark $mark found for $File::Find::name while no such storage directory exists in $storage", ON_RED);
	# this is an issue that requires manual handling, print on STDERR
	print STDERR ("Mark $mark found for $File::Find::name while no such storage directory exists in $storage\n");
	return;
    }

    # take into account only videos and text files
    my $suffix;
    $suffix = $1 if $_ =~ /([^\.]*)$/;
    my ($mime_type,$mime_type2) = split("/", mimetype($File::Find::name));
    if ($mime_type ne "video" and
	$mime_type ne "text") {
	# second pass to allow even more text files based on extension
	# (subtitles : srt sub ssa ass idx txt smi)
		
	unless ($suffix eq "srt" or
		$suffix eq "sub" or
		$suffix eq "txt" or
		$suffix eq "ssa" or
		$suffix eq "ass" or
		$suffix eq "idx" or
		$suffix eq "smi") {
	    debug("\tskip $_ ($mime_type/$mime_type2 type)");
	    return;
	}
    }

    my $destination_dir = "$storage/$storage_topdir{$mark}";
    my $destination_file = $_;
    $destination_file =~ s/##(.*)##//g;
    $destination_file =~ s/^\s*//;
    $destination_file =~ s/\s*$//;

    # now handle the special S00E00 case of series, like 30 Rock (2006) - S05E16 or 30 Rock S05E16
    my ($season, $before_season, $show);
    $before_season = $1 and $season = $2 if $_ =~ m/^(.*)S(\d\d)\ ?E\d\d[^\d]/i;
    if ($season) {
	# there is a season, we must determine the show name
	#    30 Rock (2006) - S05E16 => 30 Rock
	# end user must pay attention to have consistent names
	$show = $1 if $before_season  =~ m/^([\w|\s|\.|\'|\,]*)/g;
	# dots often are used in place of white spaces
	$show =~ s/\./ /g;    
	# keep only spaces in shows name, nothing else
	$show =~ s/[^[:alnum:]|\ ]//g;    
	$show =~ s/^\s*//;
	$show =~ s/\s*$//;
	# capitalize first letter
	$show =~ s/\b(\w)/\U$1/g;
	
	# if we managed to find the show name, then set up the specific series tree
	return unless $show;
	debug("found show: $show", MAGENTA);
	$destination_dir = "$storage/$storage_topdir{$mark}/$show/S$season";	    	
    }

    
    # if we reach this point, everything seems in order, plan the move
    debug("plan -> $destination_dir/$destination_file");
    $tomove{$File::Find::name} = "$destination_dir/$destination_file";
    $tomove_topdir{$File::Find::name} = $storage_topdir{$mark};


    # additionally, if we deal with a video, look for any possibly related file
    # that would not have been picked otherwise
    if ($mime_type eq "video") {
	my $other_files_path = $File::Find::name;
	$other_files_path =~ s/\.$suffix$//g;

	debug("glob $other_files_path*");
	my @other_files = (
	    glob("$other_files_path*.srt"),
	    glob("$other_files_path*.sub"),
	    glob("$other_files_path*.txt"),
	    glob("$other_files_path*.ssa"),
	    glob("$other_files_path*.ass"),
	    glob("$other_files_path*.idx"),
	    glob("$other_files_path*.smi"),
	    );
	foreach my $file (@other_files) {
	    # glob returns full paths, only the basename goes into the destination
	    my $related = basename($file);
	    debug("plan -> $destination_dir/$related");
	    $tomove{$file} = "$destination_dir/$related";
	    $tomove_topdir{$file} = $storage_topdir{$mark};
	}
    }

    debug();
      
}
find(\&wanted, $download);

####
#### Actually move files now
####

debug("\n\nMove from download ($download) to storage ($storage):\n", ON_CYAN);

foreach my $file (sort keys %tomove) {

    debug(basename($file), YELLOW);

    my $uid = $storage_topdir_uid{$tomove_topdir{$file}};
    my $gid = $storage_topdir_gid{$tomove_topdir{$file}};    

    # create directory if needed
    my $dir = dirname($tomove{$file});
    unless (-e $dir) {
	make_path($dir, { chmod => 0770, user => $uid, group => $gid }) unless $debug;
	debug("make_path $dir (chmod => 0770, user => $uid, group => $gid)");
    }

    # then move the file (chown if root)
    # avoid overwriting, add number in the end, no extension saving
    my $copies;
    if (-e $tomove{$file}) {
	# destination already exists: find the first free numbered suffix
	$copies = 1;
	while (-e "$tomove{$file}.$copies") {
	    $copies++;
	    # stop at 10, makes no sense to keep more than that amount of copies
	    last if $copies > 9;
	}
    }
    $tomove{$file} .= ".$copies" if $copies;
    move($file, $tomove{$file}) unless $debug;
    chown($uid, $gid, $tomove{$file}) unless $debug or $< ne 0;
    debug("$file -> $tomove{$file}");

    debug();	  
}

# EOF