Replacing PowerDNS with Knot DNS and Knot Resolver + supervisor, with DNSSEC, DNS over TLS and domain name spoofing

I had been considering Knot DNS for a while. Switching to DNS over TLS for resolver queries was the push I needed. It turns out that transposing my setup to Knot DNS is very easy and fast.

Knot Resolver does not provide init scripts and suggests using supervisor as an alternative to the omnipresent systemd facilities. I was wary of the idea, but it turns out that supervisor is very easy to put in place and I might use it more in the future, replacing some xinetd services.

For the record, my setup is as follows: a local DNS server serves the local area network HERE.ici domain, and a resolver caches requests. All requests are sent to the resolver which, if it cannot answer, asks the relevant DNS server. Nothing too fancy, even if LANs are sometimes set up the other way around, with clients querying the local DNS server by default and that server querying the local resolver when it cannot answer.

Installation requires the following:

apt install knot/testing
apt install knot-resolver supervisor

DNS for the local area network

The Knot DNS server will not be queried directly but by the Knot Resolver and DHCPd. Edit /etc/knot/knot.conf by adding:

server:
    # meant to be called only on loopback
    # by knot-resolver and dhcpd on update
    listen: 127.0.1.1@53

acl:
  - id: update_acl
    # restrict by IP is enough, no need for a ddns key stored on the same host
    address: 127.0.0.1
    action: update

zone:
  - domain: HERE.ici
    dnssec-signing: on
    acl: update_acl

  - domain: 10.in-addr.arpa
    dnssec-signing: on
    acl: update_acl

Create the zones (edit serverhostname and HERE.ici according to your setup):

invoke-rc.d knot restart

knotc zone-begin HERE.ici
knotc zone-set HERE.ici @ 7200 SOA serverhostname hostmaster 1 86400 900 691200 3600
knotc zone-set HERE.ici serverhostname 3600 A 10.10.10.1
knotc zone-set HERE.ici @ 3600 NS serverhostname
knotc zone-set HERE.ici @ 3600 MX 10 mx.HERE.ici
knotc zone-set HERE.ici jeden 3600 CNAME serverhostname
knotc zone-commit HERE.ici

knotc zone-begin 10.in-addr.arpa
knotc zone-set 10.in-addr.arpa @ 7200 SOA serverhostname.HERE.ici. hostmaster.HERE.ici. 1 86400 900 691200 3600
knotc zone-set 10.in-addr.arpa 10.10.10.1 3600 PTR serverhostname
knotc zone-set 10.in-addr.arpa @ 3600 NS serverhostname.HERE.ici.
knotc zone-commit 10.in-addr.arpa

Zones will be updated by the DHCP server, in this case ISC dhcpd. Edit /etc/dhcp/dhcpd.conf accordingly:

# dynamic update
ddns-updates on;
ddns-update-style standard;
ignore client-updates; # restrict to domain name

# option definitions common to all supported networks...
option domain-name "HERE.ici";
option domain-search "HERE.ici";
# you can add extra name servers here if you consider
# direct external queries acceptable in case the resolver is dead
option domain-name-servers 10.0.0.1;
option routers 10.0.0.1;
default-lease-time 600;
max-lease-time 6000;
update-static-leases on;
authoritative;

 [...]

zone HERE.ici. {
  primary 127.0.1.1;
}
zone 10.in-addr.arpa. {
  primary 127.0.1.1;
}

No dynamic update keys, everything goes through the loopback. You might want to erase the DHCPd leases (usually in /var/lib/dhcp/) so it does not get confused.

DNS Resolver

The Knot Resolver will handle all client queries, contacting Internet DNS servers over TLS when needed and caching results. Edit /etc/knot-resolver/kresd.conf to contain:

-- Network interface configuration
-- (knot dns should be using 127.0.1.1)
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('127.0.0.1', 853, { kind = 'tls' })
net.listen('10.0.0.1', 53, { kind = 'dns' })
net.listen('10.0.0.1', 853, { kind = 'tls' })

-- drop privileges (check /var/lib/knot-resolver modes/owner)
user('knot-resolver', 'knot-resolver')

-- Load useful modules
modules = {
   'hints > iterate',  -- Load /etc/hosts and allow custom root hints
   'stats',            -- Track internal statistics
   'predict',          -- Prefetch expiring/frequent records
   'view',             -- required to limit access
}

-- Cache size
cache.size = 500 * MB

-- whitelist queries identified by subnet
view:addr('127.0.0.0/24', policy.all(policy.PASS))
view:addr('10.0.0.0/24', policy.all(policy.PASS))
-- drop everything that hasn't matched
view:addr('0.0.0.0/0', policy.all(policy.DROP))

-- Custom hints: local spoofed address and antispam/ads
hints.add_hosts("/etc/knot-resolver/redirect-spoof")
hints.add_hosts("/etc/knot-resolver/redirect-ads")

-- internal domain: use knot dns listening on loopback
internalDomains = policy.todnames({'HERE.ici', '10.in-addr.arpa'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.STUB({'127.0.1.1@53'}), internalDomains))

-- forward in TLS
policy.add(policy.all(policy.TLS_FORWARD({
	{'208.67.222.222', hostname='dns.opendns.com'},
	{'208.67.220.220', hostname='dns.opendns.com'},
	{'1.1.1.1', hostname='cloudflare-dns.com'},
	{'1.0.0.1', hostname='cloudflare-dns.com'},
})))

The redirect-spoof and redirect-ads files are in /etc/hosts format: they allow domain spoofing or ads domain filtering. They conveniently replace the extra Lua script that my setup was using with PowerDNS.
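For reference, both files use the plain /etc/hosts format; hypothetical entries (example.* names made up) could look like:

```
# /etc/knot-resolver/redirect-spoof: point a public name to a LAN host
10.10.10.1	www.example.org

# /etc/knot-resolver/redirect-ads: sinkhole unwanted domains
0.0.0.0	ads.example.com
0.0.0.0	tracker.example.net
```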

Update Feb 19 2023: check the recent files on gitlab, I now use RPZ instead of hints/hosts files to block hostile domains. No real change in principle, but knot-resolver seems to handle very long lists better in this form.
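As a sketch of the RPZ approach (domains and path made up): each hostile domain is aliased to the root in an ordinary zone file, which kresd loads through its policy module:

```
; /etc/knot-resolver/block.rpz (sketch)
$TTL 300
@ SOA localhost. root.localhost. 1 3600 900 604800 300
ads.example.com CNAME .
tracker.example.net CNAME .
```

with, in kresd.conf, something like policy.add(policy.rpz(policy.DENY, '/etc/knot-resolver/block.rpz')).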

Finally, the resolver needs to be started by supervisord, with a /etc/supervisor/conf.d/knot-resolver.conf as such:

[program:knot-resolver]
command=/usr/sbin/kresd -c /etc/knot-resolver/kresd.conf --noninteractive
priority=0
autostart=true
autorestart=true
stdout_syslog=true
stderr_syslog=true
directory=/var/lib/knot-resolver

[program:knot-resolver-gc]
command=/usr/sbin/kres-cache-gc -c /var/lib/knot-resolver -d 120000
user=knot-resolver
autostart=true
autorestart=true
stdout_syslog=true
stderr_syslog=true
directory=/var/lib/knot-resolver

Restart the supervisor and check the logs. Everything should be fine. You can then clean up.

rc-update add supervisor
rc-update add knot
apt --purge remove pdns-*

# check if there is still traffic on DNS port 53 on the public network interface (there should be none)
tcpdump -ni eth0 -p port 53
# check if there is traffic on DNS over TLS port 853 (there should be, whenever a query falls outside the cache and LAN)
tcpdump -ni eth0 -p port 853

(My default files are in my rien-host package; if you have on your network a mail server using DNS blacklists, which will inevitably be blocked, you might want to install knot-resolver on this server too, in recursive mode.)

Banning IP on two iptables chains with fail2ban

If you use LXC containers over IPv4, it is very likely you use NAT with iptables. I found no immediate way to get fail2ban with iptables/ipset to apply bans on both INPUT (for the LXC master) and FORWARD (for the LXC slaves).

In /etc/fail2ban/jail.local

banaction = iptables-ipset-proto6-allports

(proto6 refers to ipset itself)

In /etc/fail2ban/action.d/iptables.local

[Init]
blocktype=DROP
chain=INPUT
chain2=FORWARD

# brute-force add a forward rule,
# leaving INPUT as the default for the relevant tests
[Definition]
_ipt_add_rules = <_ipt_for_proto-iter>
              { %(_ipt_check_rule)s >/dev/null 2>&1; } || { <iptables> -I <chain> %(_ipt_chain_rule)s; <iptables> -I <chain2> %(_ipt_chain_rule)s; }
              <_ipt_for_proto-done>

_ipt_del_rules = <_ipt_for_proto-iter>
              <iptables> -D <chain> %(_ipt_chain_rule)s
              <iptables> -D <chain2> %(_ipt_chain_rule)s
              <_ipt_for_proto-done>

After restarting fail2ban, you should find the relevant rules running:

fail2ban-client stop ; fail2ban-client start
iptables-save  | grep match
-A INPUT -p tcp -m set --match-set f2b-ssh src -j DROP
-A INPUT -p tcp -m set --match-set f2b-saslauthd src -j DROP
-A INPUT -p tcp -m set --match-set f2b-banstring src -j DROP
-A INPUT -p tcp -m set --match-set f2b-xinetd-fail src -j DROP
-A FORWARD -p tcp -m set --match-set f2b-ssh src -j DROP
-A FORWARD -p tcp -m set --match-set f2b-saslauthd src -j DROP
-A FORWARD -p tcp -m set --match-set f2b-banstring src -j DROP
-A FORWARD -p tcp -m set --match-set f2b-xinetd-fail src -j DROP

I’d like to act at the PREROUTING level but found no easy way to do it. I’d welcome any suggestion to get to this result with fewer changes to the default fail2ban setup.

Switching from SpamAssassin+Bogofilter+Exim to Rspamd+Exim+Dovecot

For more than ten years, I used SpamAssassin and Bogofilter along with Exim to filter spam, plus SPF and greylisting directly within Exim.

Why change?

I must say that almost no spam reached me unflagged for years. Why change anything then?

First, I have more users now and the system was not really multiuser-aware. For instance, the bayesian filter training cronjob had a single configured SPAMDIR, etc.

Second, my whole setup was based on specific transports and routers in Exim, sending mail first to bogofilter, then to spamassassin. That means filtering is done after SMTP time, when the mail has already been accepted. You filter but do not discourage or block spam sources.

Rspamd?

General (Rspamd | SpamAssassin | DSPAM):

Written in:          C/Lua | Perl | C
Process model:       event driven | pre-forked pool | LDA and pre-forked
MTA integration:     milter, LDA, custom | milter, custom (Amavis) | LDA
Web interface:       embedded | 3rd party | -
Languages support:   full, UTF-8 conversion/normalisation, lemmatization | naïve (ASCII lowercase) | naïve
Scripting support:   Lua API | Perl plugins | -
Licence:             Apache 2 | Apache 2 | GPL
Development status:  very active | active | abandoned

Rspamd seems actively developed and easy to integrate not only with Exim, the SMTP server, but also with Dovecot, which I use as IMAPS server.

Instead of having:

Exim SMTP accept with greylist -> bogofilter -> spamassassin -> procmail -> dovecot 

The idea is to have:

Exim SMTP accept with greylist and rspamd -> dovecot with sieve filtering 

It rejects/discards spam earlier and makes filtering easier in a multiuser environment (sieve is not dangerous, unlike procmail, and can be managed by clients, if desirable).

My new setup is contained in my rien-mx package: the initial greylist system is still there.

Exim

What matters most are the acl_check_rcpt definition (already used in the previous version) and the new acl_check_data definition:

### acl/41_rien-check_data_spam
#################################
# based on https://rspamd.com/doc/integration.html
# - using CHECK_DATA_LOCAL_ACL_FILE included in the acl_check_data instead of creating a new acl
# - and scan all the messages no matter the source:
#    because some might be forwarded by smarthost client, requiring scanning with no defer/reject

## process earlier scan

# find out if a (positive) spam level is already set
warn
  condition = ${if match{$h_X-Spam-Level:}{\N\*|\+\N}}
  set acl_m_spamlevel = $h_X-Spam-Level:
warn
  condition = ${if match{$h_X-Spam-Bar:}{\N\*|\+\N}}
  set acl_m_spamlevel = $h_X-Spam-Bar:
warn
  condition = ${if match{$h_X-Spam_Bar:}{\N\*|\+\N}}
  set acl_m_spamlevel = $h_X-Spam_Bar:

# discard high probability spam identified by earlier scanner
# (probably forwarded by a friendly server, since it is unlikely that a spam source would shoot
# itself in the foot, no point to generate bounces)
discard
  condition = ${if >={${strlen:$acl_m_spamlevel}}{15}}
  log_message = discard as high-probability spam announced

# at least make sure X-Spam-Status is set if relevant
warn
  condition = ${if and{{ !def:h_X-Spam-Status:}{ >={${strlen:$acl_m_spamlevel}}{6} }}}
  add_header = X-Spam-Status: Yes, earlier scan ($acl_m_spamlevel)

# accept content from relayed hosts with no spam check
# unless registered in final_from_hosts (they are outside the local network)
accept
  hosts = +relay_from_hosts
  !hosts = ${if exists{CONFDIR/final_from_hosts}\
		      {CONFDIR/final_from_hosts}\
		      {}}

# rename earlier reports and score
warn
  condition = ${if def:h_X-Spam-Report:}
  add_header = X-Spam-Report-Earlier: $h_X-Spam-Report:
warn
  condition = ${if def:h_X-Spam_Report:}
  add_header = X-Spam-Report-Earlier: $h_X-Spam_Report:
warn
  condition = ${if def:h_X-Spam-Score:}
  add_header = X-Spam-Score-Earlier: $h_X-Spam-Score:
warn
  condition = ${if def:h_X-Spam_Score:}
  add_header = X-Spam-Score-Earlier: $h_X-Spam_Score:


# scan the message with rspamd
warn spam = nobody:true
# This will set variables as follows:
# $spam_action is the action recommended by rspamd
# $spam_score is the message score (we unlikely need it)
# $spam_score_int is spam score multiplied by 10
# $spam_report lists symbols matched & protocol messages
# $spam_bar is a visual indicator of spam/ham level

# remove foreign headers except spam-status, because it is better to have it twice than not at all
warn
  remove_header = x-spam-bar : x-spam_bar : x-spam-score : x-spam_score : x-spam-report : x-spam_report : x-spam_score_int : x-spam_action : x-spam-level
  
# add spam-score and spam-report headers
# (a condition could be added to only add headers when rspamd recommends it:
#   condition = ${if eq{$spam_action}{add header}})
warn
  add_header = X-Spam-Score: $spam_score
  add_header = X-Spam-Report: $spam_report

# add x-spam-status header if message is not ham
# do not match when $spam_action is empty (e.g. when rspamd is not running)
warn
  ! condition  = ${if match{$spam_action}{^no action\$|^greylist\$|^\$}}
  add_header = X-Spam-Status: Yes

# add x-spam-bar header if score is positive
warn
  condition = ${if >{$spam_score_int}{0}}
  add_header = X-Spam-Bar: $spam_bar

## delay/discard/deny depending on the scan
  
# use greylisting with rspamd
# (unless coming from authenticated or relayed host)
defer message    = Please try again later
   condition  = ${if eq{$spam_action}{soft reject}}
   !hosts = ${if exists{CONFDIR/final_from_hosts}\
		       {CONFDIR/final_from_hosts}\
		       {}}
   !authenticated = *
   log_message  = greylist $sender_host_address according to soft reject spam filtering

# high probability spam get silently discarded if 
# coming from authenticated or relayed host
discard
   condition  = ${if eq{$spam_action}{reject}}
   hosts = ${if exists{CONFDIR/final_from_hosts}\
		       {CONFDIR/final_from_hosts}\
		       {}}
   log_message  = discard as high-probability spam from final from host

discard
   condition  = ${if eq{$spam_action}{reject}}
   authenticated = *
   log_message  = discard as high-probability spam from authentificated
   
# refuse high probability spam from other sources
deny  message    = Message discarded as high-probability spam
   condition  = ${if eq{$spam_action}{reject}}
   log_message	= reject mail from $sender_host_address as high-probability spam

These two ACLs will send messages through rspamd and accept/reject/discard mail accordingly.

A dovecot_lmtp transport is also necessary:

dovecot_lmtp:   
  debug_print = "T: dovecot_lmtp for $local_part@$domain"   
  driver = lmtp   
  socket = /var/run/dovecot/lmtp   
  #maximum number of deliveries per batch, default 1   
  batch_max = 200   
  # remove suffixes/prefixes   
  rcpt_include_affixes = false 

There are also other relevant files, especially in conf.d/main. If you want to follow my setup, you are encouraged to download at least the whole mx/etc/exim folder. Most files have comments, making it easy to find out whether they are relevant or not. Or you can just copy/paste relevant settings into etc/conf.d/main/10_localsettings, for instance:

# address of the rspamd daemon
spamd_address = 127.0.0.1 11333 variant=rspamd

# data acl definition 
CHECK_DATA_LOCAL_ACL_FILE =  /etc/exim4/conf.d/acl/41_rien-check_data_spam

# memcache traditional greylisting
GREY_MINUTES  = 0.4
GREY_TTL_DAYS = 25
# we greylist servers, so we keep it to the minimum required to cross-check with SPF
#   sender IP, sender domain
GREYLIST_ARGS = {${quote:$sender_host_address}}{${quote:$sender_address_domain}}{GREY_MINUTES}{GREY_TTL_DAYS}

Other files in exim4/conf.d/ are useful for other local features a bit outside the scope of this article (per-business target email aliases, specific handling of friendly relays, SMTP forwarding to a specific authenticated SMTP for specific domains when sending mail).

Dovecot

This assumes that dovecot already works (with all components installed). Nonetheless, you need to set up LMTP delivery by editing /etc/dovecot/conf.d/20-lmtp.conf as follows:

# to be added
lmtp_proxy = no
lmtp_save_to_detail_mailbox = no
lmtp_rcpt_check_quota = no
lmtp_add_received_header = no 

protocol lmtp {
  # Space separated list of plugins to load (default is global mail_plugins).
  mail_plugins = $mail_plugins
  # remove domain from user name
  auth_username_format = %n
}

You also need to edit /etc/dovecot/conf.d/90-sieve.conf:

# to be added

 # editheader is restricted to admin global sieve
 sieve_global_extensions = +editheader

 # run global sieves (sievec must be run manually every time they are updated)
 sieve_before = /etc/dovecot/sieve.d/

You also need to edit /etc/dovecot/conf.d/20-imap.conf:

protocol imap {   
  mail_plugins = $mail_plugins imap_sieve
}

You also need to edit /etc/dovecot/conf.d/90-plugin.conf:

plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms
  sieve_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment

  imapsieve_mailbox4_name = Spam
  imapsieve_mailbox4_causes = COPY APPEND
  imapsieve_mailbox4_before = file:/usr/local/lib/dovecot/report-spam.sieve

  imapsieve_mailbox5_name = *
  imapsieve_mailbox5_from = Spam
  imapsieve_mailbox5_causes = COPY
  imapsieve_mailbox5_before = file:/usr/local/lib/dovecot/report-ham.sieve

  imapsieve_mailbox3_name = Inbox
  imapsieve_mailbox3_causes = APPEND
  imapsieve_mailbox3_before = file:/usr/local/lib/dovecot/report-ham.sieve

  sieve_pipe_bin_dir = /usr/local/lib/dovecot/
}

You need custom scripts to train dovecot: both shell and sieve filters. /usr/local/lib/dovecot/report-ham.sieve:

require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.mailbox" "*" {
  set "mailbox" "${1}";
}

if string "${mailbox}" "Trash" {
  stop;
}

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

pipe :copy "sa-learn-ham.sh" [ "${username}" ];

/usr/local/lib/dovecot/report-spam.sieve:

require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

pipe :copy "sa-learn-spam.sh" [ "${username}" ];

/usr/local/lib/dovecot/sa-learn-ham.sh

#!/bin/sh
exec /usr/bin/rspamc learn_ham

/usr/local/lib/dovecot/sa-learn-spam.sh

#!/bin/sh 
exec /usr/bin/rspamc learn_spam

Then you need a /etc/dovecot/sieve.d similar to mine, holding all site-wide sieve scripts. Mine are shown as examples of what can be done easily with sieve. Regarding spam, they will only flag it; the end-user sieve filter does the rest:

#; -*-sieve-*-
require ["editheader", "regex", "imap4flags", "vnd.dovecot.pipe", "copy"];

# simple flagging for easy per-user sorting
# chained, so only a single X-Sieve-Mark is possible

## flag Spam
if anyof (
	  header :regex "X-Spam-Status" "^Yes",
	  header :regex "X-Spam-Flag" "^YES",
	  header :regex "X-Bogosity" "^Spam",
	  header :regex "X-Spam_action" "^reject")
{
  # flag for the mail client
  addflag "Junk";
  # header for further parsing
  addheader "X-Sieve-Mark" "Spam";
  # autolearn
  pipe :copy "sa-learn-spam.sh";
}
## sysadmin
elsif address :localpart ["from", "sender"] ["root", "netdata", "mailer-daemon"]
{
  addheader "X-Sieve-Mark" "Sysadmin";
} 
## social network
elsif address :domain :regex ["to", "from", "cc"] ["^twitter\.",
						   "^facebook\.",
						   "^youtube\.",
						   "^mastodon\.",
						   "instagram\."]
{
  addheader "X-Sieve-Mark" "SocialNetwork";
}
## computer related
elsif address :domain :regex ["to", "from", "cc"] ["debian\.",
						   "devuan\.",
						   "gnu\.",
						   "gitlab\.",
						   "github\."]
{
  addheader "X-Sieve-Mark" "Cpu";
}

Each time scripts are modified in this folder, sievec must be run by root (because otherwise sieve scripts are compiled by the current user, which cannot write in /etc for obvious reasons):

sievec -D /etc/dovecot/sieve.d
sievec -D /usr/local/lib/dovecot

Finally, as an example of an end-user sieve script (to put in ~/.dovecot.sieve):

#; -*-sieve-*-
require ["fileinto", "regex", "vnd.dovecot.pipe", "copy"];

if header :is "X-Sieve-Mark" "Spam"
{
   # do not care about a potential false positive on undisclosed recipients
  if address :contains ["to", "cc", "bcc"] ["undisclosed recipients", "undisclosed-recipients"]
		{
		  discard;
		  stop;
		}

  # otherwise just put in dedicated folder		
  fileinto "Spam";
  stop;
}

Rspamd

Rspamd was installed from the devuan/debian package (it is not clear to me why the Rspamd people discourage using these packages on their website, no context is given). It works out of the box.

I also installed clamav, razor and redis. Rspamd requires a lot of small tuning, check the folder /etc/rspamd.
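As a sketch of that tuning (values are mine, adjust to taste), pointing the bayes statistics at the redis instance goes through local.d overrides:

```
# /etc/rspamd/local.d/redis.conf
servers = "127.0.0.1";

# /etc/rspamd/local.d/classifier-bayes.conf
backend = "redis";
autolearn = true;
```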

To get razor running, requests are passed through /etc/xinetd.d/razor:

service razor
{
#	disable		= yes
	type		= UNLISTED
	socket_type     = stream
	protocol	= tcp
	wait		= no
	user		= _rspamd
	bind            = 127.0.0.1
	only_from	= 127.0.0.1
	port		= 11342
	server		= /usr/local/bin/razord
}

going along with the wrapper script /usr/local/bin/razord:

#!/bin/sh
/usr/bin/razor-check && echo -n "spam" || echo -n "ham"

It is configured to work the same way with pyzor but so far that does not work (not clear to me why; it seems to be an IPv6 issue too, see below).

I noticed issues with IPv6: my mail servers are still IPv4 only and Rspamd nonetheless sometimes tries to connect over IPv6. I solved the issue by commenting out ::1 localhost in /etc/hosts.
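So the workaround amounts to this in /etc/hosts (lines as found on a typical Debian/Devuan install):

```
127.0.0.1	localhost
# ::1		localhost ip6-localhost ip6-loopback
```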

Results

So far it works as expected (except for the IPv4 vs IPv6 and pyzor issues). Rspamd required a bit more work than expected, but once it is going, it looks good.

Obviously, in the process, I lost the benefit of a well-trained Bogofilter, but I hope Rspamd’s own bayesian filters will kick in soon enough.

In my setup there are extra files related to replication over multiple servers that I might cover in another article (replication of email, of users’ sieve filters through nextcloud, and of the shared redis database via stunnel). The switch to Rspamd+Exim+Dovecot made this replication over multiple servers much better.

UPDATE: pipe :copy vs execute :pipe

Using pipe :copy in sieve scripts actually causes issues. Sieve’s pipe is a disposition-type action: it is intended to deliver the message, similarly to a fileinto or redirect command. As such, if the command returns failure, the sieve filter stops. That is not desirable if we use rspamd with learn_condition (defined in statistic.conf) to avoid learning the same message several times, etc. It leads to errors like these in the logs, with sieve scripts prematurely ended:

[dovecot log]
Apr  9 21:08:10 mx dovecot: lmtp(userx)<31807><11nYLZrZUWI/fAAA4k3FvQ>: program exec:/usr/local/lib/dovecot/sa-learn-spam.sh (31810): Terminated with non-zero exit code 1
Apr  9 21:08:10 mx dovecot: lmtp(userx)<31807><11nYLZrZUWI/fAAA4k3FvQ>: Error: sieve: failed to execute to program `sa-learn-spam.sh': refer to server log for more information.
Apr  9 21:08:10 mx dovecot: lmtp(userx)<31807><11nYLZrZUWI/fAAA4k3FvQ>: sieve: msgid=<20220409190707.0A81E808EB@xxxxxxxxxxxxxxxxxxx>: stored mail into mailbox 'INBOX'
Apr  9 21:08:10 mx dovecot: lmtp(userx)<31807><11nYLZrZUWI/fAAA4k3FvQ>: Error: sieve: Execution of script /etc/dovecot/sieve.d/20_rien-mark.sieve failed, but implicit keep was successful

[rspamd log]
2022-04-09 21:08:10 #4264(controller) <eb2984>; csession; rspamd_stat_classifier_is_skipped: learn condition for classifier bayes returned: already in class spam; probability 93.38%; skip classifier
2022-04-09 21:08:10 #4264(controller) <eb2984>; csession; rspamd_task_process: learn error: all learn conditions denied learning spam in default classifier

We get an implicit keep: an already known and identified spam is forcefully stored into INBOX because the learning failed, precisely since the message was already known.

Using execute :pipe instead solves the issue and matches what we really want: the spam/ham learning process is an extra step, involved neither in the filtering nor in the delivery of the message. Its failure or success is irrelevant to the delivery process.
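With execute (provided by the same sieve_extprograms plugin; you may need to allow vnd.dovecot.execute in the sieve extensions configuration), report-spam.sieve becomes something like:

```
require ["vnd.dovecot.execute", "imapsieve", "environment", "variables"];

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

execute :pipe "sa-learn-spam.sh" [ "${username}" ];
```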

Using execute, the non-zero exit code of the executed script is still logged, but with no other effect, in particular it does not stop further sieve processing:

[dovecot log]
Apr 10 15:12:09 mx dovecot: lmtp(userx)<3450><iqoWCKnXUmJ6DQAA4k3FvQ>: program exec:/usr/local/lib/dovecot/sa-learn-spam.sh (3451): Terminated with non-zero exit code 1
Apr 10 15:12:09 mx dovecot: lmtp(userx)<3450><iqoWCKnXUmJ6DQAA4k3FvQ>: sieve: msgid=<6252b1ee.1c69fb81.8a4e2.f847@xxxxxxxxxxxx>: fileinto action: stored mail into mailbox 'Spam'

[rspamd log]
2022-04-10 15:12:09 #9025(controller) <820d3e>; csession; rspamd_task_process: learn error: all learn conditions denied learning spam in default classifier

Check the dovecot-related files for up-to-date examples/versions: /etc/dovecot and /usr/local/lib/dovecot.

Checking potential mail server issues with SASL/PAM and Rainloop

I recently ran into the two following issues that exist more or less out of the box.

SASL/PAM: beware of empty passwords

On my mail servers, authenticated SMTP exists for the domains’ users. It works using sasl2-bin through PAM.

If some unix accounts on this server happen to have an empty password, like:

thisuser::17197:0:99999:7:::

You would expect the lack of password (as can be the case for the root account on a LXC container, after creation) to prevent login. It won’t: a blank password will be accepted as a valid authenticated login.

In /etc/pam.d/common-auth you can find an explanation:

auth [success=1 default=ignore] pam_unix.so nullok

It effectively makes (at least on the current Devuan/Debian stable) the server an open relay for thisuser.

Some people have reported it already, and there are spambots around that specifically try root with a blank password (without any other stupid attempt to misuse the server beforehand).

Here is a workaround, simply making sure no user has a blank password field in /etc/shadow:

usermod -p '!' thisuser
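To spot such accounts in the first place, checking the second field of /etc/shadow is enough; a sketch on a sample file (run the awk line against the real /etc/shadow, as root):

```shell
# build a two-line shadow-style sample: root has a password hash set,
# thisuser has an empty password field
printf 'root:x:17197:0:99999:7:::\nthisuser::17197:0:99999:7:::\n' > shadow.sample
# print accounts whose password field (2nd field) is empty
awk -F: '$2 == "" { print $1 }' shadow.sample
# prints: thisuser
```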

Rainloop: unmaintained, existing security issues

If you use the webmail Rainloop, be aware that it seems to be no longer maintained, with several attempts by different people to warn the team about security issues left unanswered.

While we can hope that nothing tragic happened in the lives of the people behind the software to cause this odd silence, the best course of action for now seems to be switching to the Snappymail fork, which includes security fixes and is actively developed.

Preventing stored files deletion without assigning to root or using chattr

To prevent accidental or malicious file deletion (for instance of a local collection of images, or a collection of movies on a Samba server), one option is to assign the directory to root or to use chattr to make these files immutable (which also requires root privileges).

That works. But any further modification would then require root privileges.

The proposed approach is, instead, to change the ownership of files that have reached a certain age (one week, one month or one year) to a dedicated “read-only” user, in such a way that usual users can still add new files to the collection directories but can no longer remove the old ones being safekept.

This is not opposed to backups or filesystem snapshots; it is a step to prevent data loss rather than to cure it.

Say that, on your video storage library on a samba server, files added by guest users are forcibly assigned to nobody:smbusers. You would then create a dedicated nobody-ro user and configure in /etc/read-only-this.conf the library path to be handled by the read-only-this.pl script. Run by a daily cronjob, read-only-this.pl would reassign all files older than, say, one week to nobody-ro:smbusers, with no group write privilege. Directories would get the special T sticky bit, so samba guest users would still be able to add new files but not to remove old ones.
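The core of the idea can be sketched with find on a demo tree (the actual script also chowns old files to nobody-ro, which requires root and is left out here; the path and the one-week threshold are illustrative):

```shell
# demo tree with one old and one new file
mkdir -p demo/library
touch -d '10 days ago' demo/library/old.avi
touch demo/library/new.avi
# files older than one week lose group and other write access
find demo/library -type f -mtime +7 -exec chmod g-w,o-w {} +
# sticky bit on directories: users can still add files,
# but can only delete entries they own
find demo/library -type d -exec chmod +t {} +
```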

It would still be possible to let nobody-ro log in through samba, shell or whatever scripts, to enable file removal or directory reorganisation. But the video storage library is protected from regular users’ mistakes and from malicious deletion by scripts running under regular user accounts.

(Note that the read-only-this.pl script only cares about video/image/audio/document mime types; the mime-type selection might later be added as a configuration option.)

It is included in the rien-common package.

Sorting and moving automatically videos from download to storage directory

In the spirit of the earlier scripts to clean up an ogg/mp3 collection (tags, filenames) with lltag, the following script is a proposal to automatically move, from a download directory to a storage directory, the video files that deserve to be kept. This is especially useful when both directories are on different physical drives: the move takes a while and typically implies heavy IO usage at the very moment you are actually using the computer that hosts the drives.

The idea is to put a simple mark on directories or files, in the download area, that should be moved to the storage area, assuming the storage area contains top directories to sort files into. The mark is arbitrary: ##Mark##, Mark matching a top directory within the storage area.

For example, say we have in the download area the file Vabank II, czyli riposta (1985) DVDRip XviD AC3-BR.avi, associated with a subtitle file Vabank II, czyli riposta (1985) DVDRip XviD AC3-BR.en.srt, within a directory named Vabank II. It must go in storage area top directory Action Espionnage.

The way sort-download-area.pl works, it only requires Vabank II or Vabank II, czyli riposta (1985) DVDRip XviD AC3-BR.avi to be renamed to include ##Action Espionnage##. And obviously, to make it more practical, it can also be ##Action##, ##action##, ##Espionnage##, ##AE##, ##ActionEspionnage##, or other aliases, as long as they are not confusing with regard to other top directories of the storage area.

Then running sort-download-area.pl --download /mnt/download --storage /mnt/storage (assuming these are the relevant directories) will take care of moving the matched video and text/subtitle files (based on mime-types and filenames). The old directory will remain; the script won’t take the risk of erasing any data by itself.

It can be run with the --debug option for a dry run, to check that everything is in order, list the possible marks, etc. If run as root, it will also change mode and ownership to match the relevant storage area top directory.

Once a satisfying setup is in place (assuming the script is in /usr/local/bin), it is enough to add a /etc/cron.daily/sort-download-area like:

#!/bin/sh
/usr/local/bin/sort-download-area.pl  --download /mnt/download --storage /mnt/storage

Here is the current version of sort-download-area.pl (but you are advised to always take the latest gitlab version):

#!/usr/bin/perl

use strict "vars";
use Fcntl ':flock';
use POSIX qw(strftime);
use File::Find;
use File::Basename;
use File::Path qw(make_path);
use File::Copy qw(move);
use File::MimeInfo;
use File::Slurp qw(read_dir);
use Getopt::Long;
use Term::ANSIColor qw(:constants);

# config:
my $user = "nobody";
my $group = "nobody";
my ($download, $storage);
my $debug = 0;
my ($getopt, $help);

# get standard opts with getopt
eval {
    $getopt = GetOptions("debug" => \$debug,
			 "help" => \$help,
			 "download|d:s" => \$download,
			 "storage|s:s" => \$storage);
};

if ($help) {
    print STDERR <<EOF;
Usage: $0 [OPTIONS]

    -d DIR, --download DIR   (mandatory) path to the download/input area
    -s DIR, --storage DIR    (mandatory) path to the storage/output area

    --debug                  Dry-run debug test


Author: yeupou\@gnu.org
       https://yeupou.wordpress.com/

EOF
    exit(1);    
}

unless ($download and $storage) {
    die "Both --download INDIR and --storage OUTDIR must be provided.\nExiting";
}
unless (-d $download and -d $storage) {
    die "Both $download (--download) and $storage (--storage) must exist.\nExiting";
}

sub debug {
    return unless $debug;
    print $_[1] if $_[1]; 
    print $_[0];
    print RESET if $_[1];
    print "\n";
}

########################################################################
## run

# silently forbid concurrent runs
# (http://perl.plover.com/yak/flock/samples/slide006.html)
open(LOCK, "< $0") or die "Failed to ask lock. Exit";
flock(LOCK, LOCK_EX | LOCK_NB) or exit;


####
#### Find out current possible storage top-dirs
#### (with their respective uid/gid)
# value equal to the real-top dir
my %storage_topdir;
# uid/gid of the real-top dir
my %storage_topdir_uid;
my %storage_topdir_gid;
# keep a list of confusing marks
my %storage_topdir_confusingmark;


debug("\n\nStorage ($storage) top-dirs:\n", ON_CYAN);

for my $dir (read_dir($storage)) {
    next unless -d "$storage/$dir";
    next if ($dir =~ /^\./);   # skip hidden dirs

    # store top dir details
    $storage_topdir{$dir} = $dir;
    $storage_topdir_uid{$dir} = (lstat "$storage/$dir")[4];
    $storage_topdir_gid{$dir} = (lstat "$storage/$dir")[5];    
    debug("\t$storage_topdir{$dir}", GREEN);
    debug("\t($storage_topdir_uid{$dir}:$storage_topdir_gid{$dir})");

    # store also top dir useful aliases (end user might want to use shortcuts)
    # confusing aliases (ie two top dirs shortened in the same way) are detected and skipped
    # for instance, Action Espionnage would also accept:
    #           action espionnage (lowercased)
    #		ActionEspionnage (removal of non-word chars)
    #		actionespionnage (lowercased removal of non-word chars)
    #		AE (only capital letters)
    #           ea (lowercase only capital letters)
    #           Action (single word apart)
    #           action (lowercased single word apart)
    #           Espionnage (single word apart)
    #           espionnage (lowercased single word apart)
    

    # alias as lowercased : WesteRn eq western
    my $alias = lc($dir);
    if ($alias ne $dir) {
	unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
	    debug("\t\t$alias (lowercased)");
	    $storage_topdir{$alias} = $dir;
	} else {
	    debug("\t\tlowercased alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
	    $storage_topdir_confusingmark{$alias} = 1;
	    delete($storage_topdir{$alias});
	}
    }
    # alias with space in place of any non word characters
    $alias = $dir;
    $alias =~ s/[^[:alnum:]]//g;
    
    if ($alias ne $dir) {
	unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
	    debug("\t\t$alias (removal of non-word chars)");
	    $storage_topdir{$alias} = $dir;
	} else {
	    debug("\t\tremoval of non-word chars alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
	    $storage_topdir_confusingmark{$alias} = 1;
	    delete($storage_topdir{$alias});
	}
	
	# same lowercased
	$alias = lc($alias);
	if ($alias ne $dir) {
	    unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
		debug("\t\t$alias (lowercased removal of non-word chars)");
		$storage_topdir{$alias} = $dir;
	    } else {
		debug("\t\tlowercased removal of non-word chars alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
		$storage_topdir_confusingmark{$alias} = 1;
		delete($storage_topdir{$alias});
	    }
	}
    }
    # alternatively, only keep the capitalized letters
    $alias = $dir;
    $alias =~ s/[^[:alnum:]]//g;
    $alias =~ s/[^[:upper:]]//g;
    if ($alias ne $dir) {
	unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
	    debug("\t\t$alias (only capital letters)");
	    $storage_topdir{$alias} = $dir;
	} else {
	    debug("\t\tonly capital letter alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
	    $storage_topdir_confusingmark{$alias} = 1;
	    delete($storage_topdir{$alias});
	}
	# same lowercased     
	$alias = lc($alias);
	unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
	    debug("\t\t$alias (lowercased only capital letter alias)");
	    $storage_topdir{$alias} = $dir;
	} else {
	    debug("\t\tlowercased only capital letter alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
	    $storage_topdir_confusingmark{$alias} = 1;
	    delete($storage_topdir{$alias});
	}
    }
    # finally, if several words compose a string, try to register each
    # (this is where it is most likely to find confusing aliases)
    if (split(" ", $dir) > 1) {
	foreach my $word (split(" ", $dir)) {
	    $alias = $word;
	    unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
		debug("\t\t$alias (single word apart)");
		$storage_topdir{$alias} = $dir;
	    } else {
		debug("\t\tsingle word apart alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
		$storage_topdir_confusingmark{$alias} = 1;
		delete($storage_topdir{$alias});
	    }
	    $alias = lc($alias);
	    unless ($storage_topdir{$alias} or $storage_topdir_confusingmark{$alias}) {
		debug("\t\t$alias (lowercased single word apart)");
		$storage_topdir{$alias} = $dir;
	    } else {
		debug("\t\tlowercased single word apart alias ($alias) is confusing regarding earlier items, skipping", ON_RED);
		$storage_topdir_confusingmark{$alias} = 1;
		delete($storage_topdir{$alias});
	    }
	}
    }
        
}


####
#### Find out any file or directory that we should be moving
#### (do not start moving files unless we checked everything)

# build an hash of files to move
# (with a secondary hash to keep track of the storage topdir) 
my %tomove;
my %tomove_topdir;


debug("\n\nDownload ($download) files:\n", ON_CYAN);

sub wanted {
    # $File::Find::dir is the current directory name,
    # $_ is the current filename within that directory
    # $File::Find::name is the complete pathname to the file.

    # check if we have a ##STRING## inside
    my $mark;
    $mark = $1 if $File::Find::name =~ m/##(.*)##/;

    # none found, skipping
    return unless $mark;

    # string refers to non-existant directory, skipping
    unless ($storage_topdir{$mark}) {
	debug("Mark $mark found for $File::Find::name while no such storage directory exists in $storage", ON_RED);
	# this is an issue that requires manual handling, print on STDERR
	print STDERR ("Mark $mark found for $File::Find::name while no such storage directory exists in $storage\n");
	return;
    }

    # take into account only videos and text files
    my $suffix;
    $suffix = $1 if $_ =~ /([^\.]*)$/;
    my ($mime_type,$mime_type2) = split("/", mimetype($File::Find::name));
    if ($mime_type ne "video" and
	$mime_type ne "text") {
	# second pass to allow even more text files based on extension
	# (subtitles : srt sub ssa ass idx txt smi)
		
	unless ($suffix eq "srt" or
		$suffix eq "sub" or
		$suffix eq "txt" or
		$suffix eq "ssa" or
		$suffix eq "ass" or
		$suffix eq "idx" or
		$suffix eq "smi") {
	    debug("\tskip $_ ($mime_type/$mime_type2 type)");
	    return;
	}
    }

    my $destination_dir = "$storage/$storage_topdir{$mark}";
    my $destination_file = $_;
    $destination_file =~ s/##(.*)##//g;
    $destination_file =~ s/^\s*//;
    $destination_file =~ s/\s*$//;

    # now handle the special S00E00 case of series, like 30 Rock (2006) - S05E16 or 30 Rock S05E16
    my ($season, $before_season, $show);
    ($before_season, $season) = ($1, $2) if $_ =~ m/^(.*)S(\d\d)\ ?E\d\d[^\d]/i;
    if ($season) {
	# there is a season, we must determine the show name
	#    30 Rock (2006) - S05E16 => 30 Rock
	# end user must pay attention to have consistent names
	$show = $1 if $before_season  =~ m/^([\w|\s|\.|\'|\,]*)/g;
	# dots often are used in place of white spaces
	$show =~ s/\./ /g;    
	# keep only spaces in shows name, nothing else
	$show =~ s/[^[:alnum:]|\ ]//g;    
	$show =~ s/^\s*//;
	$show =~ s/\s*$//;
	# capitalize first letter
	$show =~ s/\b(\w)/\U$1/g;
	
	# if we managed to find the show name, then set up the specific series tree
	return unless $show;
	debug("found show: $show", MAGENTA);
	$destination_dir = "$storage/$storage_topdir{$mark}/$show/S$season";	    	
    }

    
    # if we reach this point, everything seems in order, plan the move
    debug("plan -> $destination_dir/$destination_file");
    $tomove{$File::Find::name} = "$destination_dir/$destination_file";
    $tomove_topdir{$File::Find::name} = $storage_topdir{$mark};


    # additionally, if we deal with a video, look for any possibly related file to add also that would not have been picked
    # otherwise
    if ($mime_type eq "video") {
	my $other_files_path = $File::Find::name;
	$other_files_path =~ s/\.$suffix$//g;

	debug("glob $other_files_path*");
	# use bsd_glob: plain glob() does not interpolate single-quoted
	# patterns and would split them on the spaces commonly found in
	# video filenames
	require File::Glob;
	my @other_files =
	    map { File::Glob::bsd_glob("$other_files_path*.$_") }
	    qw(srt sub txt ssa ass idx smi);
	foreach my $file (@other_files) {
	    # glob returned full paths; only the basename goes to the destination
	    my $filename = basename($file);
	    debug("plan -> $destination_dir/$filename");
	    $tomove{$file} = "$destination_dir/$filename";
	    $tomove_topdir{$file} = $storage_topdir{$mark};
	}
    }

    debug();
      
}
find(\&wanted, $download);

####
#### Actually move files now
####

debug("\n\nMove from download ($download) to storage ($storage):\n", ON_CYAN);

foreach my $file (sort keys %tomove) {

    debug(basename($file), YELLOW);

    my $uid = $storage_topdir_uid{$tomove_topdir{$file}};
    my $gid = $storage_topdir_gid{$tomove_topdir{$file}};    

    # create directory if needed
    my $dir = dirname($tomove{$file});
    unless (-e $dir) {
	make_path($dir, { chmod => 0770, user => $uid, group => $gid }) unless $debug;
	debug("make_path $dir (chmod => 0770, user => $uid, group => $gid)");
    }

    # then move the file (chown if root)
    # avoid overwriting: append a copy number at the end (extension not preserved)
    my $copies;
    if (-e $tomove{$file}) {
	$copies = 1;
	while (-e "$tomove{$file}.$copies") {
	    $copies++;
	    # stop at 10, makes no sense to keep more copies than that
	    last if $copies > 9;
	}
    }
    $tomove{$file} .= ".$copies" if $copies;
    move($file, $tomove{$file}) unless $debug;
    chown($uid, $gid, $tomove{$file}) unless $debug or $< ne 0;
    debug("$file -> $tomove{$file}");

    debug();	  
}

# EOF

Moving a live encrypted system from one hard disk to another

This is a short memento based on the earlier articles Moving a live system from one hard disk to another and Single passphrase to boot Devuan GNU/Linux with multiple encrypted partitions. It is useful when you start moving your system partitions over from HDDs to SSDs, which nowadays are clearly worth their cheap price.

This article is made for Devuan GNU/Linux but should not be distro-specific: you just might want to replace the devuan string in the later commands with something else.

We start by setting some variables depending on the relevant drives. If in doubt about which drive is which, running lsblk should help.

# new NVMe disk 
NDISK=/dev/nvme0n1
NPART_PREFIX=p

# or alternatively for a SATA new disk:
# NDISK=/dev/sdb
# NPART_PREFIX=

LUKS_PREFIX=250g21

(AHCI SATA SSDs are faster than HDDs, but AHCI itself will be the bottleneck, so I’d suggest installing an NVMe SSD if your mainboard allows it.)

# key necessary to mount all partitions with a single passphrase

key=/boot/klucz
if [ ! -e $key ]; then dd if=/dev/urandom of=$key bs=1024 count=4 && chmod 400 $key ; fi

The next step is to replicate the disk structure. While this article is BIOS-boot based, it should work along with UEFI too:

parted $NDISK

(parted shell)
mklabel gpt
mkpart biosreserved ext2 1049kB  50,3MB
toggle 1 bios_grub
mkpart efi fat32 50,3MB  500MB
toggle 2 msftdata
mkpart swap linux-swap 500MB   16,0GB
toggle 3 swap
mkpart root ext4 16,0GB 250GB
print

Model: KINGSTON SA2000M8250G (nvme)
Disk /dev/nvme1n1: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name          Flags
 1      1049kB  50,3MB  49,3MB                  biosreserved  bios_grub
 2      50,3MB  500MB   450MB                   efi           msftdata
 3      500MB   16,0GB  15,5GB  linux-swap(v1)  swap          swap
 4      16,0GB  250GB   234GB   ext4            root

quit

We build the new system partition (LUKS1 is still mandatory with GRUB, though we can suppose that won’t be true forever):

cryptsetup luksFormat --type luks1 "$NDISK$NPART_PREFIX"4
cryptsetup luksOpen "$NDISK$NPART_PREFIX"4 "$LUKS_PREFIX"devuan64
cryptsetup  luksAddKey "$NDISK$NPART_PREFIX"4 $key
mkfs.ext4 /dev/mapper/"$LUKS_PREFIX"devuan64 -L "$LUKS_PREFIX"devuan64

mkdir /tmp/"$LUKS_PREFIX"devuan64
mount /dev/mapper/"$LUKS_PREFIX"devuan64 /tmp/"$LUKS_PREFIX"devuan64

ignore="backups home dev lost+found media proc run sys tmp"
for dir in $ignore; do touch /$dir.ignore ; done
for dir in /*; do
    if [ -d $dir ]; then
        if [ ! -e $dir.ignore ]; then
            /usr/bin/rsync --archive --one-file-system --delete $dir /tmp/"$LUKS_PREFIX"devuan64/
        else
            if [ ! -e /tmp/"$LUKS_PREFIX"devuan64/$dir ]; then
                mkdir /tmp/"$LUKS_PREFIX"devuan64/$dir
            fi
            rm $dir.ignore
        fi
    fi
done

We update required system files:

echo " " >> /tmp/"$LUKS_PREFIX"devuan64/etc/crypttab
echo "# "`date` >> /tmp/"$LUKS_PREFIX"devuan64/etc/crypttab
echo "$LUKS_PREFIX"devuan64 UUID=`blkid -s UUID -o value "$NDISK$NPART_PREFIX"4` $key luks,tries=3,discard >> /tmp/"$LUKS_PREFIX"devuan64/etc/crypttab
echo "$LUKS_PREFIX"swap `find -L /dev/disk -samefile "$NDISK$NPART_PREFIX"3 | grep by-id | tail -1` /dev/urandom cipher=aes-xts-plain64,size=256,swap,discard >> /tmp/"$LUKS_PREFIX"devuan64/etc/crypttab

# comment out old lines
nano /tmp/"$LUKS_PREFIX"devuan64/etc/crypttab


echo " " >> /tmp/"$LUKS_PREFIX"devuan64/etc/fstab
echo "# "`date` >> /tmp/"$LUKS_PREFIX"devuan64/etc/fstab
echo "/dev/mapper/"$LUKS_PREFIX"devuan64	/		ext4	errors=remount-ro		0 1" >> /tmp/"$LUKS_PREFIX"devuan64/etc/fstab
echo "/dev/mapper/"$LUKS_PREFIX"swap	none		swap	sw		0 0" >> /tmp/"$LUKS_PREFIX"devuan64/etc/fstab

# comment out old lines
nano /tmp/"$LUKS_PREFIX"devuan64/etc/fstab



echo "Make sure this is in grub config:"
echo
echo GRUB_CMDLINE_LINUX=\"rd.luks.key=$key:UUID=`blkid "$NDISK$NPART_PREFIX"4 -s UUID -o value`\"
echo GRUB_ENABLE_CRYPTODISK=\"y\"
echo GRUB_PRELOAD_MODULES=\"luks cryptodisk lvm\"

# update grub config
nano /tmp/"$LUKS_PREFIX"devuan64/etc/default/grub

Last step is to install the boot loader on the new disk:

mount --bind /dev /tmp/"$LUKS_PREFIX"devuan64/dev
mount --bind /sys /tmp/"$LUKS_PREFIX"devuan64/sys
mount -t proc /proc /tmp/"$LUKS_PREFIX"devuan64/proc
chroot /tmp/"$LUKS_PREFIX"devuan64

grub-mkdevicemap
update-initramfs -u

# needs to be retyped since it is not set in the chroot environment
NDISK=/dev/nvme0n1

grub-install $NDISK
grub-mkconfig > /boot/grub/grub.cfg

That’s all.

Using PowerDNS (server and recursor) with DNSSEC and domain name spoofing/caching

update, October 2021: I consider stopping using PowerDNS. I do not want to rely on unreliable people (debian bug #997054 and debian bug #997056). update, February 2023: done with Knot DNS and Knot Resolver!

I updated my earlier PowerDNS (server and recursor) setup along with domain name spoofing/caching. This is a short update to allow DNSSEC usage and unlimited list of destinations from spoof/cache list. Files described here can be found on gitlab.

DNSSEC

Adding DNSSEC support can be done easily by creating /etc/powerdns/recursor.d/10-dnssec.conf:

# dnssec	DNSSEC mode: off/process-no-validate 
#                    (default)/process/log-fail/validate
dnssec=validate

#################################
# dnssec-log-bogus	Log DNSSEC bogus validations
dnssec-log-bogus=no

Every local zone must be excluded by adding to /etc/powerdns/recursor.lua:

addNTA("10.10.10.in-addr.arpa", "internal zone")

New redirect.lua renewal script

The earlier version provided a static /etc/powerdns/redirect.lua, which depended on redirect-cached.lua, redirect-ads.lua and redirect-blacklisted.lua, containing lists of domains to either blacklist (meaning: redirect to loopback) or spoof.

Now, the script redirect-rebuild.pl uses the configuration file redirect-spooflist.conf to generate redirect.lua. The ads blacklist part is unchanged.

The configuration syntax is as follows:

# IP:	domain domain 

# redirect thisdomain.lan and thisother.lan to 192.168.0.1,
# except if 192.168.0.1 is asking 
192.168.0.1: thisdomain.lan thisother.lan 

# redirect anotherthisdomain.lan and anotherthisother.lan to 10.0.0.1,
# even if 10.0.0.1 is asking 
10.0.0.1+:    anotherthisdomain.lan anotherthisother.lan 

# you can use 127.0.0.1: to blacklist domains
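As a quick sketch of how such a line ends up interpreted (same regexes as the renewal script below, with a made-up line):

```perl
# parse one spooflist line: "IP:" or "IP+:" followed by a domain list
my $line = "10.0.0.1+:    anotherthisdomain.lan anotherthisother.lan";
my ($ip, @domains);
if ($line !~ /^#/ and $line =~ s/^(.*?):\s*//) {
    $ip = $1;
    @domains = split " ", $line;
}
# "+" along with the IP means: always spoof, no matter who is asking
my $always = 0;
$always = 1 if $ip =~ s/(\+)//g;
# here $ip is "10.0.0.1", $always is 1 and @domains holds both domains
```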

It is enough to run the redirect-rebuild.pl script and restart the recursor:

use strict;
use Fcntl ':flock';

my $spooflist = "redirect-spooflist.conf";
my $ads_lua = "redirect-ads.lua";
my $ads_pl = "redirect-ads-rebuild.pl";
my $main_lua = "redirect.lua";

# disallow concurrent run
open(LOCK, "< $0") or die "Failed to ask lock. Exiting";
flock(LOCK, LOCK_EX | LOCK_NB) or die "Unable to lock. This daemon is already alive. Exiting";

# first check if we have an ads list to block
# if not, run the local script to build it
unless (-e $ads_lua) {
    print "$ads_lua missing\n";
    print "run $ads_pl\n" and do "./$ads_pl" if -x $ads_pl;
}

my %cache;
# read conf
open(LIST, "< $spooflist");
while (<LIST>) {
    next if /^#/;
    next unless s/^(.*?):\s*//;
    $cache{$1} = [ split ];
}
close(LIST);

# build lua
open(NEWCONF, "> $main_lua");
printf NEWCONF ("-- Generated on %s by $0\n", scalar localtime);
print NEWCONF '-- IPv4 only script

-- ads kill list
ads = newDS()
adsdest = "127.0.0.1"
ads:add(dofile("/etc/powerdns/redirect-ads.lua"))

-- spoof lists
';

foreach my $ip (keys %cache) {
    # special handling of IP+, + meaning we spoof even to the destination host
    my $name = $ip;
    $name =~ s/(\.|\+)//g;  
    print NEWCONF "spoof$name = newDS()\n";
    print NEWCONF "spoof$name:add{", join(", ", map "\"$_\"", sort @{$cache{$ip}}), "}\n";
    $ip =~ s/(\+)//g;
    print NEWCONF "spoofdest$name = \"$ip\"\n";
}

print NEWCONF '
function preresolve(dq)
   -- DEBUG
   --pdnslog("Got question for "..dq.qname:toString().." from "..dq.remoteaddr:toString().." to "..dq.localaddr:toString(), pdns.loglevels.Error)
   
   -- spam/ads domains
   if(ads:check(dq.qname)) then
     if(dq.qtype == pdns.A) then
       dq:addAnswer(dq.qtype, adsdest)
       return true
     end
   end
    ';

foreach my $ip (keys %cache) {
    my $always = 0;
    $always = 1 if ($ip =~ s/(\+)//g);     # + along with IP means always spoof no matter who is asking
    my $name = $ip;
    $name =~ s/\.//g;

    print NEWCONF '
   -- domains spoofed to '.$ip.'
   if(spoof'.$name.':check(dq.qname)) then';
    print NEWCONF '
     dq.variable = true
     if(dq.remoteaddr:equal(newCA(spoofdest'.$name.'))) then
       -- request coming from the spoof/cache IP itself, no spoofing
       return false
     end' unless $always;
    print NEWCONF '   
     if(dq.qtype == pdns.A) then
       -- redirect to the spoof/cache IP
       dq:addAnswer(dq.qtype, spoofdest'.$name.')
       return true
     end
   end
	';
}

print NEWCONF '
   return false
end
';
close(NEWCONF);


# EOF

Dealing with Nextcloud broken oc_flow_operations table after 19 upgrade

I finally upgraded my Nextcloud server, using nanogallery2 instead of relying on the brand new Photos app, since the latter does not provide a simple way to share a gallery and this is clearly not about to be resolved (the developers, who at first considered there was no problem to fix, finally said that people were too harsh in trying to convince them otherwise, so they lost interest in fixing it). Well, in any case, using static files for this purpose is much more efficient.

During the upgrade, I hit an issue with the oc_flow_operations table that is mentioned in numerous reports (the whole first page of a google search).

I first fixed it, as suggested by some users, by adding a missing column. That allowed the upgrade to finish but something was still off, with the logfile growing by a few MB every minute.

Finally, recreating the whole table fixed it for me:

cat /etc/mysql/debian.cnf | grep password
mysql -udebian-sys-maint -p
connect nextcloud;
DROP table oc_flow_operations ;
CREATE table oc_flow_operations (id   int(11) NOT NULL  AUTO_INCREMENT PRIMARY KEY, class  varchar(256), name     varchar(256) NOT NULL, checks   longtext, operation longtext,entity  varchar(256),events longtext);
su www-data -s /bin/bash
php occ maintenance:repair

Fetching mails from gmail.com with lieer/notmuch instead of fetchmail

Google is gradually making traditional IMAPS access to gmail.com impossible, in its usual opaque way. It is claimed to be a security plus; though, if your data is already on gmail.com, you are already accepting that it can be and is spied on, so whether the extra fuss is justified is not so obvious.

Nonetheless, I have a few secondary old boxes on gmail that I’d still like to extract mails from, so as not to miss any, without bothering to connect with a web browser. But the usual fetchmail setup is no longer reliable.

The easiest alternative is to use lieer, notmuch and procmail together.

apt install links notmuch lieer procmail

# set up as regular user
su enduser

boxname=mygoogleuser
mkdir -p ~/mail/$boxname
notmuch
notmuch new
cd ~/mail/$boxname
gmi init $boxname@gmail.com

# at this moment, you'll get an URL to connect to gmail.com 
# with a web browser
# 
# if it started links instead, exit cleanly (no control-C)
# to get the URL
#
# that will then redirect to a URL on localhost:8080,
# to no effect if you are on a distant server;
# in that case, just run, in another terminal
links "localhost:8080/..."
# or  run `gmi --noauth_local_webserver auth`  and use the URL
# on another browser on another computer

The setup should be OK at this point. You can check and toy a bit with it:

gmi sync
notmuch new
notmuch search tag:unread
notmuch show --format=mbox thread:000000000000009c

Then you should mark all previous messages as read:

for thread in `notmuch search --output=threads tag:unread`; do echo "$thread" && notmuch tag -unread "$thread" ; done
gmi push

Finally, we set up the mail forward (in my case, the fetch action is done in a dedicated container, so mails are forwarded to ‘user’ by SMTP to the destination host, but procmail allows any setup) and the fetchmail script: each new unread mail is forwarded and then marked as read:

echo ':0
! user' > ~/.procmailrc

echo '#!/bin/bash

BOXES=""

# no concurrent run
if pidof -o %PPID -x "fetchmail.sh">/dev/null; then
        exit 1
fi

for box in $BOXES; do
    cd ~/mail/$box/
    # get data
    gmi pull --quiet
    notmuch new --quiet >/dev/null
    for thread in `notmuch search --output=threads tag:unread`; do
	# send unread through procmail
	notmuch show --format=mbox "$thread" | procmail
	# mark as read
	notmuch tag -unread "$thread"
    done
    # send data
    gmi push --quiet
    cd
done

# EOF' > ~/fetchmail.sh
chmod +x ~/fetchmail.sh

Then it is enough to call the script (with BOXES="boxname" properly set) and to include it in a cronjob, for instance using crontab -e.
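For instance, a crontab entry along these lines (the ten-minute interval and the path are assumptions to adapt to your setup):

```shell
# fetch and forward gmail boxes every ten minutes
*/10 * * * * $HOME/fetchmail.sh
```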

notmuch does not remove mails.