Isn’t SRS breaking SPF itself, at least regarding spam?

Earlier on this blog, I proposed ways to implement SPF (Sender Policy Framework). I recently noticed mails forwarded by one of my servers being tagged as spam by gmail.com due to SPF checks. It means that while SPF works for my domains, with a near-zero user base and no real business of forwarding, it is a nuisance for forwarding in general. So you are advised to use SRS (Sender Rewriting Scheme). Strangely enough, it is not fully integrated in main servers and some implementations (Exim in Debian) are based on an unmaintained library (the SRS C library).

Unmaintained?

Fact is, SRS is far from being nice. It makes your own forwarding server vouch for forwarded mails. But why would you want that?

The SPF test fails because your forwarding server is not a registered valid source for (forwarded) mails sent from domain X. The SRS proposal is that your server rewrites the envelope sender so that the mail forwarded from domain X appears to be sent from an address of your own domain, for which your server is a registered valid source.
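To illustrate (addresses are made up, and the hash/timestamp fields are placeholders), forwarding a mail originally sent by alice@example.org through a box of yours would roughly turn the envelope sender into:

# before forwarding
MAIL FROM:<alice@example.org>
# after SRS rewriting by the forwarder (HHH = hash, TT = timestamp)
MAIL FROM:<SRS0=HHH=TT=example.org=alice@thisdomain>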

I guess the logic is to make forwarders somehow responsible for filtering, which is not bad in principle.

But it also means that for each spam the forwarder fails to identify, it will be tagged as the spam originator. It is particularly annoying when forwarding is done for public addresses bound to attract spam. So it seems better to get a failed SPF test on every forwarded message, including valid ones, than a valid SPF test on every forwarded message, including spam.

SPF without SRS breaks forwarding. But SPF with SRS, the workaround, breaks SPF itself regarding spam and will give you (your IPs, your domains) a bad reputation, which will put your legit mail at risk of being blacklisted, unless you apply an overly harsh policy on forwarded mails.

Annoying. I am thinking of removing SPF completely, instead. For now, I am updating my SPF records to remove any Fail statement, since there is no way for me to know whether one of my mails can legitimately be forwarded through several servers. Funny enough, Google, which promotes SPF usage, recommends using SoftFail over Fail. But I might even reset to Neutral.
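For reference, the difference between the three is only the qualifier in front of the final all mechanism (the mx a mechanisms here are just examples):

"v=spf1 mx a -all"   ; Fail: mails from any other source should be rejected
"v=spf1 mx a ~all"   ; SoftFail: other sources are dubious but should not be rejected outright
"v=spf1 mx a ?all"   ; Neutral: no assertion is made about other sources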

Interesting links on the topic: Mail server setup with SRS; Why not SPF?

Alternative: I implemented DKIM on my servers. Seems much smarter to have a server signature.
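For reference, the public key side of DKIM is also just a DNS TXT record, something like this (the selector name mail is arbitrary and the key is truncated):

mail._domainkey IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB...AB"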


Receiving and parsing DMARC reports on main and secondary mail servers

DMARC is made so aggregated reports are sent by other servers to yours, in order for you to find out whether some spam sources are trying to impersonate your domains. DMARC works on top of SPF and DKIM, so you need at least one of these. I already mentioned SPF here (even though my setup changed since then, since I now use “v=spf1 mx a -all” as SPF DNS record). If you have neither SPF nor DKIM, I suggest you take a look at mail-tester.com first.

The destination of these reports is set in your DMARC DNS record, something like:

v=DMARC1; p=quarantine; rua=mailto:dmarc@thisdomain; sp=quarantine; ri=604800

Unfortunately, these reports are XML and frequent. They are not made to be read by humans. And not so many solutions to parse and aggregate them are available; not so surprisingly, since lots of paid services based on this are provided by companies.
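To give an idea of what lands in the mailbox, here is an abridged and made-up aggregate report; a real one contains one such record block per source IP seen:

<feedback>
  <report_metadata>
    <org_name>google.com</org_name>
    <date_range><begin>1508716800</begin><end>1508803199</end></date_range>
  </report_metadata>
  <policy_published>
    <domain>thisdomain</domain>
    <p>quarantine</p>
  </policy_published>
  <record>
    <row>
      <source_ip>198.51.100.1</source_ip>
      <count>2</count>
      <policy_evaluated><dkim>pass</dkim><spf>pass</spf></policy_evaluated>
    </row>
  </record>
</feedback>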

I do not require anything overly fancy; here is my basic setup satisfying my modest needs: it only has to work on two twin mail servers (mxA and mxB here), no matter if one is down.  Since reports are sent by mail, hence to whichever of the two servers accepts them first, they need to be replicated between mxA and mxB.

On each server, create a dmarc user. Then create home subdirectories:

adduser dmarc
su dmarc
cd 
mkdir mxA mxB

As suggested earlier, add the dmarc user as recipient for the reports in the DNS record:

_dmarc 10800 IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@thisdomain; sp=quarantine; ri=604800"
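Once the zone is reloaded, you can check that the record is actually served:

dig +short TXT _dmarc.thisdomain
# "v=DMARC1; p=quarantine; rua=mailto:dmarc@thisdomain; sp=quarantine; ri=604800"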

Installing the parser, filling the database

On each server, install dmarcts-report-parser.

# included in my -utils-exim deb package but you can simply clone it:
git clone https://github.com/techsneeze/dmarcts-report-parser.git dmarcts-report-parser
cp dmarcts-report-parser/dmarcts-report-parser.pl /usr/local/bin
chmod +x /usr/local/bin/dmarcts-report-parser.pl

# copy conffile (assuming we're on mxA)
cp dmarcts-report-parser/dmarcts-report-parser.conf.sample /home/dmarc/mxA/dmarcts-report-parser.conf

# requires some libraries
apt-get install libmail-imapclient-perl libmime-tools-perl libxml-simple-perl libclass-dbi-mysql-perl libio-socket-inet6-perl libio-socket-ip-perl libperlio-gzip-perl libmail-mbox-messageparser-perl unzip

The conffile needs to be modified, most notably:

$delete_reports = 1;
$delete_failed = 1;
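The same conffile also holds the database connection settings; assuming the layout of the sample file, they look like this (adjust to your own MySQL setup, the password here is obviously a placeholder):

$dbname = 'dmarc';
$dbuser = 'dmarc';
$dbpass = 'password';
$dbhost = 'localhost';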

You also need a MySQL server with a dmarc database:

apt-get install mariadb-server
mysql -e "CREATE DATABASE dmarc"
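If you do not want the parser to connect as root, create a dedicated MySQL user matching the credentials set in the conffile above (again, the password is a placeholder):

mysql -e "CREATE USER 'dmarc'@'localhost' IDENTIFIED BY 'password'"
mysql -e "GRANT ALL PRIVILEGES ON dmarc.* TO 'dmarc'@'localhost'"
mysql -e "FLUSH PRIVILEGES"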

From this moment, the command cd /home/dmarc/mxA/ && dmarcts-report-parser.pl -m *.mbox should already work. But you need to fill these mbox files. We’ll do so with a simple ~/.procmailrc as follows (this assumes local mail for the dmarc user is delivered through procmail; Debian’s default Exim configuration does so when procmail is installed and ~/.procmailrc exists):

# this server and its twin
HERE=mxA
NOTHERE=mxB

DATE=`date +%Y%m%d`
# mbox to be parsed locally
DEFAULT=$HOME/$HERE/$DATE-$HERE.mbox
# copy to be pushed to the twin server
CLONE=$HOME/$NOTHERE/$DATE-$HERE.mbox

LOGFILE=$HOME/received-reports.log
# carbon-copy recipe: keep a copy for the twin server
:0c:
$CLONE

# regular delivery to the local mbox
:0:
$DEFAULT

For log rotation, we create a specific ~/.logrotaterc as follows:

/home/dmarc/received-reports.log {
 weekly
 rotate 4
 compress
 missingok
}

Finally, it is just a matter of adding a cronjob so the script looks for .mbox files on a daily basis (and rotates logs), crontab -e :

# feed database
15 4 * * * cd ~/mxA/ && /usr/local/bin/dmarcts-report-parser.pl -m *.mbox 2>/dev/null >/dev/null
# rotate logs
0 5 * * * /usr/sbin/logrotate /home/dmarc/.logrotaterc --state /home/dmarc/.logrotate.state

Replicating data

Now, we need to set up some SSH access. Only on mxA:

ssh-keygen -t rsa
cat .ssh/id_rsa.pub

The output of the cat command just has to be copied on mxB into ~/.ssh/authorized_keys
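Alternatively, ssh-copy-id does the same in one step, provided the dmarc user can still log in on mxB with a password:

ssh-copy-id -i ~/.ssh/id_rsa.pub dmarc@mxB.thisdomain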

Again on mxA, we set up the cronjob doing the actual copy (and removal after copy) with crontab -e

0 3 * * * rsync --quiet --ignore-missing-args --include '*.mbox' --exclude '*' -e "ssh -p 22" mxB.thisdomain:~/mxA/* ~/mxA/ && rsync --quiet --ignore-missing-args --include '*.mbox' --exclude '*' -e "ssh -p 22" ~/mxB/ mxB.thisdomain:~/mxB/ && rm -f ~/mxB/*.mbox && ssh -p 22 mxB.thisdomain 'rm -f ~/mxA/*.mbox'
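Since this one-liner is rather dense, here is the same logic spelled out as a small script (the name sync-dmarc.sh is made up; the cron entry above could call it instead):

#!/bin/sh
# pull reports that mxB received and cloned for us (its ~/mxA/ directory)
rsync --quiet --ignore-missing-args --include '*.mbox' --exclude '*' \
  -e "ssh -p 22" mxB.thisdomain:~/mxA/* ~/mxA/ || exit 1
# push reports we cloned for mxB (our ~/mxB/ directory)
rsync --quiet --ignore-missing-args --include '*.mbox' --exclude '*' \
  -e "ssh -p 22" ~/mxB/ mxB.thisdomain:~/mxB/ || exit 1
# once both copies succeeded, drop the local and remote duplicates
rm -f ~/mxB/*.mbox
ssh -p 22 mxB.thisdomain 'rm -f ~/mxA/*.mbox'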

 

Viewing reports

On an HTTP server configured to run PHP files, install dmarcts-report-viewer.

cd /srv
git clone https://github.com/techsneeze/dmarcts-report-viewer.git dmarc
cd dmarc
ln -s dmarcts-report-viewer.php index.php
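For completeness, a possible minimal way to serve it, assuming Apache with mod_php (the vhost name dmarc.thisdomain is made up; any PHP-enabled web server works as well):

apt-get install apache2 libapache2-mod-php php-mysql

# /etc/apache2/sites-available/dmarc.conf
<VirtualHost *:80>
    ServerName dmarc.thisdomain
    DocumentRoot /srv/dmarc
    <Directory /srv/dmarc>
        Require all granted
    </Directory>
</VirtualHost>

a2ensite dmarc && systemctl reload apache2

Note that the viewer also needs to be given the database connection settings; see its documentation in the cloned directory.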

Eye-candy wise, it is perfectible – but the data is there.

Avoiding Spams with SPF and greylisting within Exim

A year ago, I posted an article describing a way to slay spams with both Bogofilter and SpamAssassin embedded in Exim. This method was proven effective for my mailboxes: since then, over a timespan of one year, Bogofilter caught ~85 % of actual spams, and SpamAssassin (called only if the mail was not already flagged unfavorably by Bogofilter) caught ~15 %. Do the math, I had almost none left to flag by hand.

Why would I change such setup? For fun, obviously 🙂

Actually, I made no change, I just implemented SPF (Sender Policy Framework) and greylisting.

I noticed that plenty of spams were sent to my server @thisdomain claiming to be sent by whoever@thisdomain. These dirty spams were easily caught by the Bogofilter/SpamAssassin duo but, still, it annoyed me that @thisdomain was misused. SPF allows you to list, using DNS records, which servers/computers are allowed to send mails from addresses @thisdomain. SPF checks are predefined in Exim out of the box, so I’ll skip its configuration. The relevant DNS record (with bind9), allowing only two boxes (primary and secondary mail servers) designated by their IP to send mails @thisdomain, looks like:

thisdomain. IN  TXT  "v=spf1 ip4:78.249.xxx.xxx ip4:86.65.xxx.xxx -all"

Result: since I implemented SPF on my domains, there was no change in the number of spams caught. However, during this period, my primary server’s list of temporary bans dropped from 200/100 IPs to 40/20 IPs. I cannot pinpoint with certainty the cause of this evolution, because the temporary bans list depends on plenty of things. But pretending to send mails @thesedomainsgrilledbySPF surely lost some interest for spambots. Implementing SPF is actually not about helping ourselves directly but indirectly: reducing the effectiveness of spambots helps everybody.

I have been using greylisting on my secondary mail server for a while and I noticed over the years that this one almost never had to ban IPs. Not that it never received spam, but it almost never received mails from very obvious spam sources identified at SMTP time. It seems that most very obvious spam sources never insist enough to pass through greylisting. I guess that most spambots are coded to skip any mail server that does not immediately accept a proper SMTP transaction, because they have no time to waste, considering how small the percentage of sent spams that actually reach someone real is.

This greylisting uses the following files (and assumes memcached and libcache-memcached-perl are properly installed):

So I gave greylisting a try on my primary mail server, but with a very short waiting time, because waiting 5 minutes, for example, to receive a mail from a not-yet-known source is not acceptable. I edited the relevant conf.d/main/ file to set GREY_MINUTES = 0.5 and GREY_TTL_DAYS = 25.
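Concretely, in whichever conf.d/main/ file holds the greylisting macros in your setup (the exact file name depends on how greylisting was installed), the two values become:

GREY_MINUTES = 0.5
GREY_TTL_DAYS = 25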

Result: no change regarding the number of caught spams. However, as on the secondary mail server, the number of banned IPs is near to none. It looks like most obvious spam sources don’t even wait 30 seconds – and actually that is quite a rational choice on their part, as they would be banned anyway if they did.