Sunday, October 23, 2016

Software Defined Radio and ADS-B with RTL-SDR

I recently participated in an introduction to Software Defined Radio (SDR). The point of the introduction was to get an RTL-SDR device running and then check out ADS-B. This was a load of fun and I've pulled together here a few notes to document my experience as a newbie.

The notes here focus on the software that worked for me, with particular focus on getting meaningful ADS-B data.

What is SDR?

Software Defined Radio (SDR) has advantages over traditional radio designs because it farms aspects of the signal processing out to software, rather than requiring expensive dedicated hardware. Additionally, SDR provides a practical way to handle wide ranges of frequencies at the same time.

What is RTL?

RTL is shorthand for the affordable and apparently versatile Realtek RTL2832U chipset found in the SDR dongles referenced in this blog. See more information on the website.

What is ADS-B?

The introduction I attended was to demonstrate the RTL-SDR as a radio receiver and then move to listening in on ADS-B data from aircraft. This data includes flight information like where the aircraft is in the air, how fast it's moving, flight number, an ICAO 24-bit transponder code and so on.

Is Listening to ADS-B Legal?

Apparently so, and it's commonly done, since services like FlightRadar24 openly track the same data. ADS-B is broadcast unencrypted and is meant to be received, though rules on radio reception do vary between countries, so it pays to check your local regulations.

What RTL-SDR Should I Buy?

Two weeks before the SDR introduction was scheduled, I purchased a NooElec NESDR SMArt - Premium RTL-SDR via Amazon but the thing never showed up. At the last minute I purchased instead the NooElec NESDR Mini USB RTL-SDR (pictured right), which arrived the next day (thanks Amazon Prime). The RTL appears to be the same in both; the difference is that the former device had three aerials and an SMA input. The SMA is supposedly more versatile, but adapters are available. The aerial on the latter and cheaper unit isn't fabulous, but for my initial purpose it worked well.

Software - Windows

First off, if you bought a NooElec device, you'll need to follow the instructions for installing the correct drivers under Windows. There's another nice quick-start guide here.

SDRSharp | SDR#

Under Windows I was easily able to use the SDR Sharp (SDR#) software to tune into FM radio stations and amateur frequencies. It seems to be the general-purpose Windows radio software.

The important config points are that you select RTL-SDR from the Source and then select the cog icon at top to bring up the Device settings (right).

There are also HDSDR and CubicSDR, which appear to do the same job as SDR#. I did play around with them, but SDR# is the one I primarily use under Windows.

The moment that you need to do anything interesting, however, it seems that you need to switch to Linux. I can confirm that on Windows 10, dump1090 works very nicely with VirtualRadar and with almost no configuration required. This post isn't going to discuss setting all that up, it's well enough documented on the website.

Software - Linux

For a Windows-only person, this isn't as daunting as it sounds. You can run Linux as a Virtual Machine on your Windows system. A few steps are needed to get that going, starting with installing VirtualBox or VMware Player, both of which are free downloads. VirtualBox seems like less hassle, but I find VMware Player more flexible and useful. For this situation, VirtualBox is fine.

Also remember to connect the RTL device to the Virtual Machine (VMware screenshot):

Once you have a Linux system running, either as a Virtual Machine or natively installed, you might find the following packages useful to install (these are Debian/Ubuntu packages):
$ sudo apt-get install git cmake libqt5core5a libqt5dbus5 libqt5gui5 libqt5network5 libqt5svg5 libqt5widgets5 qt5-default gnuradio-audio gnuradio gnuradio-dev libgnuradio-audio3.7.10 open-vm-tools rtl-sdr gqrx-sdr


gqrx appears to be the Linux counterpart to Windows' SDR#. Naturally there's nothing new I can add to the existing config docs, but here is how I configured it for RTL-SDR:


The rtl_adsb tool (part of the rtl-sdr package) automatically picks up the RTL-SDR device and tunes to the ADS-B frequency, writing decoded frames to standard output. I recommend running it in a loop like the one below, which keeps the application alive and serves the data on a network port (in this example 10000) that another application can connect to:
$ while :; do rtl_adsb | netcat -lp 10000; done
We will use VirtualRadar to read the data.
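With that loop running, you can sanity-check the stream from another terminal. This is just my sketch, assuming port 10000 from the example above; each rtl_adsb frame should appear as a line of hex wrapped in '*' and ';':

```shell
# Connect to the netcat listener and peek at the first few ADS-B frames:
nc localhost 10000 | head -5
```

If you see hex lines scrolling past, the pipeline is alive and VirtualRadar has something to read.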

dump1090 is an alternative to rtl_adsb, but as it is not part of the default Debian package sources I had trouble getting it compiled (in fact I got it running well after the workshop).

Note also that ADSBSpy, which comes with SDRSharp, did not seem to be RTL-SDR compatible.


VirtualRadar will take the ADS-B data and render it against a Google Map. VirtualRadar doesn't care what the data source is, just so long as the source provides data that VirtualRadar can parse.

VirtualRadar runs well under Linux, although it requires 'mono' to run:
$ sudo apt-get install mono-complete
$ mkdir VirtualRadar
$ cd VirtualRadar/
$ wget
$ tar xvf VirtualRadar.tar.gz
$ mono VirtualRadar.exe
Pay attention to the VirtualRadar.exe.config edit that you may need to make described here.

The main panel looks something like the screenshot below. One clicks on the http link to see the Google Map with the detected planes mapped against it, but remember that your PC/laptop requires a web connection to render the Google Map:

Setting up the right "receiver" information is critical. Below is a configuration screenshot when using rtl_adsb as the data source. Notice that port 10000 matches the port used in the rtl_adsb example above.

What to Expect

If you live near an airport, even with an imperfect antenna you should receive 1090MHz signals easily. The further you are from an airport, though, the lower your chances of picking anything up. The ADS-B data is almost impossible to pick up indoors; it really favours an antenna located outside with line-of-sight.

If you want to build a perfect antenna for ADS-B, check this site out:

You Gotta Keep 'em Calibrated

Listening to a signal from a known precise station will tell you how far off your SDR is. My father, an amateur radio operator who was visiting me at the time, used an APRS channel to ascertain a -10,000Hz offset on my RTL-SDR. This will vary from device to device. He recommended amateur radio stations as reliable references because they maintain a somewhat self-regulated standard of precision. I guess you could also calibrate off a known FM radio station if you were sure of its exact frequency.
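For perspective, a fixed offset in Hz translates to a ppm error relative to the tuned frequency. Assuming the APRS channel was 144.390MHz (the North American standard; I don't know exactly which channel he used), the error works out like so:

```shell
# ppm error = (offset in Hz / tuned frequency in Hz) * 1e6
awk 'BEGIN { printf "%.1f ppm\n", -10000 / 144390000 * 1e6 }'
```

That's roughly -69 ppm, the sort of figure you'd plug into gqrx's frequency correction field or rtl_fm's -p option.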

What Next?

Good question. I have more to investigate. Stay tuned ;-)

Wednesday, September 14, 2016

What's With Your Identity Crisis? Or: How to Lose Friends in One Blog Post

In the last while, I've found my Facebook friend list has been hijacked by an assortment of randomly named strangers. People pop up in my feed who I have no recollection of befriending or even knowing. After some investigation, I always discover that the new stranger is someone I know who has lately adopted an obscure nom de plume.
Now U is invisibles (I hope I don't get sued for this)

As someone who has at least a grasp on how this Internet magic works, it depresses me that people believe they can escape the purview of social media giants and advertisers by simply changing their name. What people need to understand is that their name is right down at the bottom of the list of things that Facebook is interested in.

For sure, there are reasons why someone might make such a change in order to escape a stalker, abusive partner or workplace. On the rare occasions I've seen that happen, the person involved has closed their account entirely and started afresh with a pseudonymous account. Aside from those people, everyone else seems to be doing this under an assumption that changing your online name somehow makes you impossible to track.

Advertisers spanning all the popular social media platforms are tracking you regardless of whether you even have an account. How? Many ways, including Cookies, cross-domain (website) tracking, and misguided APIs like the ones that reveal your battery power or ambient light. In the old days, your browser used to ask you whether you wanted to accept a Cookie, but Cookies are now a critical part in making the online experience usable and it's rare to find a website that doesn't serve you multiple of them.

Cookies in themselves are not a bad thing. They keep you logged in, they help you fill your online shopping basket with junk, they help a website remember preferences such as language and search filters.

Here's a current Facebook disclaimer regarding Cookies, although I bet that you absent-mindedly dismissed the alert without reading it:
To help personalize content, tailor and measure ads, and provide a safer experience, we use cookies. By clicking or navigating the site, you agree to allow our collection of information on and off Facebook through cookies. Learn more, including about available controls: Cookies Policy.
"On and off Facebook" meaning Facebook Cookies will track you while you visit other sites. That's regardless of whether you're logged in to Facebook or even have an account. Not logged into Facebook? No problems, take this Cookie that will last longer than your computer/tablet/smartphone. A quick dive into your active cookie cache (yes, it's okay to call it a Cookie jar) will reveal several Facebook cookies with expiration dates 2 years into the future!

I'll say this one more time: Every popular social media platform is tracking you whether you are on or off their website and whether you have or have not got an account with them.
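If you want to see this for yourself, here's one way to peek inside the Firefox Cookie jar from a shell. The profile path is the usual Linux default, and it assumes the sqlite3 client is installed; adjust to taste:

```shell
# List Facebook cookies and their expiry dates from Firefox's cookie store:
sqlite3 ~/.mozilla/firefox/*.default*/cookies.sqlite \
  "SELECT host, name, datetime(expiry, 'unixepoch') FROM moz_cookies WHERE host LIKE '%facebook%';"
```

Note the expiry dates years into the future, regardless of whether you've logged in lately.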

Listen, nobody really cares what your name is. Nobody. You arrived early, your parents were backed into a corner by adoring grandparents (and nursing staff who, after the first day, started to refer to you as Little Baby No Name) and you got what you got.

Ever wondered why Facebook is so concerned about celebrating your birthday and anniversaries, finding out what books you've read and the bands you like, and falls over itself to help you geolocate your photos? It's not because you're a special flower with a beautiful name.

If you really want to confuse or at least slow down Facebook, there are techniques available. They are tedious and irritating. Think about it for a second: do you want to see adverts about "beauty secrets that doctors don't want you to know about"? Do you enjoy being constantly nagged about your belly fat? You and Facebook have come a long way; you've invested countless hours telling Facebook what you're interested in, and believe me, Facebook was listening.

Once again, is it really that bad to get adverts from retailers who have stuff you actually want to buy? Some entities do this well, but most are still rotten at it; a comically bad attempt was when I was served an eBay advert for an HTC phone - it was my own auction.

The same is true for Google of course. If, like me, you're alive, then most likely Google found a way to trick you into a Google+ account. Although it's their equivalent social-media product, their ad revenue stream doesn't rely on it (hey but who does rely on Google+?).

Dear Reader, Google+ aside, almost every one of you will have a Google login in some way for Gmail, Drive, Maps or some other service. Just like Facebook, or more accurately orders of magnitude more so than Facebook, Google want to know where you've been and where you're off to, they're not only interested in what you're doing while you're using their services.

It's sure nice of you to tell Google, Facebook, Twitter, LinkedIn, Instagranny and all those services about your age and gender, they certainly want that and it's interesting to your friends. But social media and advertisers are mining your email, messages and posts for data to render adverts to you. And by the way, regardless of how old you told them you were, they know if you lied because those activities betray you.

If you've carefully read, or most likely quickly scrolled down to this point for the methods to avoid tracking, then this will be the most unrewarding post you read this year. If so, please provide feedback via Facebook and Twitter by sharing this post. Additionally, please tell your one friend on Google+ how appalling this post was and the terrible things I said about you.

If you don't want to do privacy properly by using Tails, then you'll have to make a half-arsed attempt with the following techniques. For best results you have to do all of them:
  1. Always use your browser's private / privacy mode. The browser will warn you that your ISP can still see everything you're doing.
  2. Log out of every website after each visit. Not just Facebook! Log out of retailer sites (Amazon, eBay) after each visit. Log out of everything. Clear the Cookies. There are browser plugins for managing Cookies. Managing Cookies is hard.
  3. Tell your browser to clear Cookies when you close it. This should happen by default when in privacy mode browsing. Much like point 2, it's a pain because you lose your logged in sessions. Your shopping baskets for sites that you didn't log into will be cleared out too.
  4. Unlike all the things. Don't tell social media sites about yourself. Leave all your groups, quit all the fans pages and remove all the books, movies and artists you liked, read or listened to.
  5. Don't accept cookies by default. Witness the joy of browsing without them. Did I say joy? I meant horror.
  6. Replace your mobile device or PC after each use.
No, encrypting your sessions via HTTPS (or even Tor, for example) doesn't hide you from advertisers. HTTPS does make it hard for the NSA or GCHQ to see what you're up to. However, if you stick with plain old unencrypted HTTP, perhaps your local Government security agency will be able to better tailor your detainment experience for greater levels of discomfort.

A few years of limited rations while in detention will certainly help with that belly fat.

In short, you should cancel your Internet connection. I don't recommend changing your name via deed poll, but go ahead if you like. The point of this post is to make you feel like privacy is impossible on the Internet. You are not smart enough to defeat these organisations and even if you do, then enjoy the belly-fat adverts.

Wednesday, August 31, 2016

Permit samba to follow symbolic links to an unshared mount

I'm posting this because if you google for the answer you end up with out-of-date answers and the wrong commands.

What I wanted to do was simply make a symbolic link from my samba shared directory /storage to /home.
server:/storage# ls -l /storage/
lrwxrwxrwx  1 root    root     5 Mar 13 14:14 home -> /home
drwxr-x---  4 michael users 4096 Mar  9 00:33 music
drwxr-xr-x  5 michael users 4096 Mar  9 00:33 photos
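For reference, the link itself was created with something along these lines (as root, with /storage already set up as the share root):

```shell
# Point /storage/home at the real /home directory:
ln -s /home /storage/home
```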

The Solution

Samba won't let users follow a symbolic link if the link points to a place outside of (i.e. not under) the share defined in smb.conf. This kind of symbolic link is insecure because a user could set up a symbolic link to point to anywhere in the file system and attain access to it. Yep, that's pretty appalling if you have users who you don't know or trust. But this is my home network, so user level security is not a concern for me.

The smb.conf man page explains it like so:
Turning this parameter on when UNIX extensions are enabled will allow UNIX clients to create symbolic links on the share that can point to files or directories outside restricted path exported by the share definition. This can cause access to areas outside of the share. Due to this problem, this parameter will be automatically disabled (with a message in the log file) if the unix extensions option is on.
The solution didn't even require google; I should have just read the manual to start off with. Here are the amendments to the smb.conf global section:
        # default
        follow symlinks = yes
        # allow symlinks
        wide links = yes
        # Must be off for wide links
        unix extensions = no 
After restarting samba, no problem.
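As a sanity check after editing smb.conf, testparm is handy. A sketch; note that testparm only prints parameters that differ from the defaults, so "follow symlinks" won't appear:

```shell
# Validate smb.conf and confirm the non-default settings were picked up:
testparm -s 2>/dev/null | grep -Ei 'wide links|unix extensions'
```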

Samba Security

A couple of tips relating to security. I use passwords on my accounts and disable the root user. Also, I make sure that if someone does gain access to the home network (somehow getting our WPA PSK), then only the permitted (authenticated) users can mount the Samba share.

My eth1 is excluded from the bind interfaces because eth1 is attached to a cable modem, effectively connecting the machine directly to the rest of the world. I don't want the samba server to make itself available to the Internet.

Note: Setting up user passwords and authentication is not described below.

        invalid users = root
        interfaces = eth0 eth2 lo
        bind interfaces only = yes


[storage]
comment = Storage
path = /storage
valid users = usera userb
guest ok = No
read only = No
browseable = Yes
available = Yes

Friday, August 12, 2016

VMware Player: NAT and DHCP - Customisation and Problem Troubleshooting

If you want to change the NAT network that VMware Player uses, there are a few tasks to do. It's not straightforward, and for VMware Player some of the steps are not well documented.

NAT and DHCP should work out of the box with VMware Player, no configuration necessary. However, there may come a time that you need to change the network; there are a few good reasons and if you're reading this then you know at least one of them.

When you hunt around on the internet for answers on this, most of the responses say to just configure bridging. In a corporate environment, though, bridging may not make your network admins happy, or may simply fail, depending on how tightly controlled the network is. Bridging is just like adding another host onto the network; NAT hides the VM behind the VMware Host.

The following changes worked for me on Windows 7 with VMware Player 7.1.0. I'm not going into detail on the exact syntax of the config files because if you don't understand subnets and the concept of interface binding then this howto is not for you. 

Set Up the Interface

On your host, cd to the install directory (C:\Program Files (x86)\VMware\VMware Player) and run the following commands.

Alternatively, you can change adapter and services through the normal networking services control panels.

The instructions rely on vmnet8 being the adapter for NAT. By default, VM Player installs two adapters, vmnet1 for the "Host-only" interface type and vmnet8 for the "NAT" interface type.

On Windows 7 and 10 at least, the vnetlib commands have to be run with elevated privileges.
vnetlib.exe -- stop nat
vnetlib.exe -- stop dhcp
vnetlib.exe -- set vnet vmnet8 mask
vnetlib.exe -- set adapter vmnet8 addr
vnetlib.exe -- update dhcp vmnet8
vnetlib.exe -- update nat vmnet8
vnetlib.exe -- update adapter vmnet8
vnetlib.exe -- start dhcp
vnetlib.exe -- start nat
Note: These changes may lead to your config files (mentioned later) being re-written.

Configure DHCP and NAT

Note that even if you intend to only use NAT with no DHCP, you must update the DHCP settings to match the new interface subnet range. I found that if there was something incorrect in either the NAT or DHCP config file, the config files reverted to the defaults. I am not exactly sure what sequence led to that, but I do know that it was a result of having mismatched subnets in the configs.

On VMware Player, there appears to be no GUI for configuring the DHCP or NAT services. VMware Workstation does seem to have a GUI.

Edit these files instead (they live under C:\ProgramData\VMware):

vmnetdhcp.conf
vmnetnat.conf
In the NAT file, don't mess with the vmnet1 settings unless you intend to change the Host-only interface type. I've actually disabled vmnet1 via the Windows Control Panel; you don't need vmnet1 in my experience.

Then restart DHCP and NAT via the method above, or go to the services panel in windows and do it there.
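Restarting from an elevated cmd prompt can be sketched like this; the service names here are as they appear on my install and may differ on yours (check services.msc):

```
:: From an elevated Windows cmd prompt; service names may vary by version:
net stop "VMware NAT Service"
net stop "VMnetDHCP"
net start "VMnetDHCP"
net start "VMware NAT Service"
```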

The MAC that your VM will connect to is found in the vmnetnat-mac.txt file:

Now you should be able to reach what we call in the industry "the internet", as long as you have routing and DNS set up properly on your Host and VM.

You can use Wireshark to capture on vmnet8 and check that packets from your VM to the Host are using the destination MAC specified in vmnetnat-mac.txt. Check that your VM has resolved the .2 address to this MAC and that this is the default route.

There is a hostMAC stanza in the vmnetnat.conf file that is actually telling the NAT service what the interface MAC of the NAT vmnet device is. That MAC is actually ignored by NAT. hostMAC seems to be there so that your Host to VM communication (ssh, https etc) is excluded from NAT. You will see this in the event log:
Using configuration file: C:\ProgramData\VMware\vmnetnat.conf.
IP address:
External IP address:
Device: vmnet8.
MAC address: 00:50:56:F1:77:9F.
Ignoring host MAC address: 00:50:56:C0:00:08.
The "MAC address" above is what your VM uses as the gateway:
# arp -an
? ( at 00:50:56:f1:77:9f [ether] on eth0
In other words, the IP .1 is for host to VM communication and .2 is for NAT traffic.  

VMNetDHCP Errors in Event Viewer 

No subnet declaration for VMnet8 (  Please write a subnet declaration for the network segment to which interface VMnet8 is attached.
The network interface vmnet8 is in a different subnet to what you've configured in the DHCP config file.
Address range to, netmask spans multiple subnets!
You've written something pretty odd in the DHCP config file.
Important! Errors can lead to your config files being re-written.

VMware Player 7.1.4 Upgrade

After installing VMware Player 7.1.4 I could no longer connect from my workstation to the machines in the VM network.

Note that something weird is up with this install: after the install process VMware Player insists that the version is 6.0.7 build-2844087, yet when I run the "Help -> Software Updates" process it declares that my version is up to date (7.1.4 is the latest).

The workstation's VMnet8 adapter was statically set by the vnetlib.exe command earlier. But it is also specified in vmnetdhcp.conf. Although the adapter had a static address, it appeared to be using a "" address, which generally indicates a failure to pick up a DHCP address. 

Switching the adapter setting back to DHCP immediately picked up the address defined in vmnetdhcp.conf, namely:
host VMnet8 {
    hardware ethernet 00:50:56:C0:00:08;
    option domain-name-servers;
    option domain-name "";
    option routers;
}
Things were working again after that.

Previous Upgrade Problems

On an upgrade to 7.1.3 build-3206955 I could SSH to my VMs, but they could not get out to the rest of the world, so it looked like a NAT problem.

The ARP table on the VM revealed that it was using the actual MAC of the VMnet8 interface for the .2 (NAT) address. The ARP for .2 should have had a virtual MAC for that virtual NAT IP. I call it virtual because the IP itself doesn't exist in the configuration of any interface on the Host (Windows in this case).

There were three problems. The first two were related to the upgrade and VMware stomping over my custom config.

First, the DHCP stanza of the VMnet8 binding did not overlap the "Virtual ethernet segment 8" range. The subnet range had reverted to the default.

Second, the MAC of the NAT address had changed and may not have been represented correctly in vmnetnat-mac. I'm actually not 100% certain what happened here, because I think that file is automatically updated after a VMware NAT service restart, meaning that it's not actually configurable, just informational.

My corrected vmnetdhcp.conf:
# Virtual ethernet segment 8
# Added at 11/27/15 13:02:31
subnet netmask {
range; # default allows up to 125 VM's
option broadcast-address;
option domain-name-servers;
option domain-name "localdomain";
option netbios-name-servers;
option routers;
default-lease-time 1800;
max-lease-time 7200;
}
host VMnet8 {
hardware ethernet 00:50:56:C0:00:08;
option domain-name-servers;
option domain-name "";
option routers;
}
# End
The third problem was that I had statically set the IP address of "VMware Virtual Ethernet Adapter for VMnet" in Windows, which is a bad idea because it should be left at DHCP and then controlled via the vmnetdhcp.conf file. Make sure that the "hardware ethernet" MAC is correct (use "ipconfig /all" to see the MAC in Windows cmd prompt).
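To cross-check, something like this from a Windows cmd prompt lists the adapter MACs; the output will include other adapters too, so read the Physical Address under the VMnet8 entry:

```
ipconfig /all | findstr /C:"VMnet8" /C:"Physical Address"
```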

Something a bit odd. My Wireshark doesn't see the VMnet interfaces anymore. I can't capture on them. I don't know whether that's related to Wireshark, Windows or VMPlayer though.

Thursday, August 4, 2016

Wireshark's weird ESP dissection

I recently observed Wireshark telling me obviously false information about the contents of ESP payloads. While the fix to that was trivial, the information learned in the process was worth noting down.
Wireshark was parsing many ESP payloads in the pcap and trying to make sense of the data therein. The result was columns of nonsensical frames: antique protocols interspersed with more recognisable ones.

ESP, like AH, encapsulates the data between hosts communicating over an IPsec connection. There is no way Wireshark could have known what the contents were, because the Security Associations were established to use encryption. You can tell Wireshark the keys behind the SPIs, as long as the ciphers match a supported set.

The reason that this was happening was due to this setting being enabled:
Edit -> Preferences -> Protocols -> ESP -> "Attempt to detect/decode NULL encrypted ESP payloads"
It's off by default so apparently I'd enabled this long ago and completely forgotten.

The "Personal configuration" config file behind user preferences can also be easily seen by going to:
Help -> About Wireshark -> Folders
Specifically, for the setting under discussion, this is the default:
# This is done only if the Decoding is not SET or the packet does not belong to a SA. Assumes a 12 byte auth (HMAC-SHA1-96/HMAC-MD5-96/AES-XCBC-MAC-96) and attempts decode based on the ethertype 13 bytes from packet end
# TRUE or FALSE (case-insensitive)
#esp.enable_null_encryption_decode_heuristic: FALSE
This is what I had:
esp.enable_null_encryption_decode_heuristic: TRUE
One fascinating aspect is, as I infer from the comment in the prefs file, that if the packet capture doesn't include the ESP negotiation (IKE phase 2), then Wireshark assumes that the ESP is using NULL encryption. If the first bytes of the ESP payload then match a protocol, the invoked protocol dissector will valiantly pick through an ESP stream-of-consciousness and will (often) throw up its hands, declaring the payload of the triggered protocol invalid. Sometimes ESP just appears as ESP, because the payload matches no protocol known to Wireshark.
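As an aside, the same preference can be toggled per-run from the command line rather than in the saved prefs; a sketch, where capture.pcap stands in for your trace file:

```shell
# Override the heuristic for this run only, leaving the saved preference alone:
tshark -o esp.enable_null_encryption_decode_heuristic:FALSE -r capture.pcap
```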

I think this would work much better as a right-click option in Wireshark's "packet list" pane to 'Decode ESP stream as NULL encryption'. That would avoid pointlessly attempting to decode every ESP packet in a capture and, as a result, speed up load times when opening a pcap with this user preference in place.

With Wireshark, beautiful product that it is, you get what you're given!

Thursday, July 28, 2016

IPsec Between Openswan and Windows Using Certificates

In this post we configure Openswan and Windows 7 (or Vista) to bring up an IPsec + L2TP tunnel. One point of difference is that we focus on using certificates to facilitate secure IPsec connectivity. Put another way, we use RSA certs on phase1, rather than a pre-shared key. Configuring and maintaining this properly is not easy.

We will also look at how the Windows Vista and Windows 7 clients are configured with certificates (spoiler: it's the same way for both clients).

I'm a bit of an Openswan fan. I don't exactly know why, since strongSwan seems to be more feature rich. Maybe I defer to Openswan because it's easily installed through any Linux distro's package manager. Or perhaps because I started muddling around with it back in 2005. Anyway, a lot of what is written here will carry over to strongSwan.

Openswan seems to lack features that strongSwan offers; the ability to configure lifetime bytes (lifebytes) was one Openswan limitation I most recently bumped up against.

If there's anything in all this that isn't clear, then I'm happy to field comments or questions.

If you're wondering about the Oklahoma references later, it's because yesterday I watched the 1940 adaptation of The Grapes of Wrath.

Server Version

# cat /etc/debian_version




Simply take the example configuration and amend it in a few places. Insert the contents of this file into /etc/ipsec.conf. You can just cat the file, appending it to ipsec.conf. For example:
cat l2tp-cert.conf >> /etc/ipsec.conf
The examples directory contains lots of handy files:
# ls -la /etc/ipsec.d/examples/
total 44
drwxr-xr-x  2 root root 4096 Mar  9 20:12 .
drwxr-xr-x 10 root root 4096 Mar  9 20:12 ..
-rw-r--r--  1 root root 1659 May 27  2012 hub-spoke.conf
-rw-r--r--  1 root root 1017 May 27  2012 ipv6.conf
-rw-r--r--  1 root root 1736 May 27  2012 l2tp-cert.conf
-rw-r--r--  1 root root 1825 May 27  2012 l2tp-psk.conf
-rw-r--r--  1 root root 1156 May 27  2012 linux-linux.conf
-rw-r--r--  1 root root 1580 May 27  2012 mast-l2tp-psk.conf
-rw-r--r--  1 root root  235 May 27  2012 oe-exclude-dns.conf
-rw-r--r--  1 root root 1694 May 27  2012 sysctl.conf
-rw-r--r--  1 root root  664 May 27  2012 xauth.conf

The following example config has all the helpful comments snipped out for brevity:

# cat /etc/ipsec.conf
version 2.0     # conforms to second version of ipsec.conf specification
# basic configuration
config setup
        #plutodebug="control parsing"

conn l2tp-X.509
        # This is your server's IP:
        # You will create this certificate:

conn passthrough-for-non-l2tp
        # This is your server's IP:
        # This is your server's gateway:
Make sure that nat_traversal is on.

You need to provide the IP address of the interface that the IPsec connections will arrive on. If your IPsec server is behind a NAT (as in the diagram and example config), then this will be your private network IP and not the public IP.

The leftnexthop is the nexthop address for your openswan server. Typically you can determine that as such: 
# route -n 
Destination   Gateway        Genmask        Flags Metric Ref Use Iface        UG    0      0   0   eth0  U     0      0   0   eth0

In the above example, the gateway of the default route is the nexthop IP we are after.

If you put your server behind a NAT then you need to open UDP ports 500 and 4500 on your modem and pinhole those ports directly to the internal Openswan server IP (that you specified as "left"). This is a function frequently supported on even the most basic home routers.

Openswan - Special Notes 

In IPsec configurations, normally you want "tunnel" over transport, but this is one of the rare times that "transport" is necessary.
That's the same CA as on the left (server) side. You specify this to indicate that only certificates signed by your IPsec CA are valid. You don't want anyone with a certificate signed by just any CA to get access!


HowTo documents often recommend setting up /etc/xl2tpd/l2tp-secrets, but this did not seem to work for me. In the end I needed to configure the /etc/ppp/chap-secrets file.

# cat /etc/ppp/chap-secrets
# Secrets for authentication using CHAP
# client   server                   secret   IP addresses
# *        *                        password
ipsec-user password
The beauty of this is that all four settings are needed; nothing is wide open. The username "ipsec-user" and the password "password" are filled in on the Windows client in the initial connection dialogue box. The "IP addresses" field is the server's local range.
# cat /etc/xl2tpd/xl2tpd.conf
; Output trimmed, generally
[global]                 ; Global parameters:
 port = 1701             ; * Bind to port 1701
 access control = no     ; * Refuse conn without IP match
[lns default]            ; Our fallthrough LNS definition
 exclusive = yes         ; * Permit one tunnel per host
 ip range = ; Allocate
 hidden bit = no             ; * Use hidden AVP's?
 local ip =   ; * Our local IP to use
 length bit = yes            ; * Use length bit in payload?
 require chap = yes          ; * Require CHAP auth. by peer
 refuse pap = yes            ; * Refuse PAP authentication
 name =  ; * Use as hostname
 ppp debug = no                   ; * Turn on PPP debugging
 pppoptfile = /etc/ppp/options.l2tpd.lns  ; * ppp options
You need to take note of the "ip range", "local ip" and "name". In this example, the "ip range" comes from the same local IP network as the IPsec listening interface is on. This was selected as a matter of routing convenience.
# cat /etc/ppp/options.l2tpd.lns
logfd 2
logfile /var/log/xl2tpd.log
mtu 1400
mru 1400
lcp-echo-failure 12
lcp-echo-interval 5
If you have a local router, point ms-dns at it; if not, use whatever DNS address you normally use on your network.
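For example, two ms-dns lines appended to /etc/ppp/options.l2tpd.lns (both addresses are hypothetical placeholders: a local router and a public resolver):

```
ms-dns 192.168.1.1
ms-dns 8.8.8.8
```

Windows clients will use these for name resolution while the VPN is up.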

Openssl - Config  

# ls -la /usr/lib/ssl/

lrwxrwxrwx  1 root root    20 Feb  1 23:16 openssl.cnf -> /etc/ssl/openssl.cnf

Set defaults in /etc/ssl/openssl.cnf so you aren't retyping the same DN fields for every certificate request:

       countryName_default             = US
       stateOrProvinceName_default     = Oklahoma
       0.organizationName_default      = My Industries

Openssl - Server CA

Create a 10 year certificate authority (CA) specifically for IPsec.
# cd /etc/ipsec.d
# openssl req -x509 -newkey rsa:2048 -keyout private/caKey-ipsec-server.pem -out cacerts/caCert-ipsec-server.pem -days 3650

Openssl - Server Certificate

Now that we have a CA, we need to generate server certificates and have them signed by that CA. This sounds like two steps when one would do, but you may need to revoke your server certificate one day.
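One step the signing commands below assume has already happened is generating the server key and certificate request (CSR). A hedged sketch of that step, written with relative paths so it runs anywhere — on the real server you would work in /etc/ipsec.d, and the -subj/-passout values here are example stand-ins for the interactive prompts:

```shell
# Generate a passphrase-protected 2048-bit server key and a CSR.
# The filenames match what the signing command below expects.
mkdir -p private
openssl req -newkey rsa:2048 \
    -keyout private/ipsec-serverKey.pem \
    -out private/ipsec-serverReq.pem \
    -passout pass:changeme \
    -subj "/C=US/ST=Oklahoma/O=My Industries/CN=ipsec-server"
```

The `openssl rsa` command further below strips the passphrase from this key so Openswan can load it unattended.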
# openssl ca -in /etc/ipsec.d/private/ipsec-serverReq.pem -days 3650 -out /etc/ipsec.d/private/ipsec-serverCert.pem -notext -cert /etc/ipsec.d/cacerts/caCert-ipsec-server.pem -keyfile /etc/ipsec.d/private/caKey-ipsec-server.pem
# openssl rsa -in /etc/ipsec.d/private/ipsec-serverKey.pem -out ipsec-serverKey-rsa.pem

Openssl - Server CRL

Generate a Certificate Revocation List (CRL), which at this stage will have no revoked certificates. Later on, you will see how to easily revoke a signed certificate and add that certificate to this CRL.
# openssl ca -gencrl -keyfile /etc/ipsec.d/private/caKey-ipsec-server.pem -cert /etc/ipsec.d/cacerts/caCert-ipsec-server.pem -out /etc/ipsec.d/crls/ipsec-server.crl

Openssl - SSL Client Certificates 

The easiest method (in my opinion) is to issue all certificates from the Openswan server itself. I created a script (see end of post) to step through the certificate generation, because the process is unnecessarily painful.

When you run this script, you'll be prompted for a series of passwords: one for the CA's private key, the other for the new private client key you are creating.

You should create a password for your client key. When you import the key into your Windows OS, you have to enter the password once, but other people will not be able to export and use the key elsewhere unless they know the password. In other words, there is little overhead in using a passworded key, but putting a password on the key adds an extra layer of difficulty for anyone trying to steal your private key from the local system.
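Condensed into its essential openssl calls, the flow the script automates looks roughly like this. This is a hedged sketch, not the actual script: the passphrases and DN values are example stand-ins for the interactive prompts, the scratch CA stands in for caKey/caCert-ipsec-server.pem, and `openssl x509 -req` is used here for brevity where the real flow uses `openssl ca` so the cert lands in the CA index for later revocation:

```shell
NAME=testguy
mkdir -p clientcerts/$NAME
# Scratch CA standing in for the IPsec CA created earlier
openssl req -x509 -newkey rsa:2048 -days 3650 -nodes \
    -keyout caKey.pem -out caCert.pem \
    -subj "/C=US/O=My Industries/CN=ipsec-ca"
# 1. New client key (passphrase-protected) plus certificate request
openssl req -newkey rsa:2048 \
    -keyout clientcerts/$NAME/${NAME}Key.pem \
    -out clientcerts/$NAME/${NAME}Req.pem \
    -passout pass:clientpw \
    -subj "/C=US/O=My Industries/CN=$NAME"
# 2. Sign the request with the CA (10 years, like the server cert)
openssl x509 -req -in clientcerts/$NAME/${NAME}Req.pem -days 3650 \
    -CA caCert.pem -CAkey caKey.pem -CAcreateserial \
    -out clientcerts/$NAME/${NAME}Cert.pem
# 3. Bundle key and cert as PKCS#12 for import on the Windows client
openssl pkcs12 -export \
    -in clientcerts/$NAME/${NAME}Cert.pem \
    -inkey clientcerts/$NAME/${NAME}Key.pem \
    -passin pass:clientpw -passout pass:clientpw \
    -out clientcerts/$NAME/${NAME}Key.p12
```

The .p12 bundle at the end is the one file the Windows client actually needs.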

Below, watch for my comments:
# New PK password
# CA password
# Enter
  • Where "New PK password" is for the new client key that you are generating.
  • Where "CA password" is the password for the CA private key.
  • Where "Enter" means do nothing, just hit the enter key.
Here we go:
# ./ testguy
Generating a 2048 bit RSA private key
writing new private key to '/etc/ipsec.d/clientcerts/testguy/testguyKey.pem'
Enter PEM pass phrase:               # New PK password
Verifying - Enter PEM pass phrase:   # New PK password
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [US]:
State or Province Name (full name) [Oklahoma]:
Locality Name (eg, city) []:
Organization Name (eg, company) [My Industries]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:testguy  # Recommend using the client hostname
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:       # Enter
An optional company name []:   # Enter
Using configuration from /usr/lib/ssl/openssl.cnf
Enter pass phrase for /etc/ipsec.d/private/caKey-ipsec-server.pem:   # CA password
Check that the request matches the signature
Signature ok
Certificate Details:
Serial Number: 13 (0xd)
Not Before: Mar 23 20:43:19 2014 GMT
Not After : Mar 20 20:43:19 2024 GMT
countryName   = US
stateOrProvinceName   = Oklahoma
organizationName  = My Industries
commonName= testguy
X509v3 extensions:
X509v3 Basic Constraints:
Netscape Comment:
OpenSSL Generated Certificate
X509v3 Subject Key Identifier:
X509v3 Authority Key Identifier:
Certificate is to be certified until Mar 20 20:43:19 2024 GMT (3650 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
Enter pass phrase for /etc/ipsec.d/clientcerts/testguy/testguyKey.pem:# New PK password
Enter Export Password:               # New PK password
Verifying - Enter Export Password:   # New PK password
Import /etc/ipsec.d/clientcerts/testguy/testguyKey.p12 to your Windows client.
total 24
drwxr-xr-x 2 root root 4096 Mar 23 21:42 .
drwxr-xr-x 3 root root 4096 Mar 23 21:42 ..
-rw-r--r-- 1 root root 1322 Mar 23 21:43 testguyCert.pem
-rw-r--r-- 1 root root 3526 Mar 23 21:43 testguyKey.p12
-rw-r--r-- 1 root root 1834 Mar 23 21:43 testguyKey.pem
-rw-r--r-- 1 root root  968 Mar 23 21:43 testguyReq.pem
I had written most of this post before Heartbleed broke in April 2014. Until then, I hadn't worried about digging into certificate revocation. In reality, as a home user with an IPsec server running intermittently, I could count myself particularly unlucky if someone had not only targeted my server with a Heartbleed attack but also managed to extract private keys or passwords. Note also that my machine was patched within 24 hours of the Heartbleed announcement.

Still, there will always be a need to quickly revoke a certificate, for example if I thought my Windows machine had been compromised.

In that context, the script (see end of post) needed to be enhanced to automate the revocation of any certificate signed by the IPsec CA. Note that the script cannot revoke a certificate signed by an external CA: if you generated a signed cert with this script, then you can revoke that cert with this script.

Also note that if you sign a client certificate and subsequently generate a new CA certificate, you can still revoke the previously signed client certificate using the new CA certificate.
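Under the hood, revocation boils down to two `openssl ca` calls: one to mark the cert revoked in the CA index, one to regenerate the CRL. Here is a self-contained sketch with a throwaway CA — the directory layout, config file, and names are invented for the demo; on the real server the CA key and cert are the caKey/caCert-ipsec-server.pem files from earlier and the CRL goes to /etc/ipsec.d/crls/ipsec-server.crl:

```shell
# Scratch CA layout mirroring /etc/ssl/demoCA
mkdir -p demoCA/newcerts
touch demoCA/index.txt
echo 1000 > demoCA/serial
cat > ca.cnf <<'EOF'
[ ca ]
default_ca       = myca
[ myca ]
dir              = ./demoCA
database         = $dir/index.txt
new_certs_dir    = $dir/newcerts
serial           = $dir/serial
certificate      = ./caCert.pem
private_key      = ./caKey.pem
default_md       = sha256
default_days     = 3650
default_crl_days = 30
policy           = mypolicy
[ mypolicy ]
commonName       = supplied
EOF
# A throwaway CA and one client cert to revoke
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout caKey.pem -out caCert.pem -subj "/CN=ipsec-ca"
openssl req -newkey rsa:2048 -nodes \
    -keyout clientKey.pem -out clientReq.pem -subj "/CN=roadwarrior"
openssl ca -config ca.cnf -batch -notext -in clientReq.pem -out clientCert.pem
# The two calls that matter: mark the cert revoked, then regenerate the CRL
openssl ca -config ca.cnf -revoke demoCA/newcerts/1000.pem
openssl ca -config ca.cnf -gencrl -out ipsec-server.crl
```

On the real server, the -keyfile/-cert arguments from the earlier CRL-generation command take the place of the config-file CA here, and you must restart IPsec after replacing the CRL.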

After revoking the client certificate and restarting IPsec, new connection attempts with the revoked cert will have this signature in the "ipsec barf" logs:
May  3 21:43:43 ipsec-server pluto[6922]: "l2tp-X.509"[1] #1: Main mode peer ID is ID_USR_ASN1_DN: 'C=US, ST=Oklahoma, O=My Industries, CN=roadwarrior'
May  3 21:43:43 ipsec-server pluto[6922]: "l2tp-X.509"[1] #1: certificate was revoked on May 03 19:42:27 UTC 2014
May  3 21:43:43 ipsec-server pluto[6922]: "l2tp-X.509"[1] #1: X.509 certificate rejected
May  3 21:43:43 ipsec-server pluto[6922]: "l2tp-X.509"[1] #1: no suitable connection for peer 'C=US, ST=Oklahoma, O=My Industries, CN=roadwarrior'
May  3 21:43:43 ipsec-server pluto[6922]: "l2tp-X.509"[1] #1: sending encrypted notification INVALID_ID_INFORMATION to 93.x.x.x:61421
May  3 21:43:48 ipsec-server pluto[6922]: "l2tp-X.509"[1] #1: Main mode peer ID is ID_USR_ASN1_DN: 'C=US, ST=Oklahoma, O=My Industries, CN=roadwarrior'
You can't revoke a certificate unless you have it to hand. I'd deleted a lot of certs during my testing and discovered that the index or "database" of issued certificates in /etc/ssl/demoCA/index.txt was populated with many certs that I no longer had. There appears to be no way to revoke a signed certificate by the serial number alone.

However, because the script (see end of post) uses the default openssl settings, each signed certificate is also copied into /etc/ssl/demoCA/newcerts/ and named by its serial number. As a result, we can revoke by serial number by finding the cert in that directory.
# ls -la /etc/ssl/demoCA/newcerts/
total 64
drwxr-xr-x 2 root root 4096 May  3 22:27 .
drwxr-xr-x 4 root root 4096 May  3 22:27 ..
-rw-r--r-- 1 root root 1744 Mar 17 20:32 01.pem
-rw-r--r-- 1 root root 1724 Mar 17 20:38 02.pem
-rw-r--r-- 1 root root 1724 Mar 17 20:49 03.pem
-rw-r--r-- 1 root root 1342 Mar 17 22:28 04.pem
-rw-r--r-- 1 root root 1342 Mar 17 22:29 05.pem
-rw-r--r-- 1 root root 1322 Mar 17 22:39 06.pem
-rw-r--r-- 1 root root 1342 Mar 17 23:38 07.pem
-rw-r--r-- 1 root root 1322 Mar 18 21:06 08.pem
-rw-r--r-- 1 root root 1367 Mar 18 23:09 09.pem
-rw-r--r-- 1 root root 1330 Mar 23 14:21 0A.pem
-rw-r--r-- 1 root root 1322 Mar 23 21:40 0B.pem
-rw-r--r-- 1 root root 1322 Mar 23 21:42 0C.pem-revoked
-rw-r--r-- 1 root root 1322 Mar 23 21:43 0D.pem
-rw-r--r-- 1 root root 1330 May  3 22:27 0E.pem

# cat /etc/ssl/demoCA/index.txt
V   240314193233Z   01  unknown /C=US/ST=Oklahoma/O=My Industries/
V   240314193805Z   02  unknown /C=US/ST=Oklahoma/O=My Industries/CN=toshiba
V   240314194925Z   03  unknown /C=US/ST=Oklahoma/O=My Industries/CN=toshiba
V   240314212827Z   04  unknown /C=US/ST=Oklahoma/O=My Industries/
V   240314212952Z   05  unknown /C=US/ST=Oklahoma/O=My Industries/
R   240314213921Z   140503193859Z   06  unknown /C=US/ST=Oklahoma/O=My Industries/CN=toshiba
R   240314223758Z   140503194035Z   07  unknown /C=US/ST=Oklahoma/O=My Industries/
V   240315200605Z   08  unknown /C=US/ST=Oklahoma/O=My Industries/CN=hnzlwin7
R   240320132104Z   140503194227Z   0A  unknown /C=US/ST=Oklahoma/O=My Industries/CN=roadwarrior
V   240320204055Z   0B  unknown /C=US/ST=Oklahoma/O=My Industries/CN=testguy
R   240320204223Z   140503200045Z   0C  unknown /C=US/ST=Oklahoma/O=My Industries/CN=testguy
R   240320204319Z   140503152636Z   0D  unknown /C=US/ST=Oklahoma/O=My Industries/CN=testguy
V   240430202749Z   0E  unknown /C=US/ST=Oklahoma/O=My Industries/CN=roadwarrior
To verify the loading of the CRL, "ipsec barf" will give you something like this:
000 May 03 21:43:35 2014, revoked certs: 4
000issuer:  'C=US, ST=Oklahoma, O=My Industries,'
000distPts: 'file:///etc/ipsec.d/crls/ipsec-server.crl'
000updates:  this May 03 21:42:30 2014
000  next Jun 02 21:42:30 2014 ok
and this...
May  3 21:43:35 ipsec-server pluto[6922]: Changing to directory '/etc/ipsec.d/crls'
May  3 21:43:35 ipsec-server pluto[6922]:   loaded crl file 'ipsec-server.crl' (759 bytes)
May  3 21:43:35 ipsec-server pluto[6922]: loading certificate from /etc/ipsec.d/certs/ipsec-serverCert.pem
May  3 21:43:35 ipsec-server pluto[6922]:   loaded host cert file '/etc/ipsec.d/certs/ipsec-serverCert.pem' (1342 bytes)
The output of the script (when revoking) will look something like this:
# ./ /etc/ssl/demoCA/newcerts/02.pem
/etc/ssl/demoCA/newcerts/02.pem exists. Do you want to revoke this Certificate? [y/N]: y
Using configuration from /usr/lib/ssl/openssl.cnf
Enter pass phrase for /etc/ipsec.d/private/caKey-ipsec-server.pem:
Revoking Certificate 02.
Data Base Updated
Using configuration from /usr/lib/ssl/openssl.cnf
Enter pass phrase for /etc/ipsec.d/private/caKey-ipsec-server.pem:
Certificate Revocation List (CRL):
Version 2 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: /C=US/ST=Oklahoma/O=My Industries/
Last Update: May  3 22:21:19 2014 GMT
Next Update: Jun  2 22:21:19 2014 GMT
CRL extensions:
X509v3 CRL Number:
Revoked Certificates:
Serial Number: 02
Revocation Date: May  3 22:21:15 2014 GMT
Serial Number: 06
Revocation Date: May  3 19:38:59 2014 GMT
Serial Number: 07
Revocation Date: May  3 19:40:35 2014 GMT
Serial Number: 09
Revocation Date: May  3 19:46:03 2014 GMT
Serial Number: 0A
Revocation Date: May  3 19:42:27 2014 GMT
Serial Number: 0C
Revocation Date: May  3 20:00:45 2014 GMT
Serial Number: 0D
Revocation Date: May  3 15:26:36 2014 GMT
Serial Number: 0F
Revocation Date: May  3 22:00:11 2014 GMT
Serial Number: 10
Revocation Date: May  3 22:02:53 2014 GMT
Signature Algorithm: sha1WithRSAEncryption
Done, check for errors above. YOU MUST RESTART IPSEC!
-rw-r--r-- 1 root root 1724 Mar 17 20:38 /etc/ssl/demoCA/newcerts/02.pem-revoked
My advice is to make sure you have (R)evoked anything you aren't sure about. At the bare minimum you will have one (V)alid certificate for your server and one (V)alid certificate for an IPsec client.
# cat /etc/ssl/demoCA/index.txt
R   240314193233Z   140503223446Z   01  unknown /C=US/ST=Oklahoma/O=My Industries/
R   240314193805Z   140503222115Z   02  unknown /C=US/ST=Oklahoma/O=My Industries/CN=toshiba
R   240314194925Z   140503223056Z   03  unknown /C=US/ST=Oklahoma/O=My Industries/CN=toshiba
R   240314212827Z   140503223459Z   04  unknown /C=US/ST=Oklahoma/O=My Industries/
V   240314212952Z   05  unknown /C=US/ST=Oklahoma/O=My Industries/
R   240314213921Z   140503193859Z   06  unknown /C=US/ST=Oklahoma/O=My Industries/CN=toshiba
R   240314223758Z   140503194035Z   07  unknown /C=US/ST=Oklahoma/O=My Industries/
R   240315200605Z   140503223122Z   08  unknown /C=US/ST=Oklahoma/O=My Industries/CN=hnzlwin7
R   240315220911Z   140503194603Z   09  unknown /C=US/ST=Oklahoma/O=My Industries/CN=hnzlwin7/
R   240320132104Z   140503194227Z   0A  unknown /C=US/ST=Oklahoma/O=My Industries/CN=roadwarrior
R   240320204055Z   140503223138Z   0B  unknown /C=US/ST=Oklahoma/O=My Industries/CN=testguy
R   240320204223Z   140503200045Z   0C  unknown /C=US/ST=Oklahoma/O=My Industries/CN=testguy
R   240320204319Z   140503152636Z   0D  unknown /C=US/ST=Oklahoma/O=My Industries/CN=testguy
V   240430202749Z   0E  unknown /C=US/ST=Oklahoma/O=My Industries/CN=roadwarrior
R   240430215836Z   140503220011Z   0F  unknown /C=US/ST=Oklahoma/O=My Industries/CN=nogood
R   240430220142Z   140503220253Z   10  unknown /C=US/ST=Oklahoma/O=My Industries/CN=nogoodagain
With this script, it's easy to add and revoke a certificate, so there's no excuse to compromise security on the grounds of complexity.


  1. Using a Linux L2TP/IPsec VPN server with Windows Vista 
  2. Github:

Wednesday, July 20, 2016

HowTo: Configure Exim to scan zip attachments for malicious files

This post provides a quick way of adding zip attachment scanning to Exim. I've drawn this example from a number of questions across the web and hopefully enhanced the experience by sharpening up the scripting and explaining in more detail what is happening.

I'd intended today to fold this information into the Making a Mailserver (Part 5) - Wonderful Spam part of my series on setting up a personal mailserver. However, it seemed better to split this howto out as a separate entity to avoid confusion and to lessen the difficulty of an already complex series.

We will only scan the contents listing of the zip file and look at file extensions. We won't extract the files or scan their contents in any way. Extracting the files and scanning for filetypes, or passing them through an anti-virus application, is a better way of identifying malicious files, but scanning files is a CPU-intensive activity and fraught with new complications, such as whether your anti-virus application just introduced a new attack vector into your environment. I'm looking at you, Symantec.

The reality is that mass email Malware (not Spear Phishing for example) relies on the fact that people blindly double-click on the file they received, rather than following complicated manoeuvres to rename or open a file according to the attacker's in-message instructions. In that light, simply checking the extensions within a zip file is good enough for me.

Dropping emails with files that are not zipped

This is easily customised with Exim's ACLs.

deny message = Please don't email me attachments 
 demime = bat:btm:cmd:com:cpl:dll:exe:lnk:msi:pif:prf:reg:scr:vbs:url:wsf:docm:hta:jse
Watch out for the line wrapping above: that should be just two lines, starting "deny message ..." and "demime ...". It should be obvious how to add or remove extensions from demime.

This is a hard drop. The sender is told to go away, rather than to try again later.

Each demime entry is the file extension of a file attachment. Notice that I have not put zip, tar and tgz in there. Zip files we check next.

Zip files are still a normal part of day-to-day email, so we can't just drop them. DMARC XML reports, for example, are sent as zip attachments.

Dropping emails with zipfiles that contain malicious attachments

Again, we are in the check_data ACL.

deny message = Please don't email me attachments
     log_message = DENY: zip with blocked content
     demime      = zip
     condition   = ${run{/etc/exim4/ $message_id}{0}{1}}
Watch out for line wrapping: this is a four-line stanza, and the last line is the "condition" line.

Again, this is a hard drop. The sender is told to go away, rather than to try again later. The log_message is what you will see in the /var/log/exim4/rejectlog. "demime" looks for an attachment with the ".zip" extension and triggers the rule which fires the "condition".

The "condition" runs a shell script that must return either 0 or 1. "0" means no problem while "1" triggers the "deny".

The script itself you can download from my git location here. Verify that logger, grep, unzip and ls are installed and that the path to them has been correctly set in the script according to your system. You can easily add or remove file extensions in that script.

You might notice that I drop when a zip file is seen inside a zip file. That's an overly cautious approach, but I see no genuine reason for it to occur. It certainly could occur, if someone zips you up a pile of files that happens to include a zip archive. You might want to remove that check from the script, or even extract and scan the inner zip; it's easily done and it's your call.


I recommend initially tailing the /var/log/exim4/rejectlog file and watching for messages being dropped. Look at the "rejected after DATA" information; this is where the log_message output is appended.

Check daily which messages are being dropped. I recommend a grep such as:
# grep -E 'DATA|X-Spam-Score|Subject:|To:' /var/log/exim4/rejectlog

If something goes wrong in your shell script, it will exit with a non-zero value and trigger the Exim ACL deny. Be careful to check that the script runs cleanly.

Temporarily uncomment the "#$cmd_logger" messages and watch your syslog (normally /var/log/syslog) for the pass and fail messages. The logger messages give you more detail on what and why something was denied. If you have systemd, then the "service exim4 status" command will give you a recent history of messages that Exim wrote to syslog.


To date I've never received Malware delivered in ".tar" or ".tgz" files. When that day comes, I am going to write new deny rules to inspect the contents just as I do with ".zip". I'll probably blog about that here!

Wednesday, July 13, 2016

Making a Mailserver (Part 6) - Server Migration & Backup

This is an instalment in my series on setting up a Linux based mailserver. See these posts:

In this post, I cover the steps you must plan for when migrating your mail services from one server to another. Plan now for a migration before everything is a distant memory. I won't get into deep technical detail here because the technical detail is covered in the previous posts.

Careful Planning Eases the Transition
While first setting up your mailserver, make copious notes for speedier disaster recovery. After setting up a server of any kind, I like to dump the history command to file and save that off-box.

Something to ponder: could you recreate your server from the backups alone?


If your VPS provider isn't taking care of your backups (this often comes at an additional cost), then you really should back up frequently. If you're a home-brew, error-prone control-freak like me, you'll want to custom-script the backup process, because the job is otherwise tedious and you'll never get around to it.

If you've followed my posts, then you need to backup the following directories completely:
  • /etc/
  • /home/
  • /var/www/
  • /usr/lib/cgi-bin/
  • /var/mail/
The active inbox files are in /var/mail, while each user's mail folders are stored under their /home path. Backing those up can take some time.

Leave no configuration file behind: grab all of /etc, which doesn't take a lot of space. The /var/www and /usr/lib/cgi-bin/ dirs are just in case you were messing around with custom web pages despite my constant pleas that you don't.

You might also want to collect a list of all installed packages.
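On a Debian-family system (an assumption; adjust for your distro), capturing and replaying that package list might look like:

```shell
# Save the package selections on the old server
dpkg --get-selections > installed-packages.txt
# Replay them on the new server (run as root), e.g.:
#   dpkg --set-selections < installed-packages.txt
#   apt-get dselect-upgrade
```

Keep the file with the rest of your backups.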

Over time I developed a script to take care of this and scp the backups to a remote server. You can easily adapt it for your needs. Grab my script from if you like.

Warning! If you do use my script, be aware that using ssh-copy-id is unwise unless you secured your private key with a password when creating it (i.e. with ssh-keygen). The reason is that if you are using a cloud-based VPS, the administrators at that VPS provider can clone your server and access any file they please. If your VPS is installed on an encrypted volume, that's great, but a hacker who compromises your running VPS will also have access to your keyfiles.

Hence I strongly recommend you do one of two things:
  1. Put a password on your private ssh key when generating it.
  2. Run the script using a one-off key:
    • Create a passwordless one-off key to copy to the remote server.
    • Run the script.
    • Delete the local keys (in ~/.ssh/) and the public key from the remote server (in ~/.ssh/authorized_keys).
I've been considering scripting option 2.
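Option 2 might be scripted something like this (a hedged sketch: the remote host, user, and backup-script invocation are placeholders, and the remote-side steps are shown commented):

```shell
# Create a passwordless one-off key just for this backup run
ssh-keygen -t ed25519 -N '' -f ./onetime_key
# Push it to the remote server and run the backup with it, e.g.:
#   ssh-copy-id -i ./onetime_key.pub backup@remote-server
#   scp -i ./onetime_key backups.tar.gz backup@remote-server:backups/
# Remove it from the remote authorized_keys, e.g.:
#   ssh -i ./onetime_key backup@remote-server \
#       "grep -vF \"$(cat ./onetime_key.pub)\" ~/.ssh/authorized_keys \
#        > ~/.ssh/ak && mv ~/.ssh/ak ~/.ssh/authorized_keys"
# Finally, destroy the local copy
rm -f ./onetime_key ./onetime_key.pub
```

The key only exists for the duration of the run, so there is nothing long-lived to steal.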

Why Would I Need to Migrate?

In my experience, migrations are done only because your hand has been forced and the decision is out of your control. To name a few obvious reasons:
  1. You lost your nerve and decided to migrate the data from the clapped-out 486 in your shed to a VPS in the cloud.
  2. You found a better service for half the price.
  3. Your existing VPS provider declared that their system upgrade requires you to migrate to a new VPS. One VPS provider pulled this trick on me twice in two years and it's very annoying.
  4. Your Operating System went End of Software Support. Running a distribution upgrade is nerve-wracking to the point that I prefer to rebuild from scratch from a clean install.
  5. You prefer to operate as the root user and ran "rm -rf /".
  6. Your system has been compromised.

Migration Steps

Server migration involves these steps, in this order:
  1. Prepare the new server by installing all the packages required by your old server. 
  2. Configure the new server. You'll be recycling the config files from the old server to do that. You can recycle the FQDN and keep the same hostname, exercising some caution to not get the two confused.
  3. Make sure that the users from your old server's /etc/passwd & /etc/group are inserted into the new server using the same uid and gid, otherwise /home and /var/mail/ are going to be a mess. Don't forget to check /etc/aliases.
  4. Ensure the new server doesn't have the mail daemon (postfix/sendmail/exim) started. 
  5. Backup the old server.
  6. Take the old server mail daemon offline (turn off postfix/sendmail/exim).
  7. Update the MX record to point to the new server, but keep the new server's mail daemon offline.
  8. Copy all the mailboxes and user directories over to the new server. It's easier to tar and compress these (preserving file ownerships and permissions) and then copy, than to scp the files in place. You need to copy many things as root while still preserving the file ownerships and permissions.
  9. Start the mail daemon on the new server. 
  10. Fix all the problems.  
  11. Once you're really sure that the new server is working as expected, shut down the old one. Goto 10.
Here's why you want to do those steps in order:
  • You don't want to have two mailservers running at the same time.
  • The DNS changes take hours to propagate, so as long as your old mailserver is offline, you might as well do the DNS change straight away.
  • You make the copy of mailboxes and user directories with the knowledge that they cannot change and the version you are copying is the most current.
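Step 8's tar-and-copy can be sketched like this, demonstrated on scratch directories so it can be run anywhere (the real paths are /var/mail and /home, and "newserver" is a placeholder for your new VPS):

```shell
# Scratch directories standing in for the real mail data
mkdir -p demo/var/mail demo/home/alice
echo "mbox data" > demo/var/mail/alice
# -p preserves permissions; when root extracts, ownerships are kept too
tar -czpf mailhome.tar.gz -C demo var/mail home
# Copy it across, e.g.:
#   scp mailhome.tar.gz root@newserver:/root/
# On the new server, extract as root so uid/gid survive:
mkdir -p restored
tar -xzpf mailhome.tar.gz -C restored
```

Extracting as root matters: tar only restores file ownership when run by root, which is why a plain scp of the files is the wrong tool here.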
It's okay for your mail domain to go dark for a few hours. The Internet doesn't really mind. Most mail hosts make a few attempts over several hours before giving you up for dead. Interestingly, I've noticed that Google / Gmail is very quick to notice your MX server change (less than 30 minutes).

Data Security in the Cloud

As I mentioned earlier, unless you are running from an encrypted volume, your data is open to the VPS company's admins. Either you encrypt your volume or you have no offline data security. You need to think about what will happen to your data when you retire your old server instance.

I know of no elegant solution to the problem of protecting the data on an abandoned VPS. The data security policy of a VPS company should inform your choice of provider.

If you're on a cloud VPS, delete all of the sensitive directories on the old server; likely the same directories that you would backup. It seems possible to me that your VPS provider could still recover and steal the data. There's also a chance that once the VPS provider assigns the disk area to another customer then that customer may be able to recover files.

You could delete the sensitive data and then fill the disk to capacity with junk; redirecting the output of "yes" to a file would be a quick way to do it. An ultra-paranoid approach would be to do that several times.

Wednesday, July 6, 2016

Making a Mailserver (Part 5) - Wonderful Spam

This is an instalment in my series on setting up a Linux based mailserver. See these posts:

In this post we set up Spamassassin with exim4 to stomp on as much spam as possible. We won't give anything that looks like spam a chance to be delivered; we'll dump such messages before they even complete the delivery process.

And yet it spams

Why Spamassassin?

There are alternatives, a good friend of mine recently recommended dspam to me, so that's on my list to investigate. Spamassassin doesn't catch everything, mostly because some spammers are pretty damn good at their job. It does not do a great job of spotting messages delivering Malware and the tediously regular emails from bearing manufacturers.

If there were such a thing as a perfect solution, it wouldn't come from implementing just one technology. Nothing's perfect. I chose Spamassassin because it is mature, well understood, backed by Apache and easy to set up.

This is Kinda Interesting

For a good number of years I went without receiving a lot of spam. I didn't intentionally publish my email address on the web and many websites take care to obscure your email address, for example, when publishing mailing list archives.

I was intrigued by Steve Gibson's assertion in Security Now #557 that it takes multiple years before your email address really gets on Spammers' radar. He changes his email address once a year:
And something as simple as changing your email address loses spam. That is, it's just gone. And you might think that, oh, it's going to find you again within a week or two. No. It takes, I can attest to this, years, multiple years.
I'd naively held the belief that spammers found your domain name and then worked through a list of common names to mail to (michael@, john@, chris@, etc). Perhaps they do, but it seems that scraping websites and database dumps is the more common and less time-wasting way of building a list of recipients.

In 2015 the level of spam quickly started to get out of control for me. Malware especially was really flooding in and even though I am largely Linux and Android focussed, constantly deleting spam made checking my email a tedious, rather than fun, task.

Plus one for Thunderbird, however. I took the time to train Thunderbird's junk mail handling and it is really good. But since I check email on my phone most of the time, Thunderbird's junk handling wasn't going to help unless I always had it running in the background on some workstation, somewhere.

The spam was pouring in and since I was redirecting (aliasing via /etc/aliases) some mailboxes to Gmail accounts, I was ending up with a large queue of frozen messages because Gmail was not happy to handle redirected Spam. If you want a Spam free mailbox, there's arguably nothing better than Gmail. I was worried about the potential damage I was doing to my mail server's reputation with Gmail by reflecting Spam straight to Gmail.

Enter Spamassassin.

Exim and Spamassassin Integration

I recommend you first review the Debian exim wiki, although I found it to be wrong in places. There was also some Exim documentation about acl actions that informed my config. I'll be providing a multi mail-domain example because I handle mail for multiple domains; the following example will work for one mail domain or many.

I use exim4 split config files because ... that's what everyone else does.

In brief, the debian exim wiki tells you to do the following:
#apt-get install spamassassin
If you are using Debian Jessie or later (with systemd enabled by default), enable and start the service using systemctl;
#systemctl enable spamassassin.service
On earlier Debian releases, edit /etc/default/spamassassin ...
...and then start the daemon.
#/etc/init.d/spamassassin start
At this point I found divergences between what the documentation tells you to do and what works in reality. The "add_header" did not work for me, following the wiki instructions. Here's how I set it up:

# warn
#   spam = Debian-exim:true
#   message = X-Spam_score: $spam_score\n\
#             X-Spam_score_int: $spam_score_int\n\
#             X-Spam_bar: $spam_bar\n\
#             X-Spam_report: $spam_report
# put headers in all messages (no matter if spam or not)
  warn  spam = Debian-exim:true
      add_header = X-Spam-Score: $spam_score ($spam_bar)
# add second subject line with *SPAM* marker when message
# is over threshold
  drop  spam = Debian-exim
#      add_header = Subject: ***SPAM (score:$spam_score)*** $h_Subject:
Important Points:
  • The debian docs "Subject" manipulation simply did not work for me. Refer to the "Rewriting Subject" section further below.
  • The debian docs used "nobody" as the user, I changed this to Debian-exim. Using "nobody" gets you all kinds of painful log messages.
  • The debian docs used "add_header = X-Spam-Report: $spam_report" on all  messages, this resulted in a message in the header of every email saying that the email had been detected as spam, regardless of the score. 
  • I do still insert the X-Spam-Score in every message.
  • I'm dropping anything over the Spamassassin threshold (required_score). The incoming message will be "rejected after DATA".
You will want to tinker with the threshold for dropped messages. 8 is too high, but it's better to start high and then inspect the scores on the spam that makes it through. The bulk of spam gets very high scores, but between 4 and 5 there is a crossover between legitimate email and spam.

You will also receive spam that scores as low as 1, and it's impossible to filter at that level without losing a lot of legitimate email.

You can tinker with the required score in /etc/spamassassin/; the default at the time of writing is 5, which I think is about right.
required_score 8.0
It's important to remember that the delivery agent is going to get a hard fail when a message scores over the required_score. It probably won't come back for a second try. A slighted mailing list server, for example, may treat your address as a hard failure and cancel your subscription.

The rewrite_header in the Spamassassin config is meaningless because Exim is handling the mail and just asking Spamassassin for its opinion on the spam score. Other elements in the Spamassassin file are relevant to scoring the message.
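For completeness, here is the kind of thing that does still matter in the SpamAssassin config. The path is Debian's default location, and the values shown are examples rather than prescriptions:

```
# /etc/spamassassin/local.cf
required_score 5.0              # the threshold Exim's "spam" condition tests against
# rewrite_header Subject *SPAM* # ignored in this setup: Exim, not SpamAssassin,
                                # is writing the headers
```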

That's it! That's all you need to do.

Rewriting Subject

I didn't implement this because I elected to dump the high-scoring messages and write (for every message) the X-Spam-Score to the headers.

Reviewing the Efficacy

You really must spend days or weeks checking in with your Exim logs, in addition to reviewing the spam messages that slip through.
  • When looking at the spam that hits your inbox, take a look at the X-Spam-Score header that was written in by Spamassassin. View the message source to see the headers. 
  • Don't be confused by fake headers added by the spammer, such as fake Spam Score information.
  • There is often false information in the headers about being checked by this or that antivirus software. 
  • Message headers should be read from bottom to top. Each mail agent prepends its headers to the top of the message as the message bounces from mailserver to mailserver.
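A small demonstration of that ordering: reversing the Received: headers with tac prints the oldest hop first, i.e. the path the mail actually took. The file name and hostnames below are made up for illustration.

```shell
# Headers are prepended, so the topmost Received: is the most recent hop.
cat > /tmp/headers-demo.txt <<'EOF'
Received: from last-hop.example by your-mx.example
Received: from first-hop.example by relay.example
EOF
# tac reverses the lines: the mail's journey, oldest hop first.
grep '^Received:' /tmp/headers-demo.txt | tac
```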
Review the Exim reject log. Here's what you should see when things are working:
# tail -vf /var/log/exim4/rejectlog 
2016-07-06 18:41:41 1bKptM-0006PS-7x [] F=<> rejected after DATA
Envelope-from: <>
Envelope-to: <>
P Received: from ([])
        by with smtp (Exim 4.84_2)
        (envelope-from <>)
        id 1bKptM-0006PS-7x
        for; Wed, 06 Jul 2016 18:41:40 +0200
  Date: Wed, 06 Jul 2016 14:35:35 -0300
F From: "CamilleHot" <>
R Reply-To: "CamilleHot" <>
  X-Priority: 3 (Normal)
I Message-ID: <>
T To:
  Subject: Come here! I want to make love to you
  MIME-Version: 1.0
  Content-Type: multipart/alternative;
  X-Spam-Score: 18.3 (++++++++++++++++++)

Notice the X-Spam-Score of 18.3 - high but not off the charts. Let's review the kind of scores we've recently seen:
# grep X-Spam-Score /var/log/exim4/rejectlog
  X-Spam-Score: 7.5 (+++++++)
  X-Spam-Score: 7.2 (+++++++)
  X-Spam-Score: 11.8 (+++++++++++)
  X-Spam-Score: 8.2 (++++++++)
  X-Spam-Score: 20.0 (++++++++++++++++++++)
  X-Spam-Score: 14.2 (++++++++++++++)
  X-Spam-Score: 18.4 (++++++++++++++++++)
  X-Spam-Score: 15.4 (+++++++++++++++)
  X-Spam-Score: 5.1 (+++++)
  X-Spam-Score: 16.8 (++++++++++++++++)
  X-Spam-Score: 13.6 (+++++++++++++)
  X-Spam-Score: 6.5 (++++++)
  X-Spam-Score: 14.2 (++++++++++++++)
  X-Spam-Score: 9.5 (+++++++++)
  X-Spam-Score: 18.5 (++++++++++++++++++)
  X-Spam-Score: 18.3 (++++++++++++++++++)
  X-Spam-Score: 6.1 (++++++)
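If you would rather see those scores as a sorted list than eyeball the plus-sign bars, a short pipeline does it. The sample file below stands in for /var/log/exim4/rejectlog; point the same pipeline at the real log on your server.

```shell
# Sample lines in the same shape as the reject log entries above.
cat > /tmp/rejectlog-sample <<'EOF'
  X-Spam-Score: 7.5 (+++++++)
  X-Spam-Score: 20.0 (++++++++++++++++++++)
  X-Spam-Score: 5.1 (+++++)
EOF
# Extract the numeric score and sort lowest-first, so the borderline
# messages worth inspecting appear at the top of the output.
grep -o 'X-Spam-Score: -\?[0-9.]*' /tmp/rejectlog-sample | awk '{print $2}' | sort -n
```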
In the period where you're still finding the right score level, I recommend going back and looking at the headers on the 5.1 message: use "less" to view the raw file and search for the score string.

In fact I discovered, as I wrote this, that the 5.1 in the example above was a legitimate email about a delivery I was waiting on. Oh dear, perhaps I'll nudge the required_score up to 5.1. In my experience, 5.5 is too high.

It's easy to review the scores on the messages in your mailboxes. Scores can actually be negative; the lowest I've noticed is -11.89:

Your current inbox:
# grep X-Spam-Score /var/mail/you
Other folders:
# grep X-Spam-Score /home/you/mail/Trash

Keep looking at the Exim logs. Be curious. Learn what the headers mean. Tinker. It's fascinating stuff.