Friday, May 28, 2021

How To Change Lutris Wine Version for Windows Games

I'm a bit of an Overwatch fan at the moment, but in the last month I have been plagued by silent game crashes. Users report that downgrading to Wine 5.7-11 solves the problem, albeit with a performance loss.

There are guides on the internet that tell you how to switch versions, but the descriptions differ a little from what I see in Ubuntu 20.04. Hence I thought I'd write something slightly more up to date.

I have two Battle.Net installs since I have an Overwatch smurf account. I've blanked out personally specific info in the screenshots.

The Wine "runner" can be installed and configured via the specific Battle.Net app. Right-click the app and choose "Configure".


On the "Game Info" tab, click the "Install runners" button.

Scroll down to "Wine" and click the button with the weird spanner-person icon.

Scroll down and select the Wine version you want. In my case I wanted lutris-5.7-11. Do not unselect anything in the process.

An install process should run when you select the Wine version.

With that changed, hit "OK", then close the "Manage Runners" window, click Save on the "Configure ..." window, and finally exit Lutris itself.

Open Lutris again and right-click the app to configure it as before. I had to exit Lutris before the new Wine version was offered, as in the screenshot below.

Now the Wine version you added is available on the "Runner options" tab of the "Configure ..." window.

Hit Save and you're ready to play!
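A footnote: if you'd rather poke at config files than the GUI, the chosen version also appears to live in the game's YAML file under ~/.config/lutris/games/. A rough sketch from memory - treat the file name and keys as assumptions, and let Lutris rewrite the file itself where possible:

# ~/.config/lutris/games/battlenet-<id>.yml (name and keys approximate)
wine:
  version: lutris-5.7-11-x86_64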

Sunday, May 17, 2020

HackRF and ADS-B and COVID-19 in pictures

Given the current COVID-19 situation, I thought I'd take a look at what else was in the air. With not much to do these days, it was a good reason to pull out my ADS-B aerial and see about getting my HackRF One device to play with VirtualRadar.

Some pictures from VirtualRadar are in the second half of this post, the first half talks about setting things up.

In my blog post from 2016 I described my experience getting ADS-B going with an rtl device. If you're reading this you probably already know what ADS-B is, but if for some reason you don't, check my earlier post.


Setting Things Up


HackRF isn't supported by the dump1090 or rtl_adsb applications for Linux, so I spent quite a bit of time trying to work out how to get some "1090" software to work with VirtualRadar. I tried gr-air-modes, but it was a total disaster trying to "make" it even under Debian 10 (Buster) due to a pyqt4 dependency, and I almost gave up on the whole idea.

In the end I found a fork of dump1090 that installed and worked with little drama. The configuration of VirtualRadar is slightly different from that in my rtl post.

Here are the dump1090 fork install steps - they weren't quite as straightforward as the doc suggested.

This bit went fine:

$ sudo apt-get install librtlsdr0 librtlsdr-dev libhackrf-dev libairspy-dev libsoxr-dev
$ chmod 755 Downloads/SDRplay_RSP_API-Linux-2.13.1.run
$ ./Downloads/SDRplay_RSP_API-Linux-2.13.1.run

$ sudo ldconfig
$ cd personal/git/
$ git clone https://github.com/itemir/dump1090_sdrplus.git

This bit did not go so fine:

$ cd dump1090_sdrplus/
$ make
.../usr/bin/ld: dump1090.o: in function `readerThreadEntryPoint':
/home/xxx/personal/git/dump1090_sdrplus/dump1090.c:852: undefined reference to `rtlsdr_read_async'
/usr/bin/ld: dump1090.o: in function `main':
/home/xxx/personal/git/dump1090_sdrplus/dump1090.c:3172: undefined reference to `rtlsdr_close'
collect2: error: ld returned 1 exit status
make: *** [Makefile:12: dump1090] Error 1


This turns out to be a known issue requiring (on Debian) an edit of /usr/lib/x86_64-linux-gnu/pkgconfig/librtlsdr.pc so that it reads:

prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include


Then I could build:

$ make
cc -g -o dump1090 dump1090.o anet.o  -lrtlsdr -lhackrf -lairspy -lsoxr -lpthread -lm -lmirsdrapi-rsp


Starting dump1090, it first looked for rtlsdr and then for a HackRF:

$ ./dump1090
No supported RTLSDR devices found.
Unable to initialize RSP
HackRF successfully initialized (AMP Enable: 0, LNA Gain: 32, VGA Gain: 48).


Interactive mode fairly quickly showed promising results:


$ ./dump1090 --interactive

...

Hex    Flight   Altitude  Speed   Lat       Lon       Track  Messages Seen  .
--------------------------------------------------------------------------------
3e6bd1 DKGAJ    3900      0       0.000     0.000     0     8         0 sec




Here I tell the application that I have a HackRF and that I want it to stream the results over the network. I bind the web display to port 8081 instead of the default 8080, because VirtualRadar will otherwise fail to start when it tries to bind to port 8080 itself.

$ ./dump1090 --dev-hackrf --net --net-http-port 8081
HackRF successfully initialized (AMP Enable: 0, LNA Gain: 32, VGA Gain: 48).
*8d471f59ea3e9858011c08f4aa9d;
CRC: f4aa9d (ok)
Single bit error fixed, bit 82
DF 17: ADS-B message.
  Capability     : 5 (Level 2+3+4 (DF0,4,5,11,20,21,24,code7 - is on airborne))
  ICAO Address   : 471f59
  Extended Squitter  Type: 29
  Extended Squitter  Sub : 2
  Extended Squitter  Name: Unknown
    Unrecognized ME type: 29 subtype: 2


This dump1090 version seems to have VirtualRadar directly in mind, because the default ports it uses to stream data are what VirtualRadar expects by default.

$ netstat -pano | grep dump
tcp        0      0 0.0.0.0:8081            0.0.0.0:*               LISTEN      17413/./dump1090     off (0.00/0/0)
tcp        0      0 0.0.0.0:30001           0.0.0.0:*               LISTEN      17413/./dump1090     off (0.00/0/0)
tcp        0      0 0.0.0.0:30002           0.0.0.0:*               LISTEN      17413/./dump1090     off (0.00/0/0)
tcp        0      0 0.0.0.0:30003           0.0.0.0:*               LISTEN      17413/./dump1090     off (0.00/0/0)


VirtualRadar is configured according to the following screenshot; most of this is defaults:




Note that if you use Google as the map provider you need to set up API access, which you probably don't want to do, so use Leaflet.
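For reference, the receiver settings behind that screenshot boil down to roughly the following - the field names are from memory, so treat them as approximate:

Format:           BaseStation
Connection type:  Network
Address:          127.0.0.1
Port:             30003

Port 30003 carries dump1090's BaseStation-format output, which is what VirtualRadar reads; 30002 is the raw output and 30001 is a raw input port.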

And we're away.


There Must Be Something in the Air

Over a few hours I did see activity, but the sky wasn't exactly alive with commercial traffic. I should also add that I face west, so it's quite possible that my position captures very little of the Munich airport traffic.



The overwhelming majority of the planes I saw today were private, and obviously registered near where I live (Munich, Germany), with the curious exception of this fellow out of the US:





Tel-Aviv.



It's hard to say how many airlines were running cargo, but this Egyptian one obviously was:


Someone in Munich was having a bad day; here's a rescue helicopter:



Qatar




Saudi



China



Libya - Not sure if the flight path is accurately recorded:






Sunday, December 30, 2018

Making a Mailserver - Spam Blocking, Revisited

In an earlier post I described implementing spamassassin with exim4. The information there still holds true, but simply implementing "spamd" has not been enough to hold back the spammers who have my email address. My address was harvested in both the LinkedIn and Last.fm hacks, and in recent years the targeted spam has increased noticeably.

I started to train my desktop email client to pick out spam and it does a decent job, so I weathered the deluge for some time. However, plenty of spam still gets through and when my desktop email client is not open I have plenty of junk to pick through on my mobile devices.

Finally, I've taken the time to sharpen up my exim4 defenses.

Challenges


In the rest of this post, I'll be answering these questions:
  1. Does spamassassin support Domain Name System Blacklists (DNSBL)?
  2. How do I integrate blocklist (DNSBL) checks in exim4?
  3. How do I block hosts that are not really mailservers?
  4. How do I block on reverse DNS failures?
  5. How do I allow specific hosts to skip checking by exim4 and spamd?
  6. How do I verify these measures are working properly?
The answers to these questions are straightforward, but took quite a bit of research time and verification. Point 6 isn't separately addressed; each section that follows will talk about the ways that I verified the spam mitigations were working.

spamassassin and exim4


It turns out that spamassassin (spamd) supports DNSBLs by default. I actually discovered this after going through the process of integrating zen.spamhaus.org checking into exim4. The difference is that with exim4 integration you can reject spam outright, whereas spamd only uses the blocklist as part of the score calculation when determining how 'spammy' an email is.

That's the downside of leaving it to spamd: while the spamhaus blocklist has a very high accuracy, being listed doesn't guarantee that spamd will calculate enough points to junk the email.

It's possible to change the score applied to addresses that are on DNSBLs by adding a "score URIBL_BLACK <value>" line to the spamassassin config file. You also need to ensure that perl's Net::DNS is installed; to check, run "perl -MNet::DNS -e 1" and the command should execute with no errors.
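For the record, a minimal sketch of that, assuming Debian's /etc/spamassassin/local.cf and an illustrative score value:

# /etc/spamassassin/local.cf
# weight a URIBL_BLACK blocklist hit more heavily (4.0 is my arbitrary choice)
score URIBL_BLACK 4.0

$ sudo systemctl restart spamassassin    # service name on Debian; yours may differ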

One unanswered question I have is whether the perl module takes care of the DNS lookup and which server to use, or whether your server needs a spamhaus-friendly DNS server in /etc/resolv.conf - see the spamassassin DNSBL discussion below.

Verify spamassassin is using DNSBL


To verify whether DNSBL is being used by spamassassin, check the log for the presence of URIBL_BLACK. Depending on the system, this could be the syslog logfile rather than the exim4 logs:

Dec 29 16:22:02 mail spamd[2739]: spamd: result: Y 12 - AXB_XMAILER_MIMEOLE_OL_024C2,BAYES_00,FORGED_MUA_OUTLOOK,FORGED_OUTLOOK_HTML,FORGED_OUTLOOK_TAGS,FREEMAIL_FROM,FROM_MISSP_EH_MATCH,FROM_MISSP_FREEMAIL,FROM_MISSP_MSFT,FROM_MISSP_REPLYTO,FROM_MISSP_XPRIO,FSL_CTYPE_WIN1251,FSL_NEW_HELO_USER,HTML_MESSAGE,LOTS_OF_MONEY,MIME_HTML_ONLY,MISSING_HEADERS,MISSING_MID,MONEY_FROM_MISSP,NSL_RCVD_HELO_USER,RCVD_IN_DNSWL_NONE,RCVD_IN_SORBS_WEB,REPLYTO_WITHOUT_TO_CC,SPF_SOFTFAIL,TO_NO_BRKTS_FROM_MSSP,TO_NO_BRKTS_MSFT,T_COMPENSATION,URIBL_BLACK scantime=1.3,size=4689,user=Debian-exim,uid=104,required_score=5.0,rhost=127.0.0.1,raddr=127.0.0.1,rport=41221,mid=(unknown),bayes=0.000005,autolearn=no autolearn_force=no 

This means that this particular message was found on a blocklist. Of course, you won't see that on every email that is checked.

Integrate the spamhaus blocklist into exim4


The instructions I'll provide here are not specific to spamhaus; it's just the service I decided to try. It's free up to a point: someone like me, with a relatively low volume of email, can use the service unimpeded. There is a performance hit on your own server for the DNS lookup, though the lookup will only slow delivery of legitimate email by milliseconds.

It's supposedly possible to download the blocklist and do local lookups against it, but setting that up is more complex and requires frequent downloads of large lists - a lot of work for small reward if you are not handling much email.

My exim4 config is broken into separate config elements, which is a fairly normal thing to do, but you may find the files to place this config will differ depending on your system.

Enable "DNSBLS" as exim4 refers to it, in your custom macro file (/etc/exim4/conf.d/main/00-custom_macros for example):

CHECK_RCPT_IP_DNSBLS = zen.spamhaus.org

Configure the deny option or leave it at a warning level (/etc/exim4/conf.d/acl/30_exim4-config_check_rcpt).

# Check against classic DNS "black" lists (DNSBLs) which list
# sender IP addresses
.ifdef CHECK_RCPT_IP_DNSBLS
#warn
# message = X-Warning: $sender_host_address is listed at $dnslist_domain ($dnslist_value: $dnslist_text)
deny
  message = Failed sender validation
  log_message = michael DENY - $sender_host_address is listed at $dnslist_domain ($dnslist_value: $dnslist_text)
  dnslists = CHECK_RCPT_IP_DNSBLS
.endif
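Remember that with Debian's split configuration, edits under conf.d only take effect after regenerating the runtime configuration and restarting exim4:

# update-exim4.conf
# systemctl restart exim4

The same applies to every conf.d change later in this post.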

Notice above that I've commented out the default "warn" and "message". I've also added a custom log_message so the "michael DENY" sticks out. I know when I see that log that it's taking action on something I did. Also I dumbed down the message to not be too helpful to spammers, not that I think they're reading the SMTP rejection reasons!

When I initially implemented this I never saw the rule being triggered, because the spamhaus lookup always failed to return any record. If your mailserver is configured to do lookups via the major DNS servers like 8.8.8.8, 1.1.1.1 or 9.9.9.9, the spamhaus lookups don't work. I'm not going to go into why they don't work here; suffice to say that the major DNS providers don't want to know about these queries.

Unfortunately, that means finding a DNS server that will help you with your inquiries, or running a DNS server on the local box (or in your local network). If you refer back to my previous post on setting up a nameserver, then you can simply add the following snippets to the existing setup (/etc/bind/named.conf.options):


acl "trusted" {
        localhost;
        <other trusted IPs>;
};
options {
        ...

        allow-query { any; };
        allow-recursion { trusted; };
        allow-query-cache { trusted; };
        ...
}


I sincerely encourage you to check the bind documentation yourself; don't go adding random config from the internet to highly sensitive services without understanding what each setting means. Briefly, the acl restricts recursive DNS queries (for domains other than the local zones) to a list of permitted hosts, including localhost, so that exim4 and other local services can use this server to resolve IPs.

Naturally, you need to ensure that /etc/resolv.conf has "nameserver 127.0.0.1" or another DNSBL friendly server configured.
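A quick way to verify the whole DNS path is spamhaus' documented test entry, 127.0.0.2, queried in the usual reversed-octet DNSBL form:

$ dig +short 2.0.0.127.zen.spamhaus.org
127.0.0.2
127.0.0.4
127.0.0.10

The exact set of 127.0.0.x records returned may differ, but an empty answer means your resolver is being refused and the exim4 rule will silently never fire.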

Back to the exim config. Without customising, i.e. using the default settings, you'll get a new header on the email message. The email won't be dropped by this rule, but you will see an X-Warning header in the email (or in the exim4 rejectlog if the message is dropped elsewhere). Note it below:

Envelope-from: <ass3@binkmail.com>
Envelope-to: <michael@moff.tech>
P Received: from 79-103-16-190.fibertel.com.ar ([190.16.103.79])
by mail.moff.tech with esmtp (Exim 4.84_2)
(envelope-from <ass3@binkmail.com>)
id 1gdFOD-00008q-90
for michael@moff.tech; Sat, 29 Dec 2018 15:14:57 +0100
I Message-ID: <4EAF76134D97CA28C910752BF1AC4EAF@KSP94W150>
F From: "michael@moff.tech" <ass3@binkmail.com>
T To: <michael@moff.tech>
Subject: ***wonderful spam***
Date: 29 Dec 2018 07:01:51 -0400
MIME-Version: 1.0
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2900.5512
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5512
X-Warning: 190.16.103.79 is listed at zen.spamhaus.org (127.0.0.11, 127.0.0.4: https://www.spamhaus.org/query/ip/190.16.103.79)
X-Spam-Score: 20.0 (++++++++++++++++++++)

You'll likely see the same warning in the exim4 mainlog:

2018-12-29 15:14:57 H=79-103-16-190.fibertel.com.ar [190.16.103.79] Warning: 190.16.103.79 is listed at zen.spamhaus.org (127.0.0.11, 127.0.0.4: https://www.spamhaus.org/query/ip/190.16.103.79)

Once you see this log or the custom deny log I suggested, you know that the lookup is working.
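You can also provoke the check without waiting for real spam. exim has a host-testing mode that fakes an inbound SMTP session from an IP address of your choice and prints the ACL evaluation as it goes - type the SMTP dialogue (HELO, MAIL FROM, RCPT TO) by hand and watch whether the dnslists condition fires. For example, using the listed host from the log above:

# exim4 -bh 190.16.103.79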

In the end, I ditched doing the spamhaus lookup when I realised that spamassassin was also doing it. I commented out the "CHECK_RCPT_IP_DNSBLS = zen.spamhaus.org" entry and left the 30_exim4-config_check_rcpt config just in case I wanted to switch it on again.

Blocking mailservers that aren't


Mail delivery is entirely dependent on DNS records. A mailserver without forward and reverse DNS entries is of doubtful reputation. It's a safe bet that any host sending you email that doesn't have complete DNS records is not a legitimate mailserver and should be ignored.

My default exim4 install does not hold mailservers or sender addresses to strict standards. That makes sense as a default: organisations that handle large volumes of email for large numbers of users will receive legitimate email from badly configured mail clients and mailservers.

I can say with a high degree of certainty that I should not be receiving email from weird hosts, or from senders whose domains don't exist or don't accept email. If I discard email from such servers and senders, I might have a handful of people over some number of years who have a problem emailing me. The upsides of configuring my mailserver to be strict on these points outweigh the potential downsides.

There are three useful options. In the custom macros file (/etc/exim4/conf.d/main/00-custom_macros) you may elect to enable the following ACLs.
# Denied in /etc/exim4/conf.d/acl/30_exim4-config_check_rcpt
CHECK_RCPT_VERIFY_SENDER = yes
#
# Denied in /etc/exim4/conf.d/acl/40_exim4-config_check_data
CHECK_DATA_VERIFY_HEADER_SENDER = yes
#
# Denied in /etc/exim4/conf.d/acl/30_exim4-config_check_rcpt
CHECK_RCPT_REVERSE_DNS = yes

In the configuration files you'll see that two of the methods already default to deny once the option is enabled (as above).

There's a helpful overview of many exim4 ACL options here and here. My descriptions below paraphrase them.

CHECK_RCPT_VERIFY_SENDER verifies that the envelope sender of the message (MAIL FROM) resolves to a routable address. This is disabled by default, but when enabled it denies by default. Note that I added a custom log_message:

/etc/exim4/conf.d/acl/30_exim4-config_check_rcpt

# Deny unless the sender address can be verified.
#
# This is disabled by default so that DNSless systems don't break. If
# your system can do DNS lookups without delay or cost, you might want
# to enable this feature.
#
# This feature does not work in smarthost and satellite setups as
# with these setups all domains pass verification. See spec.txt chapter
# 39.31 with the added information that a smarthost/satellite setup
# routes all non-local e-mail to the smarthost.
.ifdef CHECK_RCPT_VERIFY_SENDER
deny
  message = Sender verification failed
  log_message = michael DENY - Sender verification failed
  !acl = acl_local_deny_exceptions
  !verify = sender
.endif

To date I have not seen this logged, so I can't verify that it's doing anything. It is possible that one of the other ACLs denies the email first.

CHECK_RCPT_REVERSE_DNS is the ACL that actually checks whether the mailserver has a reverse DNS entry.

/etc/exim4/conf.d/acl/30_exim4-config_check_rcpt

# Warn if the sender host does not have valid reverse DNS.
#
# If your system can do DNS lookups without delay or cost, you might want
# to enable this.
# If sender_host_address is defined, it's a remote call. If
# sender_host_name is not defined, then reverse lookup failed. Use
# this instead of !verify = reverse_host_lookup to catch deferrals
# as well as outright failures.
.ifdef CHECK_RCPT_REVERSE_DNS
#warn
# message = X-Host-Lookup-Failed: Reverse DNS lookup failed for $sender_host_address (${if eq{$host_lookup_failed}{1}{failed}{deferred}})
deny
  message = Sender validation failure
  log_message = michael DENY - Reverse DNS check failed
  condition = ${if and{{def:sender_host_address}{!def:sender_host_name}}{yes}{no}}
.endif

Reverse-check denials will now appear in the log like so:

2018-12-30 07:17:37 H=([182.177.52.180]) [182.177.52.180] F=<ezambrano@maecabogados.com> rejected RCPT <michael@moff.tech>: michael DENY - Reverse DNS check failed


CHECK_DATA_VERIFY_HEADER_SENDER verifies that the sender is valid in at least one of the "Sender:", "Reply-To:", or "From:" header lines.

/etc/exim4/conf.d/acl/40_exim4-config_check_data


# require that there is a verifiable sender address in at least
# one of the "Sender:", "Reply-To:", or "From:" header lines.
.ifdef CHECK_DATA_VERIFY_HEADER_SENDER
deny
  message = No verifiable sender address in message headers
  log_message = michael DENY - No verifiable sender address in message headers
  !acl = acl_local_deny_exceptions
  !verify = header_sender
.endif


This condition is rarely seen in the logs, but will look like so:

2018-12-30 09:00:31 1gdW1P-0003gC-Fx H=somelinuxhost.net (gentoo.somelinuxhost.net) [x.x.x.x] X=TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256 F=<noreply@somelinuxhost.net> rejected after DATA: michael DENY - No verifiable sender address in message headers: syntax error in 'From:' header when scanning for sender: malformed address: <noreply@somelinuxhost.net> may not follow noreply@somelinuxhost.net  in "noreply@somelinuxhost.net <noreply@somelinuxhost.net>"


After a day of monitoring logs, I found that the only instance was against a message that I wanted to receive: a friend was sending me emails directly from a host that sends an automated daily digest. I decided to whitelist the domain:

# cat /etc/exim4/sender_local_deny_exceptions
somelinuxhost.net

Skip checking friendly hosts


My mailserver relays email for a couple of other servers I have on the internet. These host websites with contact forms that can change the "From" header to use the email address of the person who filled out the form.

In this case, various validation checks will fail:

2018-12-29 17:51:59 no IP address found for host ip-x-x-x-x.eu-west-1.compute.internal (during SMTP connection from ec2-x-x-x-x.eu-west-1.compute.amazonaws.com (ip-x-x-x-x.eu-west-1.compute.internal) [x.x.x.x])
2018-12-29 17:51:59 H=ec2-x-x-x-x.eu-west-1.compute.amazonaws.com (ip-x-x-x-x.eu-west-1.compute.internal) [x.x.x.x] sender verify fail for <www-data@ip-x.x.x.x.eu-west-1.compute.internal>: Unrouteable address
2018-12-29 17:51:59 H=ec2-x-x-x-x.eu-west-1.compute.amazonaws.com (ip-x-x-x-x.eu-west-1.compute.internal) [x.x.x.x] X=TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128 F=<www-data@ip-x-x-x-x.eu-west-1.compute.internal> rejected RCPT <info-today@moff.tech>: michael-today UNKNOWN - Sender verification failed: Sender verify failed

You also need to give local server applications the ability to send email through exim4. For example, if you had a python script that generated an email at the end of the script, you might see something like this in mainlog:

2018-12-30 23:44:56 1gdjpI-0005VU-Dg H=localhost (mail.moff.tech) [::1] F=<michael@moff.tech> rejected after DATA: michael DENY - No verifiable sender address in message headers: there is no valid sender in any header line

To add trusted hosts, including localhost, the solution is simple:

# cat /etc/exim4/host_local_deny_exceptions
127.0.0.1
x.x.x.x
y.y.y.y

In Conclusion


Don't waste your time integrating DNSBL checks into exim4 unless you specifically don't want spamassassin doing the check for you.

So far so good. After a day of monitoring, not one piece of spam slipped through. I did discover that one email I wanted was denied, but only because the sender was firing email out from a machine not correctly set up to be a mailserver (and I don't believe the sender cares enough to fix that).

This result is stunning considering that I expected to reduce spam, not totally eliminate it - yet elimination appears to be the case after just 24 hours. One should expect to lose the odd email, but careful review of the exim4 mainlog and rejectlog will help identify and whitelist desired 'special case' email senders.

Wednesday, January 17, 2018

How to get the SSL/TLS certificate chain right

After installing Firefox and Chrome on a new PC, I noticed that I was getting "issuer unknown" (Firefox: SEC_ERROR_UNKNOWN_ISSUER) errors on a website that I was checking connectivity against. The website was one that I was recently put in charge of and the organisation had a paid-for COMODO certificate for a Wordpress install.

On other computers there were no such errors reported by Firefox or Chrome for this website, so I initially missed the significance of the problem.

Background


Note: "Certificate" should be read as "public certificate". The private cert or key is not discussed in this document.
 
Websites that support HTTPS require a valid SSL/TLS certificate or the client will receive certificate warnings from the application or browser. Happily, your community organisation or personal website can get by with a free certificate courtesy of Let's Encrypt. At the same time, the website needs to provide a certificate chain, which essentially informs the client (your browser) about the identity of the host that signed your certificate.

Likewise, the certificate of the host that signed the previous host's certificate needs to be provided. This recursive provision of each "intermediary" signer's certificate continues until the root certificate authority (CA) is reached. The root CA certificate does not need to be provided, because it should already be explicitly trusted by the client software or browser; otherwise the whole infrastructure of trust is useless. Browsers and other software ship with root CA certificates, and you can manually add them if necessary.

Any intermediary CA needs to be included in the certificate chain, but the root CA should not be included.

Incomplete, Contains anchor


I turned to Qualys SSL Labs to see whether I could obtain a head start on the problem. I saw warnings, which told me where to look but didn't help in identifying exactly what was wrong.

"This server's certificate chain is incomplete. Grade capped to B."

And later in the report:

"Chain issues - Incomplete, Contains anchor"

 I had somewhere to investigate at least - the certificate chain.

Also worth noting: curl complained of certificate problems on my desktop even though Firefox and Chrome did not.

Examining the Certificate Chain


Note: example.com is a placeholder for the real URL I was investigating.
 
openssl is the obvious tool to turn to for seeing the nitty-gritty of an SSL/TLS session. I was able to easily view the certificate chain:

$ openssl s_client -connect www.example.com:443
...
Certificate chain
 0 s:/OU=Domain Control Validated/OU=Hosted by webgo GmbH/OU=PositiveSSL/CN=www.example.com
   i:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA

 1 s:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
   i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root

 2 s:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
   i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root

To the untrained eye, the above output looks incomprehensible, but with a little understanding and research the problem can be clearly seen. The chain consists of three certificates (0, 1, 2) presented by the website. Each certificate has a (s)ubject that the certificate belongs to and an (i)ssuer that signed the certificate for that subject.

The first certificate (0) is for www.example.com and it was issued by COMODO RSA Domain Validation Secure Server CA

The second certificate (1) is expected to belong to the COMODO issuer named in the first certificate, but it does not. It belongs to some other entity, AddTrust External CA Root, which is oddly also its own issuer (a so-called "self-signed" certificate). Things are broken to bits from this point.

The third certificate (2) is completely superfluous, because the second cert in this chain should not be there at all. This third cert belongs to COMODO RSA Certification Authority and has been signed by the issuer AddTrust External CA Root.

Fixing the Mess


The solution was to provide a correct certificate chain. The first certificate (our certificate) was valid, but since it was signed by "COMODO RSA Domain Validation Secure Server CA", that needs to be the next public certificate found in the chain.

I first checked whether this was part of a typical Firefox CA set. The shipped CA certs can be viewed either on the Mozilla website or via the options->preferences of Firefox itself:



Yeah that's in German, sorry, but the English version will look basically the same.

In this case, notice that in the above screenshot "COMODO RSA Domain Validation Secure Server CA" is actually in my Firefox certificate list. I realised afterwards that at some point I had clicked through the invalid certificate warnings and added the certificate to my Firefox certificate store, to be trusted next time. That's why I only noticed the problem after setting up a new PC with Firefox and Chrome.

Just to repeat myself: the "COMODO RSA Domain Validation Secure Server CA" certificate is not part of the default suite of certificates trusted by Firefox. I needed to download the public certificate from COMODO here and tell apache2 (the site's web server) to use that certificate, and only that certificate, as the certificate chain.

Briefly, this meant configuring these apache2 settings ...

SSLCertificateKeyFile /etc/apache2/ssl.key/www.example.com.key
SSLCertificateFile /etc/apache2/ssl.crt/www.example.com.crt
SSLCertificateChainFile /etc/apache2/ssl.ca/www.example.com.ca

... ensuring that SSLCertificateKeyFile contained only the private key of the server, SSLCertificateFile contained only the public certificate associated with that private key, and SSLCertificateChainFile contained only the public certificate for "COMODO RSA Domain Validation Secure Server CA". If you have multiple layers of signing, you need to add each intermediary CA certificate to this file, in the correct order.
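Before reloading apache2 you can sanity-check the chain the same way a client will. A sketch using the file names above - openssl verify takes the intermediaries via -untrusted and pulls the root from the system CA store:

$ openssl verify -untrusted /etc/apache2/ssl.ca/www.example.com.ca /etc/apache2/ssl.crt/www.example.com.crt
/etc/apache2/ssl.crt/www.example.com.crt: OK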

This works because the certificate for the intermediary "COMODO RSA Domain Validation Secure Server CA" is issued by "COMODO RSA Certification Authority" and that root CA is part of the shipped set of certificates for Firefox, Chrome, curl, openssl and any other SSL/TLS client you care to name.

The certificate chain now looks like this (see the earlier openssl command syntax):
Certificate chain

 0 s:/OU=Domain Control Validated/OU=Hosted by webgo GmbH/OU=PositiveSSL/CN=www.example.com
   i:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA

 
 1 s:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA
   i:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority

In the first certificate, the (i)ssuer name is the (s)ubject name of the second certificate; the second validates the first. There is no need for a third certificate, because the issuer of the second certificate is part of an established set of known root Certificate Authorities.

Thursday, August 3, 2017

How the IPv6 link-local address is determined

The IPv6 (ip6) link-local address is only significant on the link on which it exists. In other words, in an ethernet world, the link-local address only has meaning within a VLAN and is not routeable. In principle the address could be routed, but rfc3513 forbids it:
Routers must not forward any packets with link-local source or destination addresses to other links.
In this post I'm only going to discuss the link-local unicast address, prefixed by FE80::/10. When I need clarity on link-local addresses I end up on Wikipedia and then at rfc3513. Hopefully you can get a high-level grasp from this post; for deeper detail, consult those resources.

What's The Point?

Link-local addresses are integral to the ip6 Neighbor Discovery Protocol (NDP), which allows local services and routers to be detected with zero input from the user.

The local machine can build up a table of services and routers on each link. On a linux machine you can quickly view what's out there:
$ ip -6 neighbor show
fe80::250:56ff:fe86:1234 dev eth1 lladdr 00:50:56:86:12:34 STALE
fc01:e5:1102::111:21 dev vlan12  FAILED
But getting into NDP and what this table tells you is not something I have time to do here.

Why Is There A Link-Local Address On My Interface?

Operating systems that have an ip6 stack enabled will assign themselves a link-local address, ordinarily even when DHCPv6 exists on the network. The mechanism itself is known as stateless address autoconfiguration (SLAAC).

When determining an address to use, a linux system appears to conform to rfc2464, although rfc4862 appears to relax the precise definition:
The OUI of the Ethernet address (the first three octets) becomes the
company_id of the EUI-64 (the first three octets).  The fourth and
fifth octets of the EUI are set to the fixed value FFFE hexadecimal.
The last three octets of the Ethernet address become the last three
octets of the EUI-64. -- rfc2464 p. 3
The "Universal/Local" (U/L) bit of the original MAC address is also flipped.

For example:
vnet0     Link encap:Ethernet  HWaddr fe:54:00:d7:40:40
          inet6 addr: fe80::fc54:ff:fed7:4040/64 Scope:Link
The MAC address is fe:54:00:d7:40:40. First, ff:fe is inserted between the third and fourth octets to give fe:54:00:ff:fe:d7:40:40, and then the locally-administered bit of the first octet is flipped (0xfe XOR 0x02 = 0xfc). The result is fc:54:00:ff:fe:d7:40:40, and the full address with the link-local network prepended is
fe80::fc54:ff:fed7:4040/64.


Two things to clarify:
  1. If the locally-administered bit is set, then it is unset and vice versa.
  2. In ip6 addresses :: (double colon) is shorthand for a run of zeros. The full address is actually fe80:0000:0000:0000:fc54:00ff:fed7:4040/64.
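To make the transformation concrete, here's a small bash sketch of the rfc2464 arithmetic - purely illustrative:

$ mac=fe:54:00:d7:40:40
$ IFS=: read -r a b c d e f <<< "$mac"
$ printf 'fe80::%x:%x:%x:%x/64\n' \
    $(( ((0x$a ^ 0x02) << 8) | 0x$b )) \
    $(( (0x$c << 8) | 0xff )) \
    $(( 0xfe00 | 0x$d )) \
    $(( (0x$e << 8) | 0x$f ))
fe80::fc54:ff:fed7:4040/64

The XOR with 0x02 flips the U/L bit, and the middle two expressions splice in the fixed ff:fe octets.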
Windows is quite different. According to this Microsoft technet post, which references this other technet post, the address has been assigned randomly since Vista. It's not referenced in the technet post, but rfc4941 describes two randomised identifier processes.

Was CVE-2016-1409 A Link-Local Bungle?

I have a suspicion (read "guess") that CVE-2016-1409 was a mixture of two failures in violation of the RFCs. Namely that:
  1. Cisco and other vendors would route Neighbor Advertisements across hops.
  2. In-path routers did not decrement the hop-limit when (1) happened, or the final recipient accepted NA messages even when the hop-limit did not equal 255.

Monday, June 26, 2017

Switching Your Drupal to HTTPS

This post describes the steps I took to switch my Drupal site over to HTTPS. I'd made several attempts at this after getting a certificate via Let's Encrypt, but ran into problems with mixed content, meaning that Firefox would not render the page properly while a mixture of encrypted and non-encrypted objects was being loaded from the website.

I finally found the time to sit down and work it out. There is a nice explanation on the Drupal website and to a large extent I followed the 'best possible' solution there. My post adds some extra points you should know.

Let's Encrypt

I've covered the topic of Let's Encrypt in some detail elsewhere on this blog, so there's no need for me to cover it again. Get your certificate from Let's Encrypt and make sure that your webserver (in my case apache2) is using the certificate and responds to the https:// form of your Drupal website.
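For completeness, on an apache2 host that boils down to something like the following - package and plugin names vary by release, so check your distribution:

$ sudo apt-get install python-certbot-apache
$ sudo certbot --apache -d moff.tech

certbot will rewrite the VirtualHost to reference the new certificate; confirm the https:// form of the site responds before going further.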

Go Full Encryption

As I mentioned above, the explanation on the Drupal website is a good place to start. I fiddled around with the Secure Login module and also with using $conf['https'] = TRUE; in the settings.php file.  In the end the "best possible security" option was the simplest and strongest solution.

Redirect Everything To HTTPS

Take the information from the Drupal website and implement the VirtualHost configuration to redirect all HTTP to HTTPS. Invoking the "Redirect" example on the VirtualHost, rather than the "Rewrite" example, is easier and more elegant.
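A minimal sketch of the Redirect variant, with my domain standing in as a placeholder:

<VirtualHost *:80>
    ServerName moff.tech
    # send every plain-HTTP request to the HTTPS site with a 301
    Redirect permanent / https://moff.tech/
</VirtualHost>

The real Drupal configuration then lives solely in the *:443 VirtualHost.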

Mixed Content Problems

It's at this point that you may encounter mixed content problems, meaning that the Drupal site will not render correctly and will show a yellow icon where the green padlock should be in the URL bar.

There are two things to do. The first is to check for objects that are actually loading over HTTP. If you're using Firefox, press F12 to open the developer tools and you can audit/search the source of the page for "http" objects. Do not confuse an http link to an external site (such as in an <a href..> tag) with content that is actually loading.

Mixed content means that your page has actually caused the browser to download an object over HTTP instead of HTTPS. In my case I discovered that a logo image present on every page had a hard-coded http:// address. Once I fixed that, I still had mixed content errors.

The next thing to change is the $base_url of your Drupal site, to default all URLs to https. This should have the effect of changing all the relative links that Drupal generates on the fly when rendering pages.

/etc/drupal/7/sites/moff.tech/settings.php:
$base_url = 'https://moff.tech';
In my case, these two steps got me the green padlock. It's possible that your install has hard-coded HTTP object sources inside pages; all of that would need to be tidied up, which, depending on the size of your site, could be a tedious exercise.

Good luck and please do share here any tips (war stories) from your experience!

Sunday, June 18, 2017

Drupal 7 and upgrading Media module to 2.x

For a few years now I've been hosting and maintaining a Drupal 7 installation for a non-profit club. I chose Drupal because I needed to provide a website that a non-technical person could easily publish content on.

I've written this post in the hope that it might just help someone and also to vent some frustration over the house-of-cards that is Drupal.

Why Drupal?

I did not want to run Wordpress because at the time, Wordpress had a sorry security and upgrade reputation. I'll note now that Wordpress has slowly rehabilitated that reputation, but still not to the point that I'd be willing to look after a Wordpress site myself.

Drupal is no walk in the park. In fact, I've been constantly annoyed by just how difficult maintaining a Drupal installation is. Layers upon layers of modules need to be installed before you have a dynamic and user-friendly website.

Drupal Drawbacks

Upgrading modules is generally easy but still a chore, in particular because I do not personally like the automated upgrade methods. For anyone maintaining many Drupal installations, or a popular site where quick adoption of security patches is mandatory, I am sure automated updates are a lifesaver.

I'll skip mentioning the scores of transient errors (Drupal love your system memory long time) and tedious frustrations (I cannot for the life of me work out how to get clean URLs working) that one encounters with Drupal in general, because I want to cover off some notes about the Media module and the version change to 2.x.

There was a major security problem discovered in the Media module version 1.x, which required an upgrade to 2.x. The upgrade instructions were so convoluted, and the user problems post-upgrade so frightful, that I elected to wait it out and let other people suffer the bugs and pain. Meanwhile, I didn't need to take any action to implement the workaround for the security issue; I had already restricted access for untrusted users:
Prevent anonymous or untrusted users from accessing the media browser through permissions configuration -- 7.x-2.8 release notes
The Drupal cron job pestered me for months about needing to upgrade, and it didn't feel good to ignore a necessary security update. I finally found the time to go back, review the upgrade process again and weigh the potential challenges. This exercise revealed an exemplar of how Drupal support and documentation can go woefully wrong.

Upgrade Documentation

The users and developers bravely struggled through the issues, attempting to document things within the framework of the Drupal website. This meant that in order to upgrade from 1.x to 2.x and understand all the potential issues before upgrading, I needed to review the following pages and hope that I'd discovered all the relevant documentation:
  1. Media
  2. media 7.x-2.8
  3. upgrading from 1.x to 2.x support
  4. Upgrading Media 7.x-1.x to 7.x-2.x
  5. Document Upgrade Path from Media 1.x to Media 2.x/3.x
  6. Comparison between Media 1.x and 2.x
  7. Critical database error after updating to this version
  8. Media + CKEditor + Media CKEditor recipe for setup
  9. File Entity (fieldable files) 
I was lucky in that I didn't have to mess around with the Views or Features modules or rework any fields (see link 4 above). This wasn't entirely luck: I've taken the approach with Drupal modules (and pages) of making customisations only when absolutely necessary, stemming from my abject fear of the nightmare scenario the Media upgrade represented.

How I Upgraded from 1.6 to 2.8

Well I hope this helps someone.
  1. Download the module tarball and extract it into the usual modules directory, overwriting the original module files.
  2. Download the File Entity module and extract it into the usual modules directory. File Entity is a new module to install because the functionality used to be a part of Media but has now been removed.
  3. Delete the file_entity directory in the Media module's directory. This directory is left over from the older 1.x Media install; see link 6 above for more info. (A shell sketch of these three steps follows.)
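As a shell sketch, the three steps above look roughly like this - the modules path is my assumption for a Debian/Ubuntu-packaged Drupal 7 and the tarball names are illustrative:

$ cd /usr/share/drupal7/sites/all/modules
$ tar -xzf ~/media-7.x-2.8.tar.gz          # step 1: overwrite the 1.x module files
$ tar -xzf ~/file_entity-7.x-2.x.tar.gz    # step 2: install the new File Entity module
$ rm -rf media/file_entity                 # step 3: remove the 1.x leftover directory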
At this point the site was generally working, but I was not able to run the database update mechanism (update.php) because of an issue I had not seen coming. My Drupal runs on an Ubuntu 14.04.5 LTS trusty install, and the database update mechanism would not run unless the System module version was 7.33. The Drupal version I had was 7.26.

Okay, they got me. Insert slow-clap here.

Since my Drupal install appeared to be operating okay, I considered waiting until a newer version of the drupal7 package was available and then running the database update. However, while I was doing some other maintenance on the Drupal install, the site came back with an error similar to the critical database error described in link 7 above, and it was immediately obvious that the database update needed to be run in order to restore the site.

I considered doing a restore from backup. Eventually, I elected to add another dpkg source and install a newer Drupal version. I selected a Debian source for this, although on reflection, taking the Ubuntu xenial source would surely have been the smarter option.

Before you do this yourself, read the rest of this post because I do not recommend mixing Ubuntu and Debian sources.

/etc/apt/sources.list:
deb http://ftp.debian.org/debian jessie-backports main
Then I ran a package update and installed drupal7 from this source. I got a couple of questions from the package installer about the database to use (as if it were a first install), but that was okay.
# apt-get -t jessie-backports install drupal7
I commented out the jessie-backports source and put a hold on the package, because at this stage I'm not clear what I'll do next.
# echo "drupal7 hold" | dpkg --set-selections
The xenial source is Ubuntu 16.04, so that is the next upgrade path for me, except that the xenial Drupal version is currently lower than the jessie-backports version I now have installed. To be clear, I believe I should have used the xenial source. Quite a mess I made here, but I can figure this one out later.

Meanwhile, because I'd hosed Drupal earlier, I couldn't access the database update page. I had to put the site into "update_free_access" mode so that I could run the update engine without actually being logged in.

/etc/drupal/7/sites/moff.tech/settings.php:
$update_free_access = TRUE;
Then I could access the update.php page and run the database update, which completed successfully, albeit with some vague messages, and the site was up and running. If you do this, remember to set the "free access" back to FALSE after you're done.

Final Notes

The Media upgrade from 1.x to 2.x is a monster and it's likely that everyone who undertakes it will strike their own unique set of issues.

The Media module has been folded into Drupal 8 and so upgrading to 8 might skip this headache.

The File Entity module has not been folded into Drupal 8 so if you need that functionality then you need to install it as a module. Under Drupal 7, you must install File Entity with the upgrade of Media.

Check your Drupal version before you run the Media upgrade. You need System version 7.33 before you can run the database update.