OpenWRT IPTables download rate limiting

By Gerco on Sunday 9 April 2017 05:09 - Comments are disabled
Category: -, Views: 1.878

As a parent of an 11-year-old girl with an iPhone (no mobile data contract), I have a problem. The problem is that, given the chance, she will sit and watch YouTube all day, every day. There are a few ways to solve that problem:
  1. Take the phone away
  2. Closely monitor the phone use and take it away after x minutes/hours
  3. Geek out
Being a geek, the only real option is of course the third one. Besides, option 1 would result in a lot of drama and option 2 is way too much work. My wife and I have better things to do than constantly check whether her x minutes/hours are up, and an 11-year-old neither wants nor should be under constant surveillance.

There are a few requirements to the solution that would work for us:
  1. Watching videos needs to be possible but limited. We have no problem with watching videos but all day every day is not OK.
  2. There should not be any need for manual action. If we have to take any manual actions there will be times that we forget and enforcement will be inconsistent, making it useless.
  3. Messages through WhatsApp, iMessage, Instagram, etc. should not be blocked. We live in suburban USA and there is very limited mobility for teens here (no sidewalks, no bike paths, so they have to be driven everywhere). Messaging is a very important tool for staying in touch with friends socially.
Rate limiting meets all these requirements. Her iPhone gets some quota (say 500MB) per day, and after that the connection slows down to 16kB/s. That is not enough for video, but messaging and email will work fine.

I looked around for ways of doing this with my OpenWRT router (Turris Omnia) and ran into a few issues. Most of the rate limiting examples limit only upstream bandwidth, and I mostly want to limit downstream instead. Some of the solutions used magic incantations for the SQM system (traffic shaping) that I found more complex than they were worth getting into. Then I found the CONNMARK iptables target, which fits the bill exactly:

First MARK all packets from my daughter's device:

iptables -t nat -A prerouting_rule -m mac --mac-source 1c:91:48:xx:xx:xx -j MARK --set-mark 0x0A -m comment --comment "iPhone SE"

If you want to include more devices in this rule, just add more MARK rules; you can add as many MAC addresses as you like. Once all desired packets are marked, we need to transfer this mark to the connection they are a part of:

# If a packet is marked, make sure the connection is marked as well
iptables -t nat -A prerouting_rule -m mark --mark 0xA -j CONNMARK --save-mark
# On incoming packets, make sure to read the connection mark back into the packet
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark

Because connection tracking is required for NAT anyway, this doesn't cause a lot of extra load on the router. All open connections from one of the marked devices will now be marked with the value 0xA, which we will use later to limit the download speed.

To do the actual rate limiting I used the hashlimit module. It can be found in opkg so it's easily installed on OpenWRT.

iptables -A forwarding_rule -m mark --mark 0xA -m conntrack --ctstate ESTABLISHED,RELATED -m hashlimit --hashlimit-name "Over quota" --hashlimit-above 24kb/s -j DROP

Please note: The above rules are all OpenWRT specific since the chain names used in them are created by the OpenWRT firewall scripts. Substitute appropriate chains if using this method on some other Linux distribution.

Now that we have a rate limit in place, we need a way to turn it on and off based on the amount of bandwidth consumed. There are ways of doing this with pure iptables, but since those counters are not very reliable (they get wiped at every firewall reload), I decided to use Majordomo, which is included by default on my Turris Omnia router. It writes bandwidth statistics to /tmp in CSV format that we can easily use.

I cobbled together a quick-and-dirty Python script that checks whether the quota has been exceeded, and I call it from cron every minute. The script creates a marker file when the quota is exceeded; /etc/firewall.user checks for that file and creates the above iptables rules.
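
For reference, the quota check itself is only a few lines. This is a hedged sketch rather than my exact script: the CSV column layout (source MAC in the first column, bytes downloaded in the last) is an assumption and will need adjusting to match Majordomo's actual output in /tmp:

```python
import csv
import io

QUOTA_BYTES = 500 * 1024 * 1024  # 500MB daily quota

def bytes_used(csv_text, mac):
    """Sum the downloaded bytes for one MAC address from a CSV dump.

    Assumed layout: MAC address in the first column, byte count in the
    last; adjust the indices to match the real Majordomo files.
    """
    total = 0
    for row in csv.reader(io.StringIO(csv_text)):
        if row and row[0].strip().lower() == mac.lower():
            total += int(row[-1])
    return total

def over_quota(csv_text, mac, quota=QUOTA_BYTES):
    """True when the device has used more than its daily quota."""
    return bytes_used(csv_text, mac) > quota
```

When the check comes back true, the script touches the marker file and reloads the firewall so /etc/firewall.user can add the DROP rule; when a new day starts and the check is false again, it removes the marker and reloads once more.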

For now, this works well. I'm sure many improvements can be made, and I'd like to get rid of the Python script and the cron job. There are probably ways of limiting download speed using only iptables that I haven't found yet, but it was a fun evening.

One last note: if you have IPv6 connectivity at home, make sure to duplicate all these rules in ip6tables as well. YouTube works fine over IPv6.

Netatalk 3.1.8 on Ubuntu 15.10 via PPA

By Gerco on Thursday 4 February 2016 05:25 - Comments (7)
Category: -, Views: 3.870

(This post serves mostly as personal documentation and as a place to host relevant links)

Netatalk is a daemon that provides an Apple Filing Protocol (AFP) service on Linux or another *nix machine, allowing Macs to use it as a file server and Time Machine backup target. This is great if you have a few Macs in your house and don't want to buy a Time Capsule big enough to back them all up. With Netatalk you can back them up to your Linux file server.

Ubuntu only provides Netatalk version 2.x in its repos, and that version has some issues with Time Machine reliability. Because of that, I was looking to install the latest version of Netatalk on my Ubuntu 15.10 server (to, hopefully, see the dreaded "Time Machine must create a new backup for you" message a little less often). To my great surprise there were some PPAs for older Ubuntu versions, but none for 15.10 that I could find.

I found some instructions to install from source, but I didn't want to do that. Installing from source requires installing a boatload of build dependencies that I don't want on my "production" server. Besides, on a system that is based on a package manager, installing from source should be avoided if at all possible. It's asking for trouble down the line.

I decided to try to create my own PPA for this Netatalk/Ubuntu combination and, to limit the cruft I had to install on my server, created a Docker-based environment for building the package, based on an existing PPA for an older version. Fast-forward two days of digging into the depths of the Ubuntu/Debian build system and how PPAs work, plus a few hundred failed builds, and here is the result.


To install:

$ sudo apt-add-repository ppa:gercod/netatalk
$ sudo apt-get update
$ sudo apt-get install netatalk

See http://netatalk.sourcefor....1.8_on_Ubuntu_15.10_Wily for initial configuration advice. Make sure to enable the service at boot:

$ sudo systemctl enable netatalk

Anonymous user counting through DNS

By Gerco on Monday 27 October 2014 22:00 - Comments (5)
Category: -, Views: 4.407

Imagine the following scenario:
You are maintaining a free application and you have a fair number of users, but you don’t know exactly how many. Because you want to understand your user base a bit better, you would like to count the number of users. This is the situation I found myself in recently.

My requirements for a system like this were as follows:
  • Must work through restrictive corporate proxies;
  • Collect the minimum amount of data possible;
  • Do not annoy users;
  • Little application size increase;
  • Count number of users as accurately as feasible.
I investigated a few options; most were unacceptable to me for various reasons. They would either bloat my application, collect too much data, or cause issues with the user’s network configuration. A lot of my users are behind a corporate proxy and my application is implemented in Java. Those of you who are familiar with Java will know that HTTP requests will, more or less randomly, pop up a proxy authentication dialog. Since I didn’t want to annoy my users, I needed another solution.

I came up with a solution that fulfills most of the above requirements:
  • Works through proxies because the application never makes a request to any server outside the corporate network;
  • Collects only a random unique id that gets created on application startup. No user data is included whatsoever, not even the user’s IP address.
  • Cannot pop up authentication dialogs or cause application delays due to firewalls restricting access to outside;
  • Does not require any new libraries, since all the code needed is already built into every OS.
This is how it works:
  1. On application startup, the application creates a random uuid;
  2. The application then attempts to resolve $uuid.stats.domainname.tld through DNS
  3. The DNS zone file for domainname.tld specifies a custom DNS server for stats.domainname.tld.
  4. stats.domainname.tld is running a custom DNS server that logs queries to $uuid.stats.domainname.tld.
There you have it: just count the unique ids in the DNS server’s log file and you’re done. This even has the added bonus of working on machines that have no internet access for security reasons. Even though they cannot connect anywhere outside the corporate network, they can usually still resolve DNS queries! Now on to the implementation, which is surprisingly simple:

In the client application, all you need to do is generate a unique id and resolve it. In Java, this can be done as follows:

public String getLatestVersionNumber(String uuid) throws NamingException {
  Hashtable<String, String> env = new Hashtable<String, String>();
  env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
  DirContext ctx = new InitialDirContext(env);
  Attributes attrs = ctx.getAttributes(uuid + ".stats.domainname.tld", new String[] { "TXT" });
  Attribute attr = attrs.get("TXT");
  return attr.get().toString();
}

In this case, I’m retrieving the TXT record, since that allows my custom DNS server to return some information to the user, like the most recent version of the application (to alert the user that an update is available, for example).
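
The same check-in works from any language that can resolve a hostname. As a minimal sketch in Python (the stats.example.com domain is a placeholder, and a plain A lookup is used instead of fetching the TXT record, so no version information comes back):

```python
import socket
import uuid

def check_in(stats_domain="stats.example.com"):
    """Fire-and-forget DNS check-in.

    Generates a random id and tries to resolve it under the stats
    domain. The answer is irrelevant; the query reaching the custom
    DNS server is what gets counted.
    """
    name = "%s.%s" % (uuid.uuid4().hex, stats_domain)
    try:
        socket.gethostbyname(name)  # a plain A lookup is enough to be logged
    except OSError:
        pass  # NXDOMAIN or no network; nothing to do either way
    return name
```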

DNS configuration
The client code above causes the application to send a DNS request to resolve the domain name asked for: $uuid.stats.domainname.tld. In order to resolve this, the operating system will first need to resolve (in order): tld, domainname.tld and finally stats.domainname.tld. We can’t (or don’t want to) control which DNS server serves .tld or domainname.tld, so we enter the following records to direct queries for *.stats.domainname.tld to our custom server:

; Set nameserver for stats.domainname.tld. to stats-ns.domainname.tld
stats     IN     NS     stats-ns

; Set IP address for stats-ns.domainname.tld. to 192.0.2.53
stats-ns  IN     A      192.0.2.53

Replace 192.0.2.53 with the IP address of the machine that you will be running your custom DNS server on. You will need root access on that server, since a DNS server must run on port 53; a shared hosting server will not work.

DNS Server
On to the meat of the matter. Since I like experimenting with programming languages and my current language du jour is Go, I implemented this server in Go, using the excellent dns library from Miek Gieben. The server is based on his “reflect” example (with most of the code removed). The only interesting part is below:

func (h DNSHandler) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {
     m := new(dns.Msg)
     m.SetReply(r)

     for _, q := range r.Question {
          // We only respond to TXT queries. Anything else does not exist on this DNS server
          if q.Qtype == dns.TypeTXT {
               var responseText string

               if strings.HasSuffix(q.Name, "."+h.config.DomainName) {
                    // Record the check-in for this unique id
                    uuid := q.Name[:strings.Index(q.Name, ".")]
                    _ = *newCheckIn(uuid, time.Now())

                    responseText = getLatestApplicationVersion()
               }
               // else: the query is for config.DomainName itself, reply with nothing

               if len(responseText) > 0 {
                    t := new(dns.TXT)
                    t.Hdr = dns.RR_Header{Name: q.Name, Rrtype: dns.TypeTXT, Class: dns.ClassINET, Ttl: 3600}
                    t.Txt = []string{responseText}
                    m.Answer = append(m.Answer, t)
               }
          }
     }

     w.WriteMsg(m)
}

This simplified code stores the unique ids received and replies with the latest application version.
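
Whatever exact form the server’s log takes, the counting step is plain deduplication. A sketch, assuming each log line contains the queried name with the 32-hex-character uuid as its first label (adjust the pattern to your actual log format):

```python
import re

# 32 hex characters directly followed by ".stats." (format assumed)
UUID_RE = re.compile(r"\b([0-9a-f]{32})\.stats\.", re.IGNORECASE)

def count_unique_ids(log_lines):
    """Count distinct uuids in an iterable of query-log lines."""
    seen = set()
    for line in log_lines:
        match = UUID_RE.search(line)
        if match:
            seen.add(match.group(1).lower())
    return len(seen)
```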

That's all there is to it! Completely anonymous user counting without collecting any personal data, not even the user’s IP address. Naturally, this only counts instances of the application and not the number of humans using it, but it’s as close as you’re likely to get.

Free university! (Well, almost)

By Gerco on Saturday 1 September 2012 00:26 - Comments (14)
Category: -, Views: 6.989

I'm probably the last person on earth to hear about this, but in case I'm not, here goes:

In my endless quest for knowledge I recently found Coursera. It's a website that offers online university courses, taught by lecturers from universities like Stanford, Princeton and so on (not MIT, they have their own platform).

The courses on offer are largely IT-related, but certainly not all of them. Many courses (though not all) award a certificate when you pass the homework assignments and the exam. That can mean, for example, that you have to score at least 70% before you get a certificate.

I'm currently taking Cryptography, which started a few days ago. If you join quickly you can still easily keep up with the rest of the class; the first homework is due on 17 September. This course is mostly about symmetric ciphers; the follow-up course (Cryptography II, starting in January 2013) digs deeper into the material and also covers asymmetric ciphers in more detail.

My impressions after the first week are positive. The material is not easy, but quite understandable, and the assignments are challenging and interesting (the first programming assignment is breaking an OTP cipher).

I can heartily recommend it,
Happy learning!

From iCloud bug to a new iPhone

By Gerco on Saturday 4 August 2012 01:50 - Comments (2)
Category: -, Views: 4.968

TL;DR version: got a new iPhone for free thanks to a totally unrelated problem, yay!

I live in the US; we have AT&T here, and Apple stores with genius bars.

A few days ago an iPhone 4s landed on my workbench that had been having battery problems for months. It would be empty within 4-6 hours, and for the last few days the thing had also been consuming incredible amounts of data (400MB/day) while its owner simply left it lying on the table, untouched. This data usage only became apparent when the user temporarily had no WiFi at home because of a move.

After calling AT&T and following their instructions about turning off diagnostic reports, fetching email, closing background apps and recreating the iCloud and email accounts, we let the thing sit for a day and did absolutely nothing with it. Result: roughly 300MB of data burned, so that wasn't it.

Next step: hook the thing up to a Mac with Xcode and have a look at the Console. There are some interesting entries in there:

Aug  2 14:53:20 unknown dataaccessd[52] <Notice>: xxxxxx|CardDAV|Error|Yikes: failedToFinishInitialSync:error: Error Domain=CoreDAVErrorDomain Code=7 "The operation couldn't be completed. (CoreDAVErrorDomain error 7.)"
Aug  2 14:53:20 unknown wifid[28] <Error>: WiFi:[xxxxxxxxx.xxxxxxx]: Client dataaccessd set type to background application
Aug  2 14:53:22 unknown dataaccessd[52] <Notice>: xxxxxx|CardDAV|Error|Yikes: failedToFinishInitialSync:error: Error Domain=CoreDAVErrorDomain Code=7 "The operation couldn't be completed. (CoreDAVErrorDomain error 7.)"
Aug  2 14:53:22 unknown wifid[28] <Error>: WiFi:[xxxxxxxxx.xxxxxxx]: Client dataaccessd set type to background application

These log lines repeated themselves every few seconds. Something is obviously going wrong with the iCloud sync, but what? Turning iCloud off helped, but that is of course not a real solution. Setting up the iCloud account again didn't help, and even completely restoring the phone made no difference at all. On to the genius bar!

Arriving at the nearest Apple store, we ask the local genius to look at the problem. The genius hears me out briefly and checks the iPhone's serial number. Without hesitating (and without looking at the problem), the good man grabs a brand-new iPhone 4s and starts the exchange procedure. When asked, he doesn't know what the problem is either, but he is certain a new phone will fix it; who am I to argue?

One new iPhone later, we set up the user's iCloud account and everything works fine! Unfortunately, closer inspection shows that only a few dozen of the 386 contacts made it onto the phone. Looking a bit further, we see that the ~340 missing contacts aren't on iCloud either. The genius transfers the contacts over from the old phone the old-school way, and we wait a while for the phone to sync everything.

To our great surprise, the new iPhone syncs just 20 contacts with iCloud and then immediately falls into the loop from the logs above. So there is something special about these contacts that iCloud chokes on. It causes an endless synchronization loop that drains the battery and burns through data massively!

As it turns out, the contacts on this phone were once copied (by that very same Apple store) from a Windows Mobile phone (an HTC Diamond) to the iPhone, and among them was a pile of notes with Russian text. Those notes got corrupted during that process, and iCloud apparently chokes on them rather badly. After deleting all the corrupted notes the phone syncs fine and everything works as it should, with the caveat that the user walks out of the store with a new iPhone even though there was nothing wrong with the original phone at all.

Who we should thank for the new phone isn't entirely clear, but thanks, Apple/Microsoft/HTC!