mrb's blog


Proof-of-concept for a super-distributed CDN storing data in DNS records

Keywords: dns hack network performance web

I wrote a Chrome extension that uses DNS instead of HTTP to fetch web content. It implements the fake TLD .cdn53: when you visit a .cdn53 URL, the extension intercepts the request and sends a DNS query for the TXT record of the hostname; the response contains the HTML content. As simple as that.

It also works for URLs containing a path. It works with relatively large resources (each TXT record can contain up to ~65 kB). It works for any content, including binary data: JPEG, GIF, JavaScript, CSS, etc.

A system like CDN53 has quite a few advantages over HTTP:

Super-distributed, super-reliable. DNS is the most distributed content delivery network in the world. DNS records are cached by millions of worldwide DNS resolvers. Client machines are often configured with two or more resolvers for reliability. Heck, a home router running its own resolver can still hold and serve cached DNS records even when the Internet connection is down for a few seconds or minutes.

Handles more hits per second. A website hosted via CDN53 on a single physical server running the authoritative DNS server could easily handle millions of page hits per second. Assuming visitors come through 1000 unique resolvers (think ISP-level resolvers), assuming each of these resolvers handles 1000 qps [1], then this is already 1 million page hits per second. The kicker? Even with a low TTL of 20 seconds, these 1 million page hits per second are reduced to a paltry 50 qps to the authoritative DNS server (1000 resolvers contacting it once every 20 seconds).
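The arithmetic behind this claim is easy to sanity-check, using only the figures from the paragraph above:

```python
# Back-of-the-envelope check of the numbers above (all values from the text).
resolvers = 1000          # unique ISP-level resolvers
qps_per_resolver = 1000   # queries each resolver absorbs per second
ttl = 20                  # TTL of the TXT records, in seconds

# Client-facing capacity: every resolver serves its own clients from cache.
client_hits_per_second = resolvers * qps_per_resolver

# Each resolver only refreshes the record once per TTL window, so the
# authoritative server sees one query per resolver per TTL.
authoritative_qps = resolvers / ttl

print(client_hits_per_second)  # 1000000
print(authoritative_qps)       # 50.0
```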

Lower latency thanks to fewer network round trips. For a first-time hit to a host whose IP address is not known by the browser, a standard web request involves at least three network round trips:

  1. DNS query for "A" or "AAAA" record... DNS reply with IP address
  2. TCP SYN to port 80... TCP SYN/ACK
  3. TCP ACK followed by HTTP GET... HTTP reply with web content

Compare this to CDN53, which is up to 3x faster since it needs only one round trip:

  1. DNS query for "TXT" record... DNS reply with web content

This single round trip holds only if the response (TXT data) fits in a 512-byte UDP packet, because resolvers usually force the DNS client to retry over TCP when the response is larger than 512 bytes (see section 4.3 of RFC 6891).

Lower latency thanks to network proximity. DNS resolvers are often located geographically close to web clients, much closer than the average web server. So even if DNS resolution has to take place over TCP (e.g. if the TXT data reply would be over 512 bytes), a TCP handshake to a resolver will complete faster than a TCP handshake to an average web server. In an ideal scenario (a home router running its own resolver), the client will be only ~0.1 ms away from the resolver, while an average web server will be farther away: tens of ms (traditional CDN) or hundreds of ms (a web server halfway around the world).

Design of CDN53

The extension maps URL paths to DNS records by "underscore-encoding" them, since underscore is the only non-alphanumeric character technically allowed in DNS labels:

URL → DNS record
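The exact encoding is not spelled out here, so the following is only a plausible sketch: replace every non-alphanumeric character of the path with an underscore. The real scheme may differ, e.g. in how it avoids collisions between paths:

```python
import re

def underscore_encode(path: str) -> str:
    """Map a URL path to a DNS label. This is a guess at the scheme:
    the post only says paths are "underscore-encoded", so the exact
    mapping may differ from what txtresolver actually does."""
    # Drop the surrounding slashes, then replace anything that is not
    # a letter or digit with an underscore.
    return re.sub(r"[^A-Za-z0-9]", "_", path.strip("/"))

print(underscore_encode("/path/to/res.html"))  # path_to_res_html
```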

The TXT data is formatted as follows:

    version:mime-type:data

Where version is always "1", mime-type is the MIME type of the content, and data is the binary content. The DNS protocol technically allows any binary content in TXT data, but because some DNS software is unable to handle 0x00 (NUL) bytes, I encode them using the 0x1b (ESC) byte: 0x1b is encoded as 0x1b 0x1b, and 0x00 is encoded as 0x1b 'n'.
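This escaping scheme round-trips cleanly. Here is a sketch of it in Python (the function names are mine, not txtresolver's):

```python
ESC = b"\x1b"

def escape_nuls(data: bytes) -> bytes:
    # Double ESC first, then turn NUL into ESC 'n', as described above.
    return data.replace(ESC, ESC + ESC).replace(b"\x00", ESC + b"n")

def unescape_nuls(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0x1B:
            # 0x1b 0x1b -> 0x1b, and 0x1b 'n' -> 0x00 (assumes well-formed input)
            out.append(0x1B if data[i + 1] == 0x1B else 0x00)
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def pack_txt(mime: str, payload: bytes) -> bytes:
    # version is always "1"; the three fields are colon-separated.
    return b"1:" + mime.encode() + b":" + escape_nuls(payload)

print(pack_txt("text/html", b"hello world"))  # b'1:text/html:hello world'
```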

When writing the CDN53 extension, I discovered that Chrome does not provide a DNS API to resolve TXT records. So I wrote a native application named txtresolver to do it, and the extension uses native messaging to communicate with txtresolver. txtresolver is written in Python and was tested on Linux only.
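Chrome's native messaging protocol frames each message as a 4-byte length in native byte order followed by a UTF-8 JSON body, exchanged over the host's stdin/stdout. A minimal sketch of that framing follows; the message fields shown are illustrative, and the actual schema txtresolver uses may differ:

```python
import io
import json
import struct

def write_message(stream, obj) -> None:
    """Frame a message the way Chrome's native messaging expects:
    a 4-byte length in native byte order, then the UTF-8 JSON body."""
    body = json.dumps(obj).encode("utf-8")
    stream.write(struct.pack("=I", len(body)))
    stream.write(body)

def read_message(stream):
    raw_len = stream.read(4)
    if len(raw_len) < 4:
        return None  # Chrome closed the pipe
    (length,) = struct.unpack("=I", raw_len)
    return json.loads(stream.read(length).decode("utf-8"))

# Round trip through an in-memory pipe instead of real stdin/stdout.
# The field names here are made up for illustration.
pipe = io.BytesIO()
write_message(pipe, {"name": "example.cdn53", "type": "TXT"})
pipe.seek(0)
print(read_message(pipe))  # {'name': 'example.cdn53', 'type': 'TXT'}
```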

I also discovered a bug in the PowerDNS resolver while developing CDN53: it supports any byte in TXT data except 0x7f, which causes resolution to fail ("STL error: Unable to parse DNS TXT '"\x7f"'"). I wrote a one-line patch to fix it.

Steps to experiment with CDN53

  1. Download cdn53.tar.gz and extract the archive.
  2. Register txtresolver as a native messaging host for Chrome: run $ ./txtresolver/ (this script merely creates the file ~/.config/google-chrome/NativeMessagingHosts/com.mrb.txtresolver.json)
  3. Close and re-open Chrome.
  4. Open chrome://extensions and drag and drop cdn53.crx into the page (the .crx is pre-packaged and included in the tarball)
  5. Browse the CDN53 resources I set up on my site:
  6. Optionally, create CDN53 records for your own site. They should be set up like this:
    $ dig +short -t txt
    "1:text/html:hello world..."

If you want to re-pack the extension .crx file from the source code, you then need to update the extension ID in txtresolver/ and re-run $ ./txtresolver/ because the native application needs to declare which extensions are allowed to communicate with it.
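For reference, a Chrome native messaging host manifest has roughly this shape; the path and extension ID below are placeholders, and the install script fills in the real values:

```json
{
  "name": "com.mrb.txtresolver",
  "description": "Resolve TXT records for the CDN53 extension",
  "path": "/path/to/txtresolver",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://<extension-id>/"
  ]
}
```

The allowed_origins list is why the extension ID matters: Chrome refuses to connect the extension to the host unless its ID appears there.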

If you run a network sniffer while accessing a .cdn53 site, you will notice that Chrome attempts to resolve the .cdn53 name (which fails), before running the chrome.webRequest handlers. This is normal because there is no way for extensions to prevent Chrome from resolving hostnames, even when blocked by a webRequest handler.

[1] 1000 queries per second is easily doable even for a low-end resolver. For comparison, Google Public DNS handled 1.5 million qps on average as of March 2013, and the number is even higher today.


Ahmed Kamal wrote: So what you've built is like a DNS tunnel tool except chrome integrated. Check out this project 05 Oct 2014 11:32 UTC

mrb wrote: Ahmed: my proof-of-concept and iodine serve 2 radically different purposes. They are not really comparable projects. 07 Oct 2014 02:09 UTC

imran wrote: No extension for Chrome on Windows? 24 Jul 2015 10:12 UTC

mrb wrote: Sorry, this was just a quick proof of concept. I didn't take the time to make something for Windows. 26 Jul 2015 16:28 UTC