mrb's blog

Release of a HA blogging platform & finally a new design for this blog

Keywords: blog web high-availability reliability redundancy distributed decentralized

High availability (HA), distributed, lightweight, static site yet with comments, with a responsive UI. These are a few characteristics of the ideal blogging platform I have always desired. So I built it. Warning: this is proof-of-concept code only.

I exported data from my old PivotX instance. I wrote server-side code to handle blog comments and distribute them across multiple servers. I designed a responsive UI. And after many hours of work I finally switched to—wait I need a name for my project… hablog!

Visual design

First, let’s talk about design & vertical space (click to enlarge):

Comparison of vertical space usage on various websites

I took screenshots of 8 mobile sites on an Android phone with a 720×1280 display running Chrome.

The leftmost screenshot shows that content on my blog typically starts at 350–400 pixels from the top of the screen, whereas most sites start at 600–900 pixels, or in extreme cases they use more than the entire screen to display ads and zero content *cough*BBC*cough*. I dislike waste of vertical space and I think my design gives readers a chance to engage with more of the post before scrolling down and before waiting for the whole page to load.

Notice also that, on a small screen, images on my blog can be as wide as the full width of the screen. Why waste margin space?

Responsive layout

On larger screens the responsive UI transitions to 2 columns:

Desktop layout

(Compare this to the previous design of my blog.)

The new design allows the post’s first sentence to start right at the top of the page, maximizing (again) the amount of content shown to readers while keeping a reasonable line length and without scrolling down.

Another thing. I am lucky to have relatively high-quality comments on my blog. So instead of relegating them to the bottom of the page, the 2-column layout lets me showcase them by tucking them alongside the post—like comments in a Google Doc or MS Word document. The comment submission form is also right there at the top, to entice readers to leave comments without hunting for a form at the bottom of the page.

Finally, I needed a mechanism to emphasize my own comments. So I came up with the idea of this vertical orange line between the 2 columns that swerves around my replies. Doing so groups them with the post, which is perfectly logical because they share authorship—me.

No sign-in

My blog requires no sign-in in order to reduce friction when submitting a comment.


Many modern web designs adopt a sans-serif font for titles and headers, and a serif one for the main body. I like that.

For titles & headers I picked Raleway. Notice its elegant “fi” ligature in “finally” in this post’s title.

For the main body I picked Noto Serif which, by the way, is the default serif font for Chrome on Android. It has great Unicode coverage, so lines of text tend to keep the same line-height even when they contain various Unicode characters. I was annoyed at how many other popular fonts do not provide, for example, a glyph for U+2126 OHM SIGN (Ω), which I use here. If my custom font did not provide this glyph, text rendering would fall back to the browser’s default serif font, and if that fallback has a taller line-height than my custom font, the line containing the glyph ends up taller than the other lines, which is visually unappealing.


Low-contrast sites suck. And black text on white background hurts the eyes. So I chose black text on very light grey background (#f0f0f0).

As to the color theme, it is grey & orange. Maybe not the best? I am open to suggestions.


The visual design is the only thing visible to my readers. But what about the technical guts of the site?


Six years ago I described the architecture I wanted:

“I will soon have 2 servers colocated in 2 datacenters on 2 different continents, with having 2 A records for these 2 servers. Browsers try to connect to the 2nd if the 1st fails; and with DNS pinning they tend to stick with the one that works for the remaining of the browsing session. Doing it this way is a cheap way of providing HA for a website.”

Today the cost of VPS and dedicated ARM servers is so low that I decided to run my site on 3 servers, from 3 different providers, on 3 continents. This is why my domain resolves to 3 IP addresses:

  1. Digital Ocean in the US ($5/month VPS)
  2. Scaleway in Europe (3€/month dedicated ARM server)
  3. Vultr in Asia ($5/month VPS)
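
Multiple A records provide HA because a browser walks the address list until one server answers. This behavior can be sketched as follows; the `first_working` function name and the callback-style `connect` argument are purely illustrative (they stand in for opening a TCP connection), and the IPs are documentation addresses, not my servers':

```python
# Sketch of the client-side failover that multiple A records rely on.
# "connect" is a stand-in for attempting a TCP connection to one IP.

def first_working(ips, connect):
    """Return the first IP for which connect(ip) does not raise OSError,
    mimicking a browser trying each A record in turn."""
    last_error = None
    for ip in ips:
        try:
            connect(ip)
            return ip
        except OSError as e:
            last_error = e  # this server is down; try the next one
    raise OSError("all servers down") from last_error
```

With 3 A records, even if the first address times out, the client eventually settles on a working server, which is the whole point of the setup.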

On the software side, I keep all my posts in a local Mercurial repository, and use the static site generator Jekyll to generate the site locally. The pages look complete except that they are static, not dynamic: they lack the blog comments. I place this tag in each page at the location where I would like comments to be inserted:

<!--hablog-insert-comments-->
Remember this tag for now. I will come back to it later.

After generating the site locally I run a bash script to rsync the files to my 3 servers, except with a twist…

The static content (image assets, home page index.html) is rsync'd to the web server’s document root, /foobar/html.


However the dynamic content (post pages that will contain the reader comments but for now only have the <!--hablog-insert-comments--> tag) is rsync'd to a different directory, /foobar/db.


Keep in mind this is all done in parallel on 3 different servers.
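
The publish step can be sketched roughly as follows. The real script is bash, not shown in this post; here the server hostnames, the `_site` source paths, and the function names are all assumptions for illustration — only the /foobar destination directories come from the post:

```python
# Rough sketch of the publish step: static files to the document root,
# post pages (still holding the placeholder tag) to /foobar/db.
import subprocess

SERVERS = ["us.example.net", "eu.example.net", "asia.example.net"]  # hypothetical

def rsync_cmd(src, server, dest):
    # --archive preserves permissions and timestamps; the trailing slash
    # on src copies the directory's contents rather than the directory.
    return ["rsync", "--archive", src + "/", f"{server}:{dest}/"]

def publish(dry_run=True):
    for server in SERVERS:
        static = rsync_cmd("_site/static", server, "/foobar/html")
        dynamic = rsync_cmd("_site/posts", server, "/foobar/db")
        for cmd in (static, dynamic):
            if dry_run:
                print(" ".join(cmd))
            else:
                subprocess.run(cmd, check=True)
```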

Now hablog (high availability blog) comes into play. It is made of 3 components: watch-db, hablog.fcgi, and sync-daemon (total ~400 lines of Python code and ~50 lines of bash).


Each server runs a daemon watch-db that uses inotify to watch the content of /foobar/db and whenever files are rsync'd there, they are processed and copied to the web server’s document root /foobar/html. The processing step replaces the <!--hablog-insert-comments--> tag mentioned earlier with the actual comments.
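
The processing step can be sketched like this. The file layout and the placeholder tag come from the post; the rendering is simplified, and for brevity the inotify loop is replaced by a plain function call (the real watch-db reacts to inotify events):

```python
# Sketch of watch-db's processing step: merge a page from /foobar/db with its
# comment files and write the final page under /foobar/html.
import html
import json
import pathlib

TAG = "<!--hablog-insert-comments-->"

def render_comments(post_dir):
    parts = []
    # Comment files are named <timestamp>.<comment-id>; sorting by filename
    # approximates chronological order for equal-width timestamps.
    for f in sorted(post_dir.iterdir()):
        if f.name == "index.html":
            continue  # the page itself, not a comment file
        data = json.loads(f.read_text())
        parts.append(f"<p><b>{html.escape(data['user'])}</b>: "
                     f"{html.escape(data['comment'])}</p>")
    return "\n".join(parts)

def process_post(db_dir, html_dir, post_id):
    src = db_dir / post_id / "index.html"
    out = html_dir / post_id / "index.html"
    out.parent.mkdir(parents=True, exist_ok=True)
    page = src.read_text().replace(TAG, render_comments(db_dir / post_id))
    out.write_text(page)
```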


Where are the actual comments fetched from? When a comment is submitted via a POST /hablog request, a simple FastCGI server, hablog.fcgi, handles the request, verifies the Google reCAPTCHA, and writes the comment as a JSON file under /foobar/db/<post-id>/:

  "user": "john",
  "comment": "Yes I was aware...",
  "ip": "",
  "user-agent": "Mozilla/5.0...",
  "removed": 0

(I will explain “removed” in a moment.)
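
The storage side of the handler can be sketched as below. The FastCGI plumbing and the reCAPTCHA check are omitted, and the `save_comment` function name is mine; only the directory layout and filename convention come from the post:

```python
# Sketch of what hablog.fcgi stores once a comment passes reCAPTCHA.
import json
import pathlib
import time

def save_comment(db_dir, post_id, comment_id, fields):
    post_dir = pathlib.Path(db_dir) / post_id
    post_dir.mkdir(parents=True, exist_ok=True)
    # Filename convention from the post: <timestamp-since-epoch>.<comment-ID>
    path = post_dir / f"{int(time.time())}.{comment_id}"
    path.write_text(json.dumps({**fields, "removed": 0}))
    return path
```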

And because watch-db watches /foobar/db, it notices both new posts (/foobar/db/<post-id>/index.html) and new comments (/foobar/db/<post-id>/...), and will be able to replace the <!--hablog-insert-comments--> tag with the comments in order to regenerate the final HTML file under /foobar/html.


How are the comment files synchronized between my 3 servers? This is the role of sync-daemon: a simple cronjob which runs every few minutes on each server and rsyncs only the comment files to/from the other 2 servers. If any 1 of the 3 servers goes down, the remaining 2 online servers still synchronize comment files between each other. When the offline server comes back online, whichever of the 2 other servers runs the cronjob first will resync all the comments to the resuscitated server.

This is the crux of how hablog implements high availability: the 3 servers form a distributed redundant architecture and are independent from each other.

Note that sync-daemon does not use the rsync --delete option. Comment files are never modified, never deleted, only created once. As a result synchronization conflicts are impossible by design (KISS).
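
One cron run of sync-daemon on a given server amounts to a pull and a push per peer, with no --delete. The peer hostnames and the `sync_cmds` helper are hypothetical; the /foobar/db path and the deliberate absence of --delete come from the post:

```python
# Sketch of one sync-daemon cron run: pull peers' comment files, push ours.
# No --delete: comment files can only ever be added, never removed by sync.
def sync_cmds(peers, db="/foobar/db"):
    cmds = []
    for peer in peers:
        cmds.append(["rsync", "--archive", f"{peer}:{db}/", f"{db}/"])  # pull
        cmds.append(["rsync", "--archive", f"{db}/", f"{peer}:{db}/"])  # push
    return cmds
```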

Comment deletion

Occasionally a comment does need to be deleted, such as a spam that circumvented reCAPTCHA. But wait, I said comment files are never modified and never deleted…

Here is another crucial design aspect of hablog. This one makes comment modification/deletion possible.

hablog names the comment files according to the convention <timestamp-since-epoch>.<comment-ID>, e.g. 1470000000.633ce99f46a21520b67a3022469241fa. When watch-db processes a post, it sorts all comments by timestamp (that is how they end up in chronological order in the final HTML). However, watch-db also lets a more recent comment file overwrite the JSON attributes of an older file that has the same comment ID.

For example if a comment is saved as 1470000000.633ce99f46a21520b67a3022469241fa, but a file named 1470000001.633ce99f46a21520b67a3022469241fa exists (notice the newer timestamp) and contains:

{ "removed": 1 }

then the newer file overwrites the removed attribute from 0 to 1, and the code considers the comment deleted. Any of the other JSON attributes (user, comment, etc.) can be overwritten by a newer comment file; for example, comment could be overwritten to edit the text content.
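
The overwrite rule can be sketched as a small merge function: group files by comment ID, apply them oldest first, and drop comments whose latest state is removed. The `merge_comments` name and the dict-based input are mine for illustration:

```python
# Sketch of collapsing multiple files sharing a comment ID: newer JSON
# attributes overwrite older ones, and removed comments are filtered out.
def merge_comments(files):
    """files: mapping of '<timestamp>.<comment-id>' filename -> JSON dict."""
    merged = {}
    for name in sorted(files, key=lambda n: int(n.split(".")[0])):
        comment_id = name.split(".", 1)[1]
        merged.setdefault(comment_id, {}).update(files[name])
    # A comment whose latest state has removed == 1 is not rendered.
    return {cid: d for cid, d in merged.items() if not d.get("removed")}
```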

Deterministic comment IDs

Comment IDs are generated server-side by hashing together the post ID + the post’s last comment ID (if you inspect the HTML, this is the seed form input value) + the username + the content of the comment. Therefore if a browser submits the same comment to 2 or more servers (e.g. due to network glitches causing the browser to retry the POST request against 2 or more of the domain's IP addresses), the servers each generate the same comment ID and each save the file in /foobar/db, which does not cause a data discrepancy. At worst this results in 2 files with a possibly different <timestamp-since-epoch> in the filename, but containing the same content, which is harmless (per the logic of newer JSON data overwriting older JSON data).
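
The ID derivation can be sketched like this. The post does not say which hash hablog uses; MD5 is assumed here purely because the IDs shown are 32 hex characters long, and the field separator is my own choice:

```python
# Sketch of deterministic comment-ID generation: same inputs on any server
# yield the same ID, so duplicate submissions cannot diverge.
import hashlib

def comment_id(post_id, seed, user, text):
    # seed is the post's last comment ID (the "seed" form input).
    # NUL separators avoid ambiguity when fields are concatenated.
    material = "\0".join([post_id, seed, user, text])
    return hashlib.md5(material.encode()).hexdigest()
```

Because the ID depends only on the submission's content, two servers receiving the same retried POST independently compute identical filenames (up to the timestamp prefix).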


Download hablog here.

Warning: proof-of-concept code only. hablog is probably not for you. Storing one comment per file and using rsync to synchronize comments only scales up to a point, maybe up to 100k files. My blog has only on the order of 1000 files as of today (80 posts + ~1000 comments).


hablog gives me many advantages for a high-traffic few-comments site.

Lightweight high performance static site. All 3 servers combined can handle up to ~2500 page hits/sec of my largest text-only posts (50 kB), or 350 Mbit/s of traffic according to my benchmarks. The bottleneck is not CPU or IOPS, but the network bandwidth available to my servers. This level of performance is definitely much more than I need considering that my heaviest slashdotting—when I published this—was 40 page hits/sec sustained for a few hours. At the time the PivotX instance could not keep up with the traffic because PHP handling was too CPU-intensive, so I am relieved to move to a static site that can handle 100× more page hits/sec :)

Highly available redundant architecture with no single point of failure. It would take 3 different outages at 3 hosters on 3 continents at the same time to take down the site. In fact even if the servers are available only 98% of the time—7 days of downtime per year!—hablog is expected to still provide five nines availability (as long as downtime amongst the 3 servers is random and uncorrelated): 1 - (1 - 0.98)³ = 99.9992%
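
The availability arithmetic checks out: with three independent servers the site is only down when all three are down simultaneously.

```python
# Verifying the availability figure from the post: 3 independent servers,
# each up 98% of the time, site down only when all 3 are down at once.
per_server_up = 0.98
all_three_down = (1 - per_server_up) ** 3   # 0.02**3 = 8e-6
availability = 1 - all_three_down
print(f"{availability:.6%}")  # -> 99.999200%
```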

CLI tools and revision control. My posts are text files, edited locally with my favorite editor and placed under revision control. I can run custom CLI scripts on my servers to bulk delete the occasional spam comments. I prefer doing it this way rather than using a constraining point-and-click web UI.


ʞuɐɹɟ wrote: I like the new design! 16 Jul 2016 16:52 UTC

nanch wrote: I like the comments on the left. Wasn't using that space anyways. 16 Jul 2016 17:38 UTC

mrb wrote: Thanks, this is one of my favorite ideas for the design! That and the orange line. 16 Jul 2016 17:49 UTC

1 wrote: 1 16 Jul 2016 18:42 UTC

hey wrote: thanks, neat stuff 16 Jul 2016 18:48 UTC

tester wrote: blah blah 16 Jul 2016 19:01 UTC

John wrote: Very cool! Quick question - How do you take a server out of rotation if one goes down? For example, If I hit a server in the round robin that isn't online, I can force-refresh the page to initiate a new lookup but in that case every 1 in every 3 requests to the site would still fail (assuming only one server is down) - is that correct? Otherwise you mentioned the browser does this automatically - if the browser can see that the domain has multiple A records and the initial connection fails on one of the IPs, it automatically tries to establish a connection on the next IP in the round robin if the first connection times out? 16 Jul 2016 19:05 UTC

mrb wrote: John: I don't have to do anything to take a server out of rotation. Browsers automatically try all IPs, and then stick with the first IP they find that works. Even if some servers are down, your HTTP requests continue to all work. For a really extensive outage (2+ days), I would probably bother to manually update the DNS records. 16 Jul 2016 19:26 UTC

Andrew wrote: Incredibly fast loading time! How are you optimising your assets and code?
You should write a blog on that if you haven't.
16 Jul 2016 20:10 UTC

mreigh wrote: Having multiple A records pointing at different servers is the simplest HA .
Syncying is a different game
16 Jul 2016 20:18 UTC

The Pistachio wrote: Nice work! On a very wide screen there is still a lot of empty space left and right. Would it be possible to stretch the content to use the whole screen? 16 Jul 2016 20:38 UTC

Nomadic wrote: Nicely done static-dynamic hybrid!

But do I understand correctly that when two people comment at the same time, they get the same "next-comment-id" in the post form?
16 Jul 2016 21:03 UTC

Nomadic wrote: Sorry, read the paragraph again and answered my own question. 16 Jul 2016 21:05 UTC

tejkeljsk wrote: testtest 16 Jul 2016 21:06 UTC

user wrote: Style reminds me of Hacker News - colors and unfinished parts, like side spaces. Very thought-provoking article!:) 16 Jul 2016 21:06 UTC

mrb wrote: I will improve that. Thanks!

[Moderator edit: I am glad someone tried to spoof my username :) see explanation of how I authenticate myself to hablog in the 16 Jul 2016 23:11 UTC comment.]
16 Jul 2016 21:07 UTC

Test wrote: Text 16 Jul 2016 21:52 UTC

trishume wrote: May I suggest moving the comments column to the right, perhaps making it a de-emphasized background colour, and putting the post title above the post, or some combination of those. I found the design very disorienting when I first arrived. I didn't know if the text on the right was the post I came here for and what the title on the left referred to. It didn't take that long but it was markedly confusing-feeling.

Also testing presence of *mark* [down](, <b>HTML</b> <script>console.log("XSS");</script>
16 Jul 2016 22:47 UTC

mrb wrote: Andrew: There are no secrets. This page loads quickly because it is only 16 kB of gzipped HTML & CSS (plus ~320 kB for the 2 screenshots but they load asynchronously).

The Pistachio: lines longer than 80-90 chars decrease readability because it is harder to track the line when going back to the beginning of the next one (the vast majority of high-profile sites stick to this rule). It would be a bad idea to make them longer, so I have to keep the margins as is.

trishume: I value this "first-time user" experience, thanks! I will consider changing. Also there is intentionally zero support for markdown or HTML markup. Everything is escaped.

I see someone wrote a comment as "mrb". The server did not recognize this as me because of a special way I authenticate myself, which I did not bother to explain. See <secret-passwd> and <secret-user> in hablog.fcgi. Essentially I use the username field as a password and if valid the username is replaced with <secret-user> ("mrb"), the post is saved with the JSON attribute "trusted:1", and the cmt-trusted CSS class is applied to it which styles it appropriately.
16 Jul 2016 23:11 UTC

MrBeny wrote: The warning should be on top, not at the end.
"Warning: proof-of-concept code only. hablog is probably not for you. "
17 Jul 2016 00:29 UTC

butthole bandit wrote: - 17 Jul 2016 05:33 UTC

nanch wrote: ^ 17 Jul 2016 05:34 UTC

josephg wrote: Very cool set of concepts.

One piece of design critique - When I look away from your blog, then back to it its not clear to my eyes which part is the post and which part is comments. I have to parse the layout to figure that out each time. This is especially true at low horizontal resolutions where the comments column is about the same width as the post itself.

It might work better to have a wider gap between the two, and make the background of the comments section a different color, or pick a different font / font style for the comments, or some other form of visual differentiation.
17 Jul 2016 06:06 UTC

jarlbork wrote: Love the indented admin comments - très sysop
And the EOF below is just too cute ❀
17 Jul 2016 16:01 UTC

mrb wrote: trishume, josephg, and all others: thanks, it is clear from all the feedback that I needed to de-emphasize the comments. So I gave them a darker background, smaller font, and reduced the comments column from 500 to 450px. 17 Jul 2016 17:38 UTC

rupy wrote: I have done the same:

It uses distributed JSON files, the one feature I got you miss is tree structured comments! ;)

Also my distribution is completely async = zero IO wait.

The foundation is open source:
17 Jul 2016 20:16 UTC

wrote: I though multiple A record was not a real option for HA..

Could you tell me more on that?
(And maybe email me the response also?)

Thanks for your blog post!
18 Jul 2016 12:54 UTC

mrb wrote: Yes hablog implements the most basic HA. I did warn this was "probably not for you" ;)

This serverfault page is slightly inaccurate so let me clarify the failure modes... Chrome on Linux relies on the TCP/IP stack behavior defined by the sysctl setting net.ipv4.tcp_syn_retries. With the default value 6 the SYN packet will be retried 6 times (sent 7 times total). The kernel will wait for a corresponding reply for 1+2+4+8+16+32+64 = 127 sec. So if the first IP is unresponsive, it takes up to 2 min 7 sec for Chrome to try the next one, which is not a great user experience. This is why in addition to multiple A records, more sophisticated HA implementations need to use load balancers and/or anycast and/or DNS to redirect clients away from failed servers in less than 2 min.

I secretly hope that browsers get smarter about this. IMHO they should attempt to open multiple connections to multiple IP addresses, and use only the connection from the first IP that responds with a SYN+ACK.

Note that other failure scenarios are handled much better by browsers. For example if the first IP responds but refuses connections (TCP RST), Chrome will immediately try the next one, without the end-user noticing any slowness.
18 Jul 2016 17:00 UTC

wrote: (when you send your pass over http, be careful)

Have you heard about indieweb? You might be interested :) (I could comment from my blog with webmention! and you could use

Thanks for your reply about HA and A records.

load-balancer need to use this kind of mechanism.
DNS is not really reliable, a local dns cache can keep the response for 48h..

So yes, we are left with anycast as the only solution of having real HA.
And the "cheapHA" with A records.

Why nobody worked on it before? Do you think there a kind of lobby from BGP that push toward that, we should only have anycast as real HA?

Where do you think we should lobby for cheapHA becoming realHA?
- HTTP/SPDY protocol?
- each browser (and everything that speak http?)

I think we should lobby at SPDY!
19 Jul 2016 10:57 UTC

wrote: Ok, I opened an issue here:
19 Jul 2016 11:15 UTC

mrb wrote: It is a myth that DNS recursive resolvers disregard TTLs and "cache up to 48h". In reality the vast majority of them honor TTLs. I can't find the link at the moment but there was a large-scale study on this by Amazon EC2 engineers a few years ago. They noticed that 99%+ of Web browsers in various parts of the world connect to the new address pretty much when the TTL expires. So DNS failover is a real option and should not be dismissed.

Nobody makes a browser that attempts to connect to multiple IP addresses in parallel simply because no one cares enough :) I am going to reach out to Chrome and the httpbis mailing list to get some feedback on this idea. If the idea is popular and gains traction, other browsers will eventually implement it.
20 Jul 2016 20:03 UTC

mrb wrote: I posted to net-dev:!topic/net-dev/h96ywfjELMc 20 Jul 2016 20:38 UTC

tlogic wrote: Very nice design Marc! I really the comments on the side.

re: authentication. I didn't look at the code but I wonder what will happen if you accidentally mistype the password in the username field? Is it going to post part of your password publicly? :)
22 Jul 2016 19:37 UTC

mrb wrote: Yep it would post part of it publicly with my clunky hack. However Chrome autocompletes it so accidents *should* not happen... ☺ I initially planned to replace this passwd with TOTP and I will eventually do it. 23 Jul 2016 03:24 UTC

wrote: SRV records could be interesting also: 30 Jul 2016 11:18 UTC