The Advisory Boar

By Abhijit Menon-Sen

Avoiding asynchronous callback hell in Archiveopteryx


I've read many mentions of “callback hell” recently, especially in discussions about Javascript programming. This is the problem of deeply nested code that arises when managing a sequence of asynchronous actions. The suggested remedies range from splitting the code into many small named functions, to async, to promises and futures (and probably other things besides; I haven't tried to keep up).

FutureJS, for example, is described as helping to “tame your wild, async code”.

I have no opinion about any of these solutions. I don't work with any complex Javascript codebases, and the asynchronous actions in my Mojolicious applications have been easy to isolate so far. But I do have opinions about writing asynchronous code, and this post is about why I've never had to treat it as something that needed “taming”.

Here's some asynchronous code picked more or less at random from Archiveopteryx: the handler for the IMAP CREATE command (lightly edited to remove uninteresting code, such as the code that generates more detailed error messages).

void Create::execute()
{
    if ( state() != Executing )
        return;

    if ( !d->parent ) {
        d->parent = Mailbox::closestParent( d->name );
        if ( !d->parent ) {
            error( No, "…syntax error…" );
            return;
        }

        requireRight( d->parent,
                      Permissions::CreateMailboxes );
    }

    if ( !permitted() )
        return;

    if ( !transaction() ) {
        d->m = Mailbox::obtain( d->name, true );
        setTransaction( new Transaction( this ) );
        if ( !d->m ) {
            error( No, "…invalid name…" );
            return;
        }
        else if ( d->m->create( transaction(),
                                imap()->user() ) == 0 ) {
            error( No, "…already exists…" );
            return;
        }
        Mailbox::refreshMailboxes( transaction() );
        transaction()->commit();
    }

    if ( !transaction()->done() )
        return;

    if ( transaction()->failed() ) {
        error( No, "…database error…" );
        return;
    }

    finish();
}


This code is straight-line enough that I only had to break two long lines for it to fit in a 37em-wide web page, but its operation is entirely asynchronous.

The Create class inherits from the EventHandler class. Any object of this class can have its execute() method called at any time—the code must be able to figure out what remains to be done and continue its work. Asynchronous processes receive a copy of the caller's this pointer, perform some operations (e.g. database queries), and call execute() on the invoking object. A few carefully-written helper functions make the control flow obvious.
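
As a rough sketch (with hypothetical names, much simpler than the real Archiveopteryx classes), the machinery looks something like this:

class EventHandler {
public:
    virtual ~EventHandler() {}
    // Called whenever something this object is waiting on makes
    // progress; the object decides what remains to be done.
    virtual void execute() = 0;
};

// Stands in for an asynchronous operation such as a database query.
// It remembers its owner, and re-enters the owner's state machine
// when the result arrives.
class Query {
public:
    Query( EventHandler * owner )
        : owner_( owner ), done_( false ) {}

    bool done() const { return done_; }

    // In the real server, the event loop calls this when the
    // database replies.
    void notify() {
        done_ = true;
        owner_->execute();
    }

private:
    EventHandler * owner_;
    bool done_;
};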

In the example above, the method is first called by the IMAP handler. It calls requireRight() on the parent mailbox, and expects to be called back when permitted() is true. (If the permissions are not sufficient, requireRight() issues a suitable error.)
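
Here's a hypothetical sketch of how such a helper can be built on the machinery above (the real requireRight() issues database queries and produces a suitable error itself):

// Simplified sketch of the permissions helper; not the real
// Archiveopteryx implementation.
class Command : public EventHandler {
public:
    Command() : check_( 0 ) {}

protected:
    // Start an asynchronous rights check the first time through;
    // execute() is called again when the answer arrives.
    void requireRight() {
        if ( !check_ )
            check_ = new Query( this );
    }

    // True once the check has completed (successfully, in this
    // simplified sketch).
    bool permitted() const {
        return check_ && check_->done();
    }

private:
    Query * check_;
};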

Then the code sets up a transaction to create the requested mailbox and commits it via commit(); it expects to be called back when the transaction is done(), and it either issues an error if the transaction failed() or completes successfully. Once setTransaction() is called, transaction() returns a pointer to the current transaction, so that branch will never be taken again on subsequent invocations of execute().

This pattern of checking each prerequisite one by one and updating state along the way scales cleanly to more complex commands, including those that issue database queries, wait for the results, and then issue more queries, all within a transaction.
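
Here's a contrived two-step command built on the sketch above, to show how the guards in execute() either start the next step or return to wait for a callback:

#include <iostream>

// Each call to execute() checks how far the work has progressed,
// starts the next step if possible, and otherwise returns to wait
// to be called back.
class TwoStepCommand : public EventHandler {
public:
    TwoStepCommand() : first_( 0 ), second_( 0 ) {}

    void execute() {
        if ( !first_ )
            first_ = new Query( this );
        if ( !first_->done() )
            return;                // wait for the first callback

        if ( !second_ )
            second_ = new Query( this );
        if ( !second_->done() )
            return;                // wait for the second callback

        std::cout << "both steps done\n";
    }

    // Leaked for brevity in this sketch.
    Query * first_;
    Query * second_;
};

int main()
{
    TwoStepCommand c;
    c.execute();            // starts the first query and returns
    c.first_->notify();     // first result arrives: starts the second
    c.second_->notify();    // second result arrives: command completes
}

In a real server the notify() calls would come from the event loop rather than from main(), but the control flow is the same.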

Notice that this code doesn't look dramatically different from blocking code; it's easy to read, and its prerequisites and sequence of operations are clear. We depended on this property while reviewing code.

Arnt and I were using this technique extensively in 2003. I'm not making any claims about its wider applicability, but it certainly prevented Archiveopteryx from ever descending into callback hell.

Git: post-receive hook for XMPP notifications


I wrote long ago about the trouble I had with Net::XMPP while setting up a notification hook for Archiveopteryx, but I didn't think anyone would find the script itself particularly interesting. People have asked me about it since, though, so here it is.

Read more…

Managing release branches: git merge vs. p4 integrate


When the Archiveopteryx source code lived in Perforce, we would submit everything to the src/main branch, review and test, then use p4 integrate to merge selected changes into release branches like src/rel/2.0. The only changes we submitted directly to the latter branches were release-specific, like setting the version number in Jamsettings. We could safely re-run p4 integrate at any time, and it would show us only those changes that we had not already reviewed.

When we moved to git, we continued to work this way—development happened in master, and we would use git cherry-pick to integrate or backport selected changes into older release branches. New release branches were created by branching from the current master, and maintained the same way. We did this for almost two years and several releases, but it was not much fun.

There was no easy way to answer the question “Which commits do I need to consider for inclusion?” for any given release branch. In theory, git log --cherry-pick will tell you, but in practice it doesn't work very well. We used to do monthly releases, but dealing with the build-up of commits at release time was so painful that we were forced to backport changes in smaller batches throughout the month (which was not, in itself, a bad thing).

Read more…

#ifdef considered harmful


Speaking of portability, here's a link to Henry Spencer and Geoff Collyer's classic 1992 USENIX paper #ifdef Considered Harmful, or Portability Experience With C News.

We believe that a C programmer's impulse to use #ifdef in an attempt at portability is usually a mistake. Portability is generally the result of advance planning rather than trench warfare involving #ifdef.

It's been eighteen years since its publication, but not enough people have read that paper yet.

Mirroring a git repository


In a recent conversation on the Archiveopteryx mailing list, someone suggested that we move the code to Github, because it feels very far away right now (it's hosted in a git repository on our own server). Some people agreed, saying that a project hosted on Github (or SourceForge, or somewhere similar) would get us more visibility, while others strongly preferred the status quo.

Although we decided against moving (and will instead focus on other ways to gain transparency and visibility as the project moves away from being company-driven), it was clear that having a mirror of our repository on Github (or elsewhere) couldn't hurt; and that's what this post is about.

The ten-second summary

The Archiveopteryx source code lives in a repository on our own server, and the developers push commits to it. We set up Github and Gitorious as remote repositories on that server, and added a(nother) post-receive hook to push any new commits to those two repositories. Voilà! Zero-effort mirrors of our code.

Read more…

Portability and optimism


We've always taken a very conservative approach to portability with Archiveopteryx. We run it ourselves under Linux and FreeBSD on x86 and x86_64 systems, and those are the only platforms we've ever claimed to support. Between the developers and our major users, we have access to enough systems of that description that we can be reasonably confident that the software will work without unpleasant surprises.

Beyond that, we've never been very ambitious. Once in a while, someone would ask “Does it work on NetBSD?” (or Solaris, or OS X). The answer was usually yes, perhaps after a few relatively minor tweaks. We would add #ifdefs and other portability patches to our code only after someone suffered from an actual problem, never before. If we couldn't (or didn't, regularly) test it, it wasn't supported.

This gloomy approach is in sharp contrast to the confidence encouraged by autoconf. Had we used it, we could have saved a few users the trouble of dealing with compile errors. We would still have been testing our software only on the few systems we already did (which happen to be among the commonest platforms in use), but autoconf would have helped us to feel more portable. I don't want to think about what that would have done to the number of unreproducible bugs we had to analyse; they were enough trouble as it was.

I was recently reminded of our curmudgeonly attitude in a conversation with Hassath, who did a project on FLOSS video editing software. One of her complaints was that it was hard to find out not just what the hardware and software dependencies of a given program were, but also which configurations were known to work. The documentation usually implied that any Linux system would work fine, but in practice, people running a different distribution routinely encountered problems that the developers had never seen and had no idea how to address. This was the case even on her quite unremarkable Ubuntu/amd64 test system, and she found it extremely frustrating.

Try as I might, I can't see optimism as a good approach to portability.