The Advisory Boar

By Abhijit Menon-Sen <>

Improvements to ansible-vault in Ansible 2


ansible-vault is used to encrypt variable definitions, keys, and other sensitive data so that they can be securely accessed from a playbook. Ansible 2 (not yet released) has some useful security improvements to the ansible-vault command-line interface.

Don't write plaintext to disk

Earlier, there was no way to use ansible-vault without writing sensitive plaintext to disk (either by design, or as an editor byproduct). Now one can use “ansible-vault encrypt” and “ansible-vault decrypt” as filters to read plaintext from stdin or write it to stdout using the new --output option.

# Interactive use: stdin → x (like gpg)
$ ansible-vault encrypt --output x

# Non-interactive use, for scripting
$ pwgen -1|ansible-vault encrypt --output newpass

# Decrypt to stdout
$ ansible-vault decrypt vpnc.conf --output -|vpnc -

These changes retain backwards compatibility with earlier invocations of ansible-vault and make it possible to securely automate the creation and use of vault data. In every case, the input or output file can be set to “-” to use stdin or stdout.

A related change: “ansible-vault view” now feeds plaintext to the pager directly on stdin and never writes plaintext to disk. (But “ansible-vault edit” still writes plaintext to disk.)

Automated rekeying

ansible-vault accepts a --vault-password-file option so that the interactive password prompt and confirmation can be skipped.

With Ansible 2, “ansible-vault rekey” accepts a --new-vault-password-file option that behaves the same way, so it's possible to rekey an already-encrypted vault file automatically, if you pass in a script that writes a new vault password to its stdout. (This operation also doesn't leak plaintext to disk.)
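Given such a script, an automated rekey might look like the following sketch (the file names here are placeholders of my own; the two options are real):

```shell
# Placeholder file names; assumes Ansible 2's ansible-vault is on PATH.
# new-vault-pass.sh is any executable that writes the new password to stdout.
ansible-vault rekey \
    --vault-password-file old-vault-pass.txt \
    --new-vault-password-file new-vault-pass.sh \
    group_vars/all/secrets.yml
```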

An incidental bugfix also makes it possible to pass multiple filenames to ansible-vault subcommands, i.e., it's now possible to encrypt, decrypt, or rekey more than one file at once. (This behaviour was documented, but didn't work before.)

(Unfortunately, many more important vault changes didn't make it to this release.)

jQuery UI autocompletion and Javascript hijacking


I was pleased to discover that the jQuery UI library includes an autocomplete widget that looks good and works well, and I started using it straightaway to liven up my search boxes. The simplest way to use it is to return an array of JSON objects matching a query from /some/url, and feed the client the following bit of Javascript:

$('#query').autocomplete({source: '/some/url'});

All was well, until I came across this paper on Javascript Hijacking. A quick summary: an attacker includes a <script> tag on their web site pointing to your JSON URL, and writes some Javascript to steal the data (by redefining the array constructor, or overriding the property setter for objects). Because of the very specific circumstances of this attack, it is often misunderstood or dismissed as "just another XSS attack", but it can lead to the leakage of confidential data. The attack was originally demonstrated by Jeremiah Grossman, who used it to steal contacts from someone's Gmail account.

The paper outlines a few possible solutions: treat it like any other cross-site request forgery by requiring the request to include a random, unguessable token (which means using POST or including the token in the URL), or prefix the response with "while(1);" or some other construct that causes the browser to fail to execute the response as a script (e.g. wrapping it inside a comment).

Tony Cook pointed out to me that the attack hinges on the fact that an array literal is a valid statement in Javascript, but an object literal by itself is not. This suggests that returning the matches as an object rather than array will defeat the attack:

{"matches": [{"label": "one", "value": 1}, ...]}

Although I can't be certain that no Javascript implementation will treat this as a valid statement, I like this approach better than using POST to fetch a completion list or extending CSRF protections to GET requests. Fortunately, jQuery UI's autocomplete widget is more than flexible enough to accommodate an array of matches wrapped in an object:

    source: function (request, response_cb) {
        $.getJSON('/some/url', {term: request.term},
            function (data) {
                response_cb(data.matches);
            });
    }
Unfortunately, this is already much more code than people should have to write to handle this case, and it doesn't even handle errors (if any do occur and are not properly handled, the autocomplete widget will behave strangely). I think jQuery UI should handle this case by default.

This code can easily be adapted to cope with a "while(1);" prefix—treat the response as a string and remove the prefix, then eval the remainder and pass the resulting object to the response callback. If someone can convince me that an object literal is insecure—something of which I am quite willing to be convinced—then I'll change my code to do that. (If this seems a too-casual attitude, it's because none of my autocompletion data are confidential. Defeating casual attacks is fine for the moment.)
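The prefix-stripping adaptation might look something like this sketch (the function and variable names are my own, and I've used JSON.parse rather than a bare eval, since it rejects anything that isn't plain JSON):

```javascript
// Sketch only: strip a "while(1);" guard prefix from the response body
// before parsing it. parseGuardedJSON and raw are illustrative names.
function parseGuardedJSON(text) {
    var prefix = 'while(1);';
    if (text.indexOf(prefix) === 0) {
        text = text.slice(prefix.length);
    }
    // JSON.parse is safer than eval: it never executes code.
    return JSON.parse(text);
}

var raw = 'while(1);{"matches": [{"label": "one", "value": 1}]}';
var data = parseGuardedJSON(raw);
// data.matches can now be passed to the autocomplete response callback.
```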

Mojolicious session cookies


Mojolicious comes with safe, easy-to-use session cookies out of the box. You just write…

$self->session(key => "some value");

…and $self->session('key') will retrieve the value in subsequent requests by the same client.

Session data are stored in a hash, which may contain anything subject to a maximum size of 4KB. The hash is serialised and signed with a message authentication code to form a tamper-proof session cookie which expires after an hour by default. When the cookie is presented by a client, the server can verify the signature without any stored state. Cookies that fail signature verification are discarded before they ever reach the application code.

Read more…

Mixing HTTP and HTTPS access to an application


I'm working on a web application that is running behind a reverse proxy. Most people will use HTTP to access it, but anyone who wants to log in must use HTTPS until they log out, to avoid leaking plaintext passwords and session cookies. This is a brief note about the configuration. I'm using Mojolicious and Apache 2.2's mod_proxy, but the implications of providing mixed HTTP/HTTPS access through a reverse proxy are relevant to other implementations.

Read more…

Cryptographically secure randomness in Perl


I can usually sidestep the need for cryptographically secure randomness, but I decided to investigate the available options to generate CSRF protection tokens.

On Linux, /dev/random is a good, cryptographically secure random number generator, but the entropy available without special hardware support is severely limited. Reads from /dev/random block for however long it takes (even minutes) to gather enough entropy to satisfy the request. Entropy is in high demand throughout the system, so this is not a resource that can be indiscriminately drawn upon.

One standard solution to this problem is to use the output of a randomly keyed block cipher running in counter (CTR) mode. This provides only as much entropy as the random key, but if /dev/random is used to generate the key, the result is suitable for many purposes. Andrew Main's Data::Entropy module implements this strategy in Perl.

Read more…