The Advisory Boar

By Abhijit Menon-Sen <>

Improvements to ansible-vault in Ansible 2

ansible-vault is used to encrypt variable definitions, keys, and other sensitive data so that they can be securely accessed from a playbook. Ansible 2 (not yet released) has some useful security improvements to the ansible-vault command-line interface.

Don't write plaintext to disk

Earlier, there was no way to use ansible-vault without writing sensitive plaintext to disk (either by design, or as an editor byproduct). Now one can use “ansible-vault encrypt” and “ansible-vault decrypt” as filters to read plaintext from stdin or write it to stdout using the new --output option.

# Interactive use: stdin → x (like gpg)
$ ansible-vault encrypt --output x

# Non-interactive use, for scripting
$ pwgen -1|ansible-vault encrypt --output newpass

# Decrypt to stdout
$ ansible-vault decrypt vpnc.conf --output -|vpnc -

These changes retain backwards compatibility with earlier invocations of ansible-vault and make it possible to securely automate the creation and use of vault data. In every case, the input or output file can be set to “-” to use stdin or stdout.

A related change: “ansible-vault view” now feeds plaintext to the pager directly on stdin and never writes plaintext to disk. (But “ansible-vault edit” still writes plaintext to disk.)

Automated rekeying

ansible-vault already accepts a --vault-password-file option, which avoids the interactive password prompt and confirmation.

With Ansible 2, “ansible-vault rekey” accepts a --new-vault-password-file option that behaves the same way, so it's possible to rekey an already-encrypted vault file automatically, if you pass in a script that writes a new vault password to its stdout. (This operation also doesn't leak plaintext to disk.)
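The new-password script can be as simple as the sketch below (the file name and the use of openssl are illustrative; any program that prints the new password to stdout will do):

```shell
#!/bin/sh
# new-vault-pass.sh: print a fresh random password on stdout.
# ansible-vault runs this file and reads the password it prints.
openssl rand -base64 24
```

Then something like “ansible-vault rekey secrets.yml --vault-password-file old-pass.txt --new-vault-password-file new-vault-pass.sh” rekeys the file with no prompts at all.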

An incidental bugfix also makes it possible to pass multiple filenames to ansible-vault subcommands (i.e., it's now possible to encrypt, decrypt, and rekey more than one file at a time; this behaviour was documented, but didn't work).

(Unfortunately, many more important vault changes didn't make it to this release.)

Use the ‘combine’ filter to merge hashes in Ansible 2

One of the most often-requested features in Ansible was a way to merge hashes. This has been discussed many times on the mailing lists, on IRC, and on Stack Overflow, implemented in at least five different pull requests submitted to Ansible, and in who knows how many private filter plugins.

Ansible 2 (currently in β2) finally includes a way to do this: the ‘combine’ filter. The filter documentation has examples of its use, but here's the basic idea:

{'a':1, 'b':2}|combine({'b':3})
    → {'a':1, 'b':3}
{'a':{'x':1}}|combine({'a':{'y':2}}, recursive=True)
    → {'a':{'x':1, 'y':2}}

The “hash_behaviour=merge” configuration setting offers similar (recursive-only) functionality, but it's a global setting, and not convenient to use.
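For contrast, the global setting lives in ansible.cfg and applies to every hash in every play, all or nothing (shown here only for comparison; the filter needs no configuration):

```ini
# ansible.cfg: merges hashes everywhere, whether you want it or not
[defaults]
hash_behaviour = merge
```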

The new combine filter makes it possible to build up hashes using set_fact. Note the use of default({}) to cover the first iteration, when x is not yet defined.

# x → {'a': 111, 'b': 222, 'c': 333}
- set_fact:
    x: "{{ x|default({})|combine({item.0: item.1}) }}"
  with_together:
    - ['a', 'b', 'c']
    - [111, 222, 333]

Thanks to the union filter, you can do the same with lists. Combining these techniques makes it possible to build up complex data structures dynamically.

# y → [{'a':111}, {'b':222}, {'c':333}]
- set_fact:
    y: "{{ y|default([])|union([{item.0: item.1}]) }}"
  with_together:
    - ['a', 'b', 'c']
    - [111, 222, 333]

Renault Duster AWD 2014: one-year review

Hassath and I bought a Renault Duster AWD a year ago. It's a capable car that serves our needs well under demanding conditions in Uttarakhand. Here's our detailed review.

Read more…

Birds named after women

I've read many pieces about the people after whom birds are named, and it struck me recently that most of them are male. Not surprising, since there must have been many more male ornithologists than female ones; but there are nevertheless many birds named after women. Because of the regularity of Latin grammar, we can find a considerable number just by looking for names that end in -ae.

Read more…

Academic Journals Online is a scam

Academic Journals Online is a predatory publisher with a fraudulent list of editorial board members.

Read more…

The rituals of automobile worship

Do I really need to decarbonise my engine?

Read more…

Translations or linkbait?

Be wary of offers to translate your pages into other languages—they're often (but not always) low-effort attempts to accumulate inbound links.

Read more…

Managing schema changes

Yesterday, I happened upon this video from PGCon 2012 of David Wheeler talking about his schema management tool named Sqitch, and thence also discovered depesz's Versioning scripts.

I have wrestled with schema migrations for many years, so I found David's presentation very interesting. Sqitch (without a "u") has many compelling features. For example, you can "sqitch deploy --untracked" to test a change you haven't committed, then revert to the last committed revision before you edit or commit the change. depesz's scripts are less magical, but offer similar capabilities.

In particular, one thing is common to both systems: the schema exists in the repository as a number of interdependent changes, each of which must have a name (whether the names are artificial or assigned by the user is immaterial; some kind of tag is required for dependency resolution). To create the whole schema, you have to assemble the pieces in order, and to see the whole schema, you have to look at the database. The database is the canonical representation of the data model.

I prefer to think of my schema as a part of my source code, so I keep a complete version in a text file (or files), presented in the order that I want to explain it, with comments in the right places, rather than leaving that responsibility to "pg_dump -s".

What difference does it make?

  • ✓ Anyone can look at the source code for the schema and understand it in the "preferred form for modifications".
  • ✗ Making changes means writing an upgrade snippet, a downgrade snippet, and changing the main file.
  • ✓ Creating a new instance of the database always means feeding a small number of files to psql. No need to build a big schema up step by step.
  • ✗ Testing changes becomes harder: the upgrade/downgrade scripts are tested immediately, but in practice the main file is tested only on the next from-scratch deployment.
  • ✓ There is no need for dependency management or complex ordering between changes. Deploying needs no cleverness, only psql. Maybe a little shell script.
  • ✗ The natural way to represent a series of changes is with the numbered files that David so hates, and every number is an incipient merge conflict.

How these points stack up against each other depends on the situation. For example, a single web service may care less about deploying from scratch than an installable package. If the schema changes frequently, the testing overhead may outweigh other considerations. A project with one or two developers may not have to worry about numbering conflicts, and so on. Being able to read through the schema ranks highly for me.

I'll write about our approach to schema management in Archiveopteryx later.

The end of the Exide saga

In August 2010, I filed a case at my local District Consumer Forum in Delhi over Exide's refusal to replace a faulty battery under warranty. In November 2012, after nearly two and a half years of filings, hearings, and adjournments, the Consumer Forum issued a judgement in my favour, asking Exide to replace the faulty battery and pay compensation.

Today, Exide sent me three new batteries and a cheque.

Consumer Court: Justice!

I called the District Consumer Forum this morning to enquire about the judgement I was told to expect in a week at my last hearing in October. I was told the judgement was ready, and that I should collect it. I did so forthwith.

Here's the interesting part (the last paragraph):

The complainant of this case has been subjected to harassment, he need[s] to be compensated. We allow this complaint and direct the OP to replace the battery in question within a period of 30 days with a fresh warranty of one year. We further award a compensation of Rs.6000/- to the complainant which will also include the cost of litigation.

The judgement was entered on 2012-11-05, and the office keeps one copy for each party to the case for a month; if it is not collected within that period, it is dispatched by registered post. I expect Exide will receive its copy in the second week of December.

The wheels of justice turn slowly, but they grind moderately fine.