The Advisory Boar

By Abhijit Menon-Sen <>

Handling SSH host key prompts in Ansible


I've written about the various SSH improvements in Ansible 2, including a rewrite of the connection plugin. Unfortunately, the problem that originally motivated the rewrite currently remains unsolved.

Competing prompts

If you ssh to a host for which your known_hosts file has no entry, you are shown the host's key fingerprint and are prompted with Are you sure you want to continue connecting (yes/no)?. If you run ansible against multiple unknown hosts, however, the host key prompts will just stack up:

The authenticity of host 'magpie (a.b.c.d)' can't be established.
ECDSA key fingerprint is 2a:5a:4c:4b:e0:40:de:8b:9b:e6:0f:90:45:68:89:fc.
Are you sure you want to continue connecting (yes/no)? The authenticity of host 'hobby (e.f.g.h)' can't be established.
RSA key fingerprint is 61:84:90:47:f7:0f:7b:a2:d5:09:98:6f:bb:3c:50:d9.
Are you sure you want to continue connecting (yes/no)? The authenticity of host 'raven (i.j.k.l)' can't be established.
RSA key fingerprint is ab:97:c2:7d:b6:8e:c3:ab:78:a2:20:04:af:9c:6f:2b.
Are you sure you want to continue connecting (yes/no)?

The processes compete for input, so typing “yes” may or may not work:

Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': 

Worse still, if some of the targeted hosts are known, output from their tasks may cause the prompts to scroll off the screen, and ansible will appear to hang.

Inter-ssh locking

The solution is to acquire a lock before executing ssh and release it once the host key prompt (if any) has been answered. Ansible 2 had some code copied from 1.9 to implement this, but it was agonisingly broken: it would not always have acquired or released the lock correctly, and the actual locking was commented out anyway because of lower-level changes, so it just scanned known_hosts twice for every connection. Even if the locking had worked, the lock would have been held until ssh exited.

I submitted patches (12195, 12212, 12236, 12276) to add a connection locking infrastructure and use it to hold a lock only until ssh had verified the host key (not until it finished). Although most of the changes were merged, the actual ssh locking was rejected because it would (unavoidably) wait for ssh to time out while trying to connect to unreachable hosts.

One of the maintainers recently said they may reconsider this (because it's painful to deal with any number of newly provisioned hosts otherwise), so I have opened a new PR, but it has not yet been merged.

Update: The maintainers went with a different approach to solve the problem. Instead of locking inside the connection plugin, the new code checks the host key as a separate step at the strategy level, at the expense of having to parse the known_hosts file to decide whether a host's key is already known. I think that's a fragile solution, but it does eliminate the locking concerns and improve upon the status quo.

Another update: The commit referenced above was reverted later the same day, for reasons the maintainers did not see fit to record in the commit message. So we're right back to the broken starting point.

Enabling SSH pipelining by default in Ansible

While writing about ansible_ssh_pipelining earlier, it occurred to me that pipelining could be made to work with requiretty, thus avoiding the need to edit /etc/sudoers, and even making it possible to use su (which always requires a tty). This would mean pipelining could be enabled by default, for a noticeable performance boost.

Here's a working implementation (see the commit message for gory details) that I've submitted as a PR for Ansible 2. Let's hope it's merged soon.

More control over SSH pipelining in Ansible 2


SSH pipelining is an Ansible feature to reduce the number of connections to a host.

For each task, Ansible will normally create a temporary directory under ~/.ansible (via ssh), copy the module source to that directory (using sftp or scp), and then execute the module (ssh again).

With pipelining enabled, Ansible will connect only once per task using ssh to execute python, and write the module source to its stdin. Even with persistent ssh connections enabled, it's a noticeable improvement to make only one ssh connection per task.
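
In rough shell terms, the per-task difference looks something like this (paths and filenames are simplified for illustration; these are not Ansible's exact commands):

# Without pipelining: three connections per task
ssh host 'mkdir -p ~/.ansible/tmp/xyz'
scp module.py host:.ansible/tmp/xyz/
ssh host 'python ~/.ansible/tmp/xyz/module.py; rm -rf ~/.ansible/tmp/xyz'

# With pipelining: one connection per task
ssh host python < module.py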

Unfortunately, pipelining is disabled by default because it is incompatible with sudo's requiretty setting (or su, which always requires a tty). This is because of a quirk of the Python interpreter, which enters interactive mode automatically when you pipe in data from a (pseudo) tty.

Update 2015-11-18: I've submitted a pull request to make pipelining work with requiretty. The rest of this post remains true, but if the PR is merged, the underlying problem will just go away.

Pipelining can be enabled globally by setting “pipelining=True” in the ssh_connection section of ansible.cfg, or by setting “ANSIBLE_SSH_PIPELINING=1” in the environment.
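
In other words, either of the following (the playbook name is illustrative):

# In ansible.cfg:
[ssh_connection]
pipelining = True

# Or in the environment:
$ ANSIBLE_SSH_PIPELINING=1 ansible-playbook site.yml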

With Ansible 2 (not yet released), you can also set ansible_ssh_pipelining in the inventory or in a playbook. You can leave it enabled in ansible.cfg, but turn it off for some hosts (where requiretty must remain enabled; see the inventory example at the end of this section), or even write a play with pipelining disabled in order to remove requiretty from /etc/sudoers.

- lineinfile:
    dest: /etc/sudoers
    line: 'Defaults requiretty'
    state: absent
  sudo: yes
  sudo_user: root
  vars:
    ansible_ssh_pipelining: no

The above lineinfile recipe is simplistic, but it shows that it's now possible to disable requiretty, even if it's by replacing /etc/sudoers altogether.

Note the use of another Ansible 2 feature above: vars can also be set for individual tasks (and blocks), not only plays.
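
And here is the inventory example mentioned earlier: to leave pipelining enabled globally but turn it off for one group, a couple of lines suffice (the group name is illustrative):

[legacy_hosts:vars]
ansible_ssh_pipelining=false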

Parallel task execution in Ansible

At work, I have a playbook that uses the Ansible ec2 module to provision a number of EC2 instances. The task in question looks something like this:

- name: Set up EC2 instances
  ec2:
    region: "{{ item.region }}"
    instance_type: "{{ item.type }}"
    …
    wait: yes
  with_items: "{{ instances }}"
  register: ec2_instances

Later tasks use instance ids and other provisioning data, so each task must wait until it's completed; but provisioning instances can take a long time—up to several minutes for spot instances—so creating a 32-node cluster this way is painfully slow. The obvious solution is to create the instances in parallel.

Ansible will, of course, dispatch tasks to multiple hosts in parallel, but in this case all the tasks must run against localhost. And although each iteration of a loop is executed separately, the iterations cannot be dispatched in parallel: multiple hosts can each be made to execute the entire loop at the same time, but there is no way to hand off one iteration to one host and the next to another.

You can get close with “delegate_to: {{item}}”, but each step of the loop will be completed before the next is executed (with Ansible 2, it's possible that a custom strategy plugin could dispatch delegated loop iterations in parallel, but the included free execution strategy doesn't work this way). The solution is to use “fire-and-forget” asynchronous tasks and wait for them to complete:

- name: Set up EC2 instances
  ec2:
    …
    wait: yes
  with_items: "{{ instances }}"
  register: ec2_instances
  async: 7200
  poll: 0

- name: Wait for instance creation to complete
  async_status: jid={{ item.ansible_job_id }}
  register: ec2_jobs
  until: ec2_jobs.finished
  retries: 300
  with_items: "{{ ec2_instances.results }}"

This will move on immediately from each iteration without waiting for the task to complete, and separately wait for the tasks to complete using async_status. The 7200 and 300 are arbitrary “longer than it could possibly take” choices. Note that we are polling the completion status one by one, so we'll start polling for the completion of iteration #2 only after #1 is complete, no matter how long either task takes. But in this case, since I have to wait for all of the tasks to complete anyway, it doesn't matter.

Strange cryptographic decisions in Ansible vault

I wrote about some useful changes to ansible-vault in Ansible 2 in an earlier post. Unfortunately, another significant change to the vault internals was rejected for Ansible 2.

Vault cryptography

The VaultAES256 class implements encryption and decryption. It uses sensible building blocks: PBKDF2 for key generation with a random salt, AES-CTR for encryption, and HMAC-SHA-256 for authentication (used in encrypt-then-MAC fashion). This is a major improvement over the earlier VaultAES class, which used homebrew key generation and an SHA-256 digest alone for “verification”.

Nevertheless, the code has some embarrassing oversights. They are not vulnerabilities, but they show that the code was written with… rather less familiarity with cryptography than one might wish:

  • Plaintext is padded to the AES block size, but this is unnecessary because AES-CTR is used as a stream cipher.
  • An extra 32-byte block of PBKDF2 output (at 10,000 iterations) is derived just to initialise the 16-byte IV, with the other half discarded; this is unnecessary because the IV can be zero (the random salt already ensures that the same key is never used to encrypt the same plaintext twice).

Finally, the ciphertext is passed through hexlify() twice, thereby inflating it to 4x the size (instead of using, say, Base64). This is the least significant and yet the most annoying problem.
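
To make this concrete, here is a minimal sketch of the same construction with those oversights removed, written against the Python “cryptography” library. This is an illustration of the building blocks, not Ansible's actual code:

import os, hmac, hashlib
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def vault_encrypt(password, plaintext):
    """Both arguments are bytes; returns salt + MAC + ciphertext."""
    salt = os.urandom(32)
    # One PBKDF2 call yields both keys; no extra block for an IV.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=64, salt=salt,
                     iterations=10000, backend=default_backend())
    key = kdf.derive(password)
    aes_key, mac_key = key[:32], key[32:]
    # AES-CTR is a stream cipher, so the plaintext needs no padding, and
    # a zero IV is safe because the random salt yields a fresh key each time.
    encryptor = Cipher(algorithms.AES(aes_key), modes.CTR(b'\0' * 16),
                       backend=default_backend()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()
    # Encrypt-then-MAC (this sketch authenticates the salt as well).
    mac = hmac.new(mac_key, salt + ciphertext, hashlib.sha256).digest()
    return salt + mac + ciphertext

A single pass of Base64 over that result would cost about 33% in size, against the 4x from hexlify() twice.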

The most visible effects of the over-enthusiastic PBKDF2 use were mitigated by a pull request to use an optimised PBKDF2 implementation. This reduced the startup time by an order of magnitude for setups that loaded many vault-encrypted files from group_vars and host_vars.

All of these problems were solved by PR #12130, which saw several rounds of changes and was slated for inclusion in Ansible 2, but was eventually rejected by the maintainers because there wasn't “anyone in-house to review it for security problems and it's late to be adding it for v2”.

Other changes that didn't make it

A couple of other often-requested Vault changes fell by the wayside en route to Ansible 2:

  • GPG support for the vault was submitted as a PR over a year ago, but the code is now outdated after an initial rebase to the v2 codebase.
  • Lookup support (with the file lookup plugin, and also with the copy module) was partly implemented but never completed and merged.

Many people left +1 comments on GitHub to indicate their support for these features. I hope someone wants them enough to work on them for v2.1, and that they have better luck getting this work merged than I did.

SSH configuration in Ansible 2

The ability to use “jump hosts” with Ansible is another often-requested feature. It has been discussed repeatedly on the mailing list and on Stack Overflow, has had a number of howto articles written about it, and has seen multiple independent implementations submitted as pull requests to Ansible.

The recommended solution was to set a ProxyCommand in ~/.ssh/config. This meant duplicating inventory data and keeping two sources of connection information in sync. It worked, but grew rapidly less manageable with a larger inventory. Similarly, the ssh_config inventory plugin was a makeshift solution at best.

This post describes the general mechanism provided in Ansible 2 (not yet released) to make SSH configuration changes—including jump hosts—without depending on any data external to Ansible.

SSH configuration

The ssh_args setting in the ssh_connection section of ansible.cfg is a global setting whose contents are prepended to every command-line for ssh/scp/sftp. This behaviour has been retained unmodified for backwards compatibility, but I don't recommend its use, because it overrides the default persistence settings.

In addition to the above, the new ansible_ssh_common_args inventory variable is appended to every command-line for ssh/scp/sftp. This can be set in the inventory (for a group or a host) or in a playbook (for a play, or block, or task). This is the place to configure any ProxyCommand you want to use.

[gatewayed_hosts:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p someuser@jumphost.example.com"'

In addition to that, the new ansible_ssh_extra_args variable is appended only to command-lines for ssh. There are analogous ansible_scp_extra_args and ansible_sftp_extra_args variables to change scp and sftp command-lines. This allows you to do truly odd things like open a reverse-tunnel to the control node with -R (which is an option only ssh accepts, not scp or sftp).
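
For example, to open a reverse tunnel from every host in a group back to the control node (the group name and port are illustrative):

[tunnelled_hosts:vars]
ansible_ssh_extra_args='-R 3128:localhost:3128'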

The --ssh-common-args command-line option is useful when debugging (there are also --ssh-extra-args, --scp-extra-args, and --sftp-extra-args). Note that any values you set on the command line will be overridden by the inventory or playbook settings described above (which seems backwards, but that's how Ansible handles other command-line options too).
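
For example, to try out a ProxyCommand against a group before committing it to the inventory (reusing the jump host from the earlier example):

$ ansible gatewayed_hosts -m ping \
    --ssh-common-args='-o ProxyCommand="ssh -W %h:%p someuser@jumphost.example.com"'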

Also note that ansible_user, ansible_host, and ansible_port are now preferred to the old ansible_ssh_* versions.

Internal changes

Once again, the modest user-visible changes are accompanied by major changes internally. The SSH connection plugin was rewritten to be more maintainable, and an entire class of “my connection just hangs” and other bugs (especially around privilege escalation) were fixed in the process.

Host names and patterns in Ansible 2

Nearly lost among the many significant changes in Ansible 2 (not yet released) are a number of related changes to how hostnames and host patterns are handled.

Host patterns

Ansible uses patterns like foo* to target managed nodes; one could match multiple patterns by separating them with colons, semicolons, or commas, e.g., foo*:bar*. The use of colons is now discouraged (and will eventually be deprecated) because of the conflict with IPv6 addresses, and the (undocumented) use of semicolons attracts a deprecation warning. Ansible 2 recommends only the comma: foo*,bar*.

This usage applies wherever a list of target hosts is expected: the hosts: line of a play, the host pattern argument to the ansible command, and the argument to ansible-playbook --limit.

The groupname[x-y] syntax is no longer supported. Use groupname[0:2] to match the first three hosts in a group. The first host is g[0], the last is g[-1], and g[1:] matches all hosts except g[0].
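
A few concrete invocations (the group name is illustrative):

# Match two wildcard patterns, separated by a comma
$ ansible 'foo*,bar*' --list-hosts

# The first three hosts in a group; all hosts but the first
$ ansible 'webservers[0:2]' --list-hosts
$ ansible 'webservers[1:]' --list-hosts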

Inventory hostnames

Ansible 2 requires inventory hostnames to be valid IPv4/IPv6 addresses or hostnames (i.e., x.example.com or x, but not x..example.com or x--). As an extension, it accepts Unicode word characters in hostname labels. Any mistakes result in specific parsing errors, not mysterious failures during execution.

Inventory hostnames may also use alphabetic or numeric ranges to define more than one host. For example, foo[1:3] defines foo1 through foo3, while foo[x:z:2] expands to fox and foz. Addresses may use numeric ranges: 192.0.2.[3:42].
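
For example, the following inventory group (names illustrative) defines foo1.example.com through foo3.example.com, plus the forty addresses from 192.0.2.3 to 192.0.2.42:

[web]
foo[1:3].example.com
192.0.2.[3:42]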

IPv6 addresses

A number of problems with the parsing of IPv6 addresses have also been fixed, and their behaviour has been made consistent across the inventory (.ini files) and in playbooks (e.g., in hosts: lines and with add_host).

All of the recommended IPv6 address notations (from spelling out all 128 bits to the various compressed forms) are supported. Addresses with port numbers must be written as [addr]:port. One can also use hexadecimal ranges to define multiple hosts in inventory files, e.g. 9876::[a:f]:2.

A couple of small but necessary bugfixes go hand-in-hand with the parsing changes, and fix problems with passing IPv6 addresses to ssh and to rsync. Taken together, these changes make it possible to use IPv6 in practice with Ansible.

Bigger on the inside

The changes described above merit only a couple of lines in the 2.0 changelog, but the implementation involved a complete rewrite of the inventory file parser and the address parser. A variety of incidental bugs were fixed along the way.

The upshot is that the code—for the first time—now imposes syntactic requirements on host names, addresses, and patterns in a systematic, documented, testable way.

Improvements to ansible-vault in Ansible 2

ansible-vault is used to encrypt variable definitions, keys, and other sensitive data so that they can be securely accessed from a playbook. Ansible 2 (not yet released) has some useful security improvements to the ansible-vault command-line interface.

Don't write plaintext to disk

Earlier, there was no way to use ansible-vault without writing sensitive plaintext to disk (either by design, or as an editor byproduct). Now one can use “ansible-vault encrypt” and “ansible-vault decrypt” as filters to read plaintext from stdin or write it to stdout using the new --output option.

# Interactive use: stdin → x (like gpg)
$ ansible-vault encrypt --output x

# Non-interactive use, for scripting
$ pwgen -1|ansible-vault encrypt --output newpass

# Decrypt to stdout
$ ansible-vault decrypt vpnc.conf --output -|vpnc -

These changes retain backwards compatibility with earlier invocations of ansible-vault and make it possible to securely automate the creation and use of vault data. In every case, the input or output file can be set to “-” to use stdin or stdout.

A related change: “ansible-vault view” now feeds plaintext to the pager directly on stdin and never writes plaintext to disk. (But “ansible-vault edit” still writes plaintext to disk.)

Automated rekeying

The vault accepts a --vault-password-file option to avoid the interactive password prompt and confirmation.

With Ansible 2, “ansible-vault rekey” accepts a --new-vault-password-file option that behaves the same way, so it's possible to rekey an already-encrypted vault file automatically, if you pass in a script that writes a new vault password to its stdout. (This operation also doesn't leak plaintext to disk.)
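
For example (the filenames are illustrative; each may be a file containing the password, or an executable that writes it to stdout):

$ ansible-vault rekey secrets.yml \
    --vault-password-file old-vault-pass.txt \
    --new-vault-password-file new-vault-pass.sh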

An incidental bugfix also makes it possible to pass multiple filenames to ansible-vault subcommands (i.e., it's now possible to encrypt, decrypt, and rekey more than one file at once; this behaviour was documented, but didn't work).

(Unfortunately, many more important vault changes didn't make it to this release.)

Use the ‘combine’ filter to merge hashes in Ansible 2

One of the most often-requested features in Ansible was a way to merge hashes. This has been discussed many times on the mailing lists, on IRC, and on Stack Overflow, implemented in at least five different pull requests submitted to Ansible, and in who knows how many private filter plugins.

Ansible 2 (currently in β2) finally includes a way to do this: the ‘combine’ filter. The filter documentation has examples of its use, but here's the basic idea:

{'a':1, 'b':2}|combine({'b':3})
    → {'a':1, 'b':3}
{'a':{'x':1}}|combine({'a':{'y':2}}, recursive=True)
    → {'a':{'x':1, 'y':2}}

The “hash_behaviour=merge” configuration setting offers similar (recursive-only) functionality, but it's a global setting, and not convenient to use.

The new combine filter makes it possible to build up hashes using set_fact. Note the use of default({}) to address the possibility that x is not defined.

# x → {'a': 111, 'b': 222, 'c': 333}
- set_fact:
    x: "{{ x|default({})|combine({item.0: item.1}) }}"
  with_together:
    - ['a', 'b', 'c']
    - [111, 222, 333]

Thanks to the union filter, you can do the same with lists. Combining these techniques makes it possible to build up complex data structures dynamically.

# y → [{'a': 111}, {'b': 222}, {'c': 333}]
- set_fact:
    y: "{{ y|default([])|union([{item.0: item.1}]) }}"
  with_together:
    - ['a', 'b', 'c']
    - [111, 222, 333]