Ansible, more than just SSH

I often see the statement that Ansible manages clients using SSH or WinRM. While true, it is also incomplete.

Ansible currently has 26 connection types, which you can find in the Ansible connection plugin documentation.

For me personally, some of the other interesting connection types are:

  • netconf
  • network_cli

    netconf and network_cli are commonly used to perform network device automation.
  • psrp

    psrp is similar to WinRM; however, it has the added benefit of being usable via a proxy, which is very useful when you have to consider bastion hosts.
  • vmware_tools

    vmware_tools is a relatively new addition to the Ansible family and allows you to execute commands on, and transfer files to, vSphere-based systems without using the VM network.

Most Ansible developers will also have used the local connection type in many of their playbooks, probably without realizing that it was a different connection type.
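As a quick illustration (a minimal play of my own devising, not from the connection plugin docs), selecting a connection type is just a play or inventory keyword:

```yaml
# 'local' runs tasks on the control node itself - no SSH involved
- name: Demonstrate the local connection type
  hosts: localhost
  connection: local
  tasks:
    - name: Show that this task ran on the control node
      debug:
        msg: "This task executed locally via the 'local' connection plugin"
```

The same `connection:` keyword (or the `ansible_connection` inventory variable) is how you select netconf, network_cli, psrp and friends.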

Ansible is also extensible. If you need to connect to something weird and wacky (but of great importance to you), then you can develop your own modules and connection plugin (or other sorts of plugins) – see the Ansible developer guide.

Ansible’s versatility doesn’t end there, though, and many newcomers to Ansible don’t realise that you can also manage multiple clouds, container platforms and virtualisation platforms – it’s the Swiss Army knife of IT automation.


Adding a custom credential type in Ansible Tower for ServiceNow

It’s been one of those weeks and I needed to get some more experience with the Ansible ServiceNow modules, specifically within Ansible Tower. It looked pretty simple and, in fact, it was.

Ansible Tower neatly stores credentials within it – or externally, if that fills you with joy. There isn’t a ServiceNow credential type in Ansible Tower, though. Undeterred, I thought I would use machine credentials, but Tower has an annoying behavior of only allowing one instance of each credential type attached to a job template, and I am already using machine credentials in my template.

Fortunately, on the left-hand side of the Tower UI there’s an entry labelled Credential Types.

When creating the credential type you need to supply two pieces of information: the Input configuration, which defines what the fields look like in the web UI when you create a credential of this type, and the Injector configuration, which details what to do with the credentials.

In my case, the new credential type is called SNOW and I’m providing the instance name, username and password as part of the structure for this credential via the Input configuration. I then specify that I want this data stored in environment variables that will be accessible from my playbook when it runs in Tower.

Input configuration

  fields:
    - id: instance
      type: string
      label: Instance
    - id: username
      type: string
      label: Username
    - id: password
      type: string
      label: Password
      secret: true
  required:
    - instance
    - username
    - password

Injector configuration

  env:
    SN_INSTANCE: '{{ instance }}'
    SN_PASSWORD: '{{ password }}'
    SN_USERNAME: '{{ username }}'

Using these credentials in your playbook is quite simple. The following playbook snippet shows how.

   - name: Create an incident
     snow_record:
       username: '{{ lookup("env", "SN_USERNAME") }}'
       password: '{{ lookup("env", "SN_PASSWORD") }}'
       instance: '{{ lookup("env", "SN_INSTANCE") }}'
       state: present
       data:
         short_description: "This is a test incident opened by Ansible"
         severity: 3
         priority: 2
     register: new_incident
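The registered result can then be inspected; to the best of my knowledge snow_record returns the created record under a `record` key, so a follow-up task (illustrative, not from my original playbook) could report the incident number:

```yaml
- name: Report the new incident number
  debug:
    msg: "Created incident {{ new_incident.record.number }}"
```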


ESXi 6+ PXE Boot from CentOS 8 – Nope?

I was rebuilding some lab ESXi physical hosts, and also thought I’d upgrade my ‘builder’ system to CentOS 8. My builder system uses a bunch of Ansible playbooks to create the necessary DHCP, TFTP, etc. configuration to support PXE booting multiple OS types – including ESXi 6.5/6.7.

I started with test builds of CentOS 7/8 using my now-CentOS 8 build server, and it was all fine.

However… when I tried to build ESXi 6.5+, TFTP delivered the ESXi mboot.c32 file to the host (via syslinux 6.04, which is new in CentOS 8) but it couldn’t be loaded. After several hours of frustration I tried downgrading to the syslinux 3.86 version. Sadly, you can’t install that version on CentOS 8 without considerable grief.

I was able to install syslinux 4.05 on CentOS 8 and, lo and behold, the build process works. Clearly something in syslinux 6 doesn’t like PXE booting ESXi. I’m not sure what yet, but hopefully this blog post at least gives people a workaround to a frustrating problem.
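For reference, my builder playbook could stage the 4.05 loader with tasks along these lines (the kernel.org mirror URL and the TFTP root path are assumptions for illustration; adjust for your environment):

```yaml
- name: Fetch the syslinux 4.05 release archive
  get_url:
    url: https://mirrors.edge.kernel.org/pub/linux/utils/boot/syslinux/syslinux-4.05.tar.gz
    dest: /tmp/syslinux-4.05.tar.gz

- name: Unpack the archive on the build server
  unarchive:
    src: /tmp/syslinux-4.05.tar.gz
    dest: /tmp
    remote_src: true

- name: Copy the prebuilt PXE loader into the TFTP root
  copy:
    src: /tmp/syslinux-4.05/core/pxelinux.0
    dest: /var/lib/tftpboot/pxelinux.0
    remote_src: true
```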


CentOS 8 – where did Lynx go?

It’s always fun when you build a system at a new OS level and things have moved around. But having Lynx disappear made me a #sadpanda.

Fortunately, it wasn’t far away – it’s been moved to the PowerTools repository which you can enable with a quick:

dnf config-manager --set-enabled PowerTools

Then you can install my favourite little text-based web browser again:

dnf install lynx


Hyper-V meet RHEL8 – screen resolution

I’m running Hyper-V on my laptop and I’m also doing work with RHEL 8 desktops. Alas, the default screen resolution you get is the rather odd 1152×864.

In order to make this more reasonable, such as the 1920×1080 full-screen resolution of my laptop, you have to set the Hyper-V framebuffer resolution at boot time:

sudo grubby --update-kernel=ALL --args="video=hyperv_fb:1920x1080"

You’ll likely need to do this after each kernel update.

May the full screen be with you.


Ansible Tower – local_action + sudo?

There are many times when you run an Ansible playbook through Ansible Tower and you have to become a privileged user on the target system. This is business as usual for Ansible and Ansible Tower.

This is normally achieved by specifying become as part of your playbook, such as in this snippet:

- name: Patch Linux
  hosts: all
  gather_facts: true
  become: true

Typically, as part of a patching playbook, you would reboot the system and wait for the reboot to finish using a code fragment like this:

 - name: Wait for server to restart
   local_action:
     module: wait_for
     host: '{{ ansible_ssh_host }}'

This local_action inherits the become: true from the parent definition, and this is where Tower starts to complain. Remember, with Ansible Tower, it’s the Tower server itself where the local_action will run. You can expect to see something like this:

"module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n",

No, you SHOULD NOT enable the awx user to use sudo on the Tower system, as the AWX service user is intentionally restricted from sudo operations. The best approach is to de-privilege the local_action. Fortunately, local_action has its own become setting, so you can turn off the request for privileged access, as you don’t need it.

The above code block now becomes:

 - name: Wait for server to restart
   become: false
   local_action:
     module: wait_for
     host: '{{ ansible_ssh_host }}'

and the Tower job template will execute without any errors.
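Putting the pieces together, a minimal patching play might look like this (the update and reboot tasks are illustrative sketches, not from my production playbook):

```yaml
- name: Patch Linux
  hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Apply all available updates (privileged, runs on the target)
      yum:
        name: '*'
        state: latest

    - name: Reboot the target in the background
      shell: sleep 2 && shutdown -r now "Ansible patching reboot"
      async: 1
      poll: 0

    # Runs on the Tower server itself, so drop the privilege request
    - name: Wait for server to restart
      become: false
      local_action:
        module: wait_for
        host: '{{ ansible_ssh_host }}'
        port: 22
        delay: 30
```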


vRA, where is my template?

So, you’ve automated everything related to your template creation. You use Packer like a boss as part of your CI/CD toolchain. The templates are automatically placed onto your VMware environment, and you wait for a mystical event to occur where they become available to vRA so you can use them in blueprints… you sigh.

Yes, you know you can set the inventory refresh interval to an hour… an hour… OMG, you’ll be watching cat videos and will have forgotten what you were doing before that happens.

Yes, you also know that you can navigate the clicky-clicky world of vRA and refresh the inventory on demand as part of the data collection tasks. Sadly, Jenkins is a little unwilling to clicky-click and demands programmatic access (I know the REST API would be better for this use case; please humour me).

Well, Cloudclient comes to the rescue. In a previous post I introduced you to Cloudclient, a CLI interface to vRA.

The thing to note here is that the compute resources are managed by the IaaS servers and not the vRA appliance itself. Since you’ll be asking the IaaS server to do something (refresh the inventory) you’ll need to ensure your Cloudclient session is logged into the IaaS infrastructure.

vra login iaas --server {iaas-server-vip} --domain {domain} --user {user} --password {password}
vra computeresource list
vra computeresource datacollection start --name {resource-name} --waitforcompletion yes

You’re a wizard!