ThorneLabs

Install Ansible, Create an Inventory File, Create and Run an Ansible Playbook, and Run Ansible Commands

• Updated March 17, 2019


Ansible is part of the configuration management and orchestration family that includes Puppet, Chef, and SaltStack. Having only ever used Chef, I found Ansible to have a much lower learning curve; I spent more time using it than learning it. But, despite its ease of use, there is always some amount of pre-work needed to get started.

In this post I will step through how to install Ansible, create an Inventory File, create and run your first Ansible Playbook, and run ad hoc Ansible commands. I will be running everything from OS X Mavericks. With the possible exception of the installation, all of the other steps should work on modern Linux distributions.

Install Ansible

First, install pip, the Python package management system:

sudo easy_install pip

Using pip, install Ansible:

sudo pip install ansible

Using pip, you can also upgrade Ansible:

sudo pip install --upgrade ansible
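
To verify the installation, check the Ansible version:

ansible --version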

You’re now ready to begin using Ansible.

Create Your Ansible Directory

Create a directory somewhere to store your Ansible environment:

mkdir -p ~/Development/ansible-personal-servers/

Everything is self-contained within this directory. You will be working from this directory for the remainder of the post, so change into it:

cd ~/Development/ansible-personal-servers/

Create a file named hosts to be your Inventory File:

touch ~/Development/ansible-personal-servers/hosts

Open the hosts file with your favorite text editor and add entries for every server you want to manage with Ansible (the Inventory File is highly configurable; see the Ansible documentation for more information):

server1.example.com
server2.example.com

If server1.example.com and server2.example.com are resolvable via DNS, you are good to go. Otherwise, you will need to add an IP address to each entry:

server1.example.com ansible_ssh_host=10.0.0.1
server2.example.com ansible_ssh_host=10.0.0.2

You can group together your servers, and you can add as many groups as you like. For example, if server1.example.com is in Chicago and server2.example.com is in New York, you can group them like so:

server1.example.com
server2.example.com

[chicago]
server1.example.com

[newyork]
server2.example.com

With these groups in place, you can run arbitrary commands or Ansible Playbooks against only the servers in a particular group.
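
Groups can also contain other groups. For example, a parent group (hypothetically named usa here) can be defined with the :children suffix:

[usa:children]
chicago
newyork

Running a command against the usa group would then target the servers in both chicago and newyork.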

Create an Ansible Playbook

There are all sorts of ready-made community Ansible Playbooks available on Ansible Galaxy. However, to really understand how all of this works, you need to make your own from scratch.

You will create an Ansible Playbook named bootstrap.yml that will do the following tasks:

  • Change the root user’s password
  • Create the user remote
  • Set the remote user’s password
  • Upload your workstation’s SSH public key to the remote user
  • Add the remote user to the sudoers file
  • Disallow root SSH access
  • Disallow SSH password authentication
  • Disallow SSH GSS API authentication

First, create two directories: files and playbooks. Any files you want pushed to your Ansible managed servers will be stored in the files directory. Ansible Playbooks will be stored in the playbooks directory:

mkdir files

mkdir playbooks

Next, copy your SSH public key from your workstation to the files directory:

cp ~/.ssh/id_rsa.pub files/workstation.pub
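
If you do not already have an SSH key pair on your workstation, you can generate one first (accepting the default file locations will create the ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub files used above):

ssh-keygen -t rsa -b 4096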

Then, create a file named bootstrap.yml:

touch playbooks/bootstrap.yml

Now open bootstrap.yml with your favorite text editor and paste in the playbook that follows. Note that the root_password and remote_password variables must be set to hashed passwords, not plain text.
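
One way to generate a compatible hash from your OS X workstation is with openssl (a minimal sketch; MySecretPassword is a placeholder, and the -1 switch produces an MD5-crypt hash that most Linux systems will accept):

openssl passwd -1 'MySecretPassword'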

---
- hosts: all
  vars:
    root_password: 'HASHED_PASSWORD'
    remote_password: 'HASHED_PASSWORD'

  tasks:
  - name: Change root password
    user:
      name: root
      password: "{{ root_password }}"

  - name: Add user remote
    user:
      name: remote
      password: "{{ remote_password }}"

  - name: Add SSH public key to user remote
    authorized_key:
      user: remote
      key: "{{ lookup('file', '../files/workstation.pub') }}"

  - name: Add remote user to sudoers
    lineinfile:
      dest: /etc/sudoers
      regexp: '^remote ALL'
      line: 'remote ALL=(ALL) NOPASSWD: ALL'
      state: present

  - name: Disallow root SSH access
    lineinfile:
      dest: /etc/ssh/sshd_config
      regexp: '^PermitRootLogin'
      line: 'PermitRootLogin no'
      state: present
    notify:
      - restart sshd

  - name: Disallow SSH password authentication
    lineinfile:
      dest: /etc/ssh/sshd_config
      regexp: '^PasswordAuthentication'
      line: 'PasswordAuthentication no'
      state: present
    notify:
      - restart sshd

  - name: Disallow SSH GSS API authentication
    lineinfile:
      dest: /etc/ssh/sshd_config
      regexp: '^GSSAPIAuthentication'
      line: 'GSSAPIAuthentication no'
      state: present
    notify:
      - restart sshd

  handlers:
  - name: restart sshd
    service:
      name: sshd
      state: restarted

Save the Ansible Playbook. You are now ready to run it.
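
Before running the playbook against real servers, you can have ansible-playbook check it for syntax errors first:

ansible-playbook -i hosts playbooks/bootstrap.yml --syntax-check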

Run the Ansible Playbook

I’m going to assume the servers you have provisioned, and want to manage with Ansible, are very minimal and the only way to connect to them initially is through SSH as the root user using a password.

If you try to connect to these servers with Ansible for the first time, the SSH connection will fail because each server's SSH fingerprint is not in your workstation's known_hosts file. Getting each server's SSH fingerprint into your workstation's known_hosts file normally requires manual intervention, which defeats the purpose of automation.

One way to keep this from happening is to turn off host key checking. However, this is terribly insecure and not recommended. If a server’s SSH fingerprint changes, you will not be made aware of it and you could be connecting to a compromised server.
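
For reference, host key checking can be disabled with an environment variable before running Ansible (shown here only so you can recognize it; leaving host key checking enabled is strongly recommended):

export ANSIBLE_HOST_KEY_CHECKING=False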

A better, and more programmatic, option to initially get a server’s SSH fingerprint into your known_hosts file is to use the ssh-keyscan command. Use the following commands to do this:

ssh-keyscan server1.example.com >> ~/.ssh/known_hosts

ssh-keyscan server2.example.com >> ~/.ssh/known_hosts

If you have a lot of servers to run ssh-keyscan on, you could put all the server hostnames - assuming they can be resolved via DNS - into a text file and loop through them using a bash while loop:

while read -r host
do
  ssh-keyscan "$host" >> ~/.ssh/known_hosts
done < hostnames.txt

Now you can run the Ansible Playbook you created above by SSHing to the servers as the root user using a password. Note that for this to work properly, each server you are connecting to should have the same root password. If you encounter an sshpass error, you might need to install the sshpass program.

ansible-playbook -i hosts playbooks/bootstrap.yml --user root --ask-pass

Assuming the Ansible Playbook completed successfully, the following changes will have been made to your servers:

  • Changed the root user’s password
  • Created the remote user
  • Set remote user’s password
  • Uploaded your workstation’s SSH public key to the remote user
  • Added the remote user to the sudoers file
  • Disallowed root SSH access
  • Disallowed SSH password authentication
  • Disallowed SSH GSS API authentication

Now that each server has the remote user set up with your SSH public key, subsequent Ansible commands will use the remote user, instead of root, and sudo to connect and make changes:

ansible-playbook -i hosts playbooks/bootstrap.yml --user remote --sudo
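
Note that the --sudo switch has since been deprecated in favor of --become; on newer Ansible releases the equivalent command would be:

ansible-playbook -i hosts playbooks/bootstrap.yml --user remote --become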

Run Ansible Commands

In several of the following commands you will see --user remote --sudo added to the command. These command line switches are not needed if the user you are logged in as on your workstation is the same user you created and log in with on the target servers.

List All Hosts in the Inventory File

A quick way to get a list of all the servers Ansible is aware of:

ansible -i hosts all --list-hosts

See All Ansible Gathered Facts for a Particular Server

Each time Ansible is run, it gathers all sorts of information. This information is used during Ansible Playbook runs. Run the following command to see what information, also called facts, Ansible gathers for a particular server:

ansible -i hosts -m setup HOSTNAME

For example, see all gathered facts on server1.example.com:

ansible -i hosts -m setup server1.example.com --user remote --sudo
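
The setup module also accepts a filter parameter if you only want a subset of the facts, for example the distribution-related ones:

ansible -i hosts server1.example.com -m setup -a "filter=ansible_distribution*" --user remote --sudo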

Execute Arbitrary Commands On Servers

Execute a command on a particular group in your Inventory File:

ansible -i hosts GROUP -m shell -a "uptime"

For example, execute a command on all servers:

ansible -i hosts all -m shell -a "uptime" --user remote --sudo

Another example, execute a command on servers in group chicago:

ansible -i hosts chicago -m shell -a "uptime" --user remote --sudo

Execute a command on one server in your Inventory File:

ansible -i hosts HOSTNAME -m shell -a "uptime"

For example, execute a command on server1.example.com:

ansible -i hosts server1.example.com -m shell -a "uptime" --user remote --sudo
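
Ansible also includes a ping module, which is a quick way to verify Ansible can connect to and run modules on every server in the Inventory File:

ansible -i hosts all -m ping --user remote --sudo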

Other Useful Tips

As mentioned above, the directory you created to store your Ansible environment is self-contained. You could create another directory to store a completely different Ansible environment.

However, if you plan on having only one Ansible directory, you can add an ANSIBLE_HOSTS environment variable pointing to your Ansible Inventory File to your ~/.bash_profile so you no longer have to reference the Inventory File with the -i hosts command line switch:

echo "export ANSIBLE_HOSTS=~/Development/ansible-personal-servers/hosts" >> ~/.bash_profile

Close and re-open your terminal application, or re-source .bash_profile with source ~/.bash_profile, for this to take effect.
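
With ANSIBLE_HOSTS set, the -i hosts switch can be dropped from the commands above, for example:

ansible all -m shell -a "uptime" --user remote --sudo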