
1 Introduction
Ansible is one of the most popular systems for remote server configuration management.
1.1 Benefits of using Ansible
- Ansible can automate large numbers of servers, for example in a datacenter
- Ansible scripts are easier to maintain than shell scripts and are easier to scale up
- Ansible makes it easier to deploy a desired state, as opposed to a shell script (which can go wrong, leaving a state that is difficult to recover from)
- Ansible makes it easier to work across different Linux distributions (as opposed to shell scripts)
- Ansible is a configuration management tool. Once a desired state is defined through Ansible, if the current state drifts, just reapply the desired state
- Ansible is easier to learn than other solutions such as Puppet, Chef, Salt, CFEngine etc.
- Ansible does not need an agent on managed servers. It uses SSH
- Ansible is modular and therefore flexible
- Ansible modules are usually written in Python, a highly popular language. There are hundreds of modules already available to administrators
1.2 Installing Ansible
1.2.1 Minimum Requirements
One server acting as the Controller Node and one server acting as a Managed Host. RHEL 7.3 or CentOS 7.3 (or above) are the recommended platforms.
1.2.2 Installing the Controller Node
To set up the controller node you need to install Python 2.x; the Ansible version that comes with the repos for 7.3 supports Python 2.x. If you are using a later version of Ansible (say 2.4 or later), Python 3 is also supported.
You need to add the EPEL repo since this is the one that contains Ansible. You will also need to create a non-root user which is used to perform all Ansible tasks.
1.2.3 Installing the Managed Node
The requirements are the same as the controller node. You need to install Python. You will also need to set up SSH communications.
1.2.4 Configuring SSH
Set up SSH key-based authentication using ssh-keygen. This creates a public key as well as a private key. The server that holds the public key sends a challenge that can only be answered with the matching private key. The private key should be kept in the local user account on the control node. The public key should be appended to the ~/.ssh/authorized_keys file in the target user's home directory. To transfer the public key, use the command ssh-copy-id user@remotehost. Notice that the local user name and the remote user name do NOT have to be the same, but it is convenient to have such a setup.
Note: if you also want to use Ansible to manage the controller node, run the same command locally to copy the public key to ~/.ssh/authorized_keys. In this case the command would be: ssh-copy-id user@localhost
1.3 Managing Inventory
After installation is complete, you can use Ansible against remote hosts. The remote hosts that need to be managed must be defined in an inventory file. (Please note that the inventory file is NOT similar to the hosts file found under /etc/hosts; it is set up completely differently and is specific to Ansible.) Hosts are registered in the inventory file using their Fully Qualified Domain Name (FQDN) or IP address. A host may be registered more than once, in different logical groups; this means that you can address different groups to manage desired state configurations, and a host being a member of more than one logical group is quite normal. When running Ansible commands you specify both the host names (or groups) and the inventory file you intend to use. You may have more than one inventory file.
For example:
ansible server1.cybg.com,server2.cybg.com -i myinventory --list-hosts
You would usually create an Ansible project directory in your home directory and put the inventory file in it. It is uncommon to use only one inventory in the whole Ansible environment: you might have many administrators accessing Ansible, each with a number of inventory files to manage their servers.
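As an illustration, a project inventory with logical groups might look as follows. The host and group names here are invented for the example; note that web2.example.com appears in two groups, which is perfectly normal:

```ini
; inventory - hosts may appear in more than one logical group
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
web2.example.com
```

Running ansible webservers -i inventory --list-hosts would then print only the two hosts in the webservers group.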
1.4 Hands on
1.4.1 Installing Ansible
Log onto the controller node
Switch to the root user:
su -
Password:
Enter the password.
Enter the command:
visudo
Search for the wheel group definition. You should see a line similar to the following:
%wheel ALL=(ALL) ALL
Note the line must be active as above (not commented out, i.e. no # at the front of the line). This means that members of wheel will be allowed to use the sudo command and perform administrator privileged tasks. This way there is no need to use the root account.
Now assuming you have created a user called user on both the Controller node and the managed host, add the user to the wheel group on both machines:
usermod -aG wheel user
Now we move on to installing Python, since this is a prerequisite for Ansible. On both the controller node and the managed host run the following command:
yum install -y python2 epel-release
Now let's install Ansible on the controller node:
yum install -y ansible
If you do not have FQDN name resolution set up for these machines, you can set up the naming using the /etc/hosts file. For example, on both the controller node and the managed host you can put the following lines in /etc/hosts:
192.168.4.200 controller.cybg.com controller
192.168.4.201 managedhost1.cybg.com managedhost1
1.4.2 Setting up SSH
On the controller node log out from root and log back in as user. Remember this user can now run sudo tasks.
Next we need to make sure that the remote host key is cached on the controller. On the controller node type in the following:
ssh managedhost1
You will be prompted: "The authenticity of this host can't be established. ECDSA key fingerprint is xxxxxxxxxxxx. Are you sure you want to continue connecting (yes/no)?"
Type in yes to proceed, type in the password to finish caching the key in the local configuration file, then close the SSH connection. Since Ansible relies heavily on SSH, we do not want Ansible to prompt for host key verification halfway through a set of commands.
Now we move on to generating the public/private key pair for the controller node. Type in the following:
ssh-keygen
Accept all the defaults; there is no need to change the default file name or enter any passphrase at this stage. Under /home/user/.ssh you will find two new files: the private key id_rsa and the public key id_rsa.pub. Now we copy the public key to managedhost1. Type the following:
ssh-copy-id managedhost1.cybg.com
Type in yes to confirm you want to proceed, and type in the password when prompted. Notice that if you first used only the hostname and now use the fully qualified domain name, you will be prompted again to accept the name and store it in the cache, so you now have two names cached on the controller node. Also notice that if you are going to perform tests using the all group defined in the inventory file, you need to ssh at least once to controller and controller.cybg.com as well. Essentially this must be done for every hostname and FQDN listed in a group in an inventory file, to avoid host key prompts.
So repeat the ssh copy command to the controller node so it is also set up with a public key.
ssh-copy-id controller.cybg.com
At this point we want to create a project directory for this exercise. Whenever a new Ansible project is in the works, a project directory is usually created to compartmentalize the work. Type the following on the controller node:
mkdir install
cd install
Next we create an inventory file called inventory with the following content:
[all]
controller.cybg.com
managedhost1.cybg.com
So the above content is defining a group called all which contains 2 hosts, controller.cybg.com and managedhost1.cybg.com. Now we are in a position to issue Ansible commands. Type the following:
ansible all -i inventory --list-hosts
  hosts (2):
    controller.cybg.com
    managedhost1.cybg.com
The above command says: select the group all defined in the file inventory (relative to the current directory) and run --list-hosts, which prints a list of all the hosts defined in that group.
1.5 Creating the Ansible Config File
When working with Ansible you need to pass a set of configuration options and these can be held in an Ansible configuration file. The file is called ansible.cfg and can be found in different locations on the controller node. For example:
- The generic file /etc/ansible/ansible.cfg
- The user specific file ~/.ansible.cfg
- The ansible.cfg file in the project directory (takes precedence)
It is common practice to use an ansible.cfg file in the project directory; alternatively you can specify exactly which ansible.cfg file to use by defining the $ANSIBLE_CONFIG environment variable.
It is very important to realize that only one ansible.cfg file is used; settings are not merged across files, so the file that is used must contain all the settings you need.
To find out which ansible.cfg file is being used, run the command ansible --version
In ansible.cfg the following entries can be specified:
become | specifies how to escalate privileges on managed hosts (for example using sudo) |
become_user | specifies which user account to become on the managed host after escalation (typically root) |
become_ask_pass | determines whether or not a password should be asked for when becoming another user |
inventory | specifies which inventory file to use |
remote_user | specifies the name of the user account on the managed machine. This is not set by default, which results in the local user name being used |
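Putting these entries together, a minimal project ansible.cfg might look like the following sketch. The user name and inventory file name are assumptions for the example:

```ini
; ansible.cfg in the project directory (hypothetical example)
[defaults]
inventory = inventory
remote_user = user

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
```

With this file in the project directory, ad-hoc commands and playbooks run from that directory no longer need the -i option or explicit privilege escalation flags.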
1.6 Understanding Privilege Escalation
- Ansible runs tasks on the managed hosts with the same user account as the local user. Make sure that the SSH keys are copied to the user’s SSH config on the remote host
- Set remote_user in ansible.cfg to specify another user to be used
- If remote_user is not specified, privilege escalation can be used
- Enable it in the [privilege_escalation] section in ansible.cfg as shown below
become=True
become_method=sudo
become_user=root
become_ask_pass=False
The above settings ensure that tasks are run using the root user through the sudo command and no password prompts will appear to stop a task from proceeding.
- Privilege escalation needs a sudo configuration in order to work. For the account that Ansible uses, create a sudo configuration file on all the Ansible managed hosts (including the controller node). A typical sudo file for the user user would be created as /etc/sudoers.d/user:
cat /etc/sudoers.d/user
user ALL=(ALL) NOPASSWD: ALL
1.7 Running Ansible ad-hoc commands
Ad-hoc commands are Ansible commands that you run directly on the command line. This is not typically how you want to use Ansible; you'll usually create playbooks to automate tasks against multiple managed servers. To quickly make changes to many managed hosts, however, ad-hoc commands are convenient. They can also be used for diagnostic purposes, like querying a large number of hosts. In ad-hoc commands, modules are typically used.
1.7.1 Understanding Modules
- A module is used to accomplish specific tasks in Ansible.
- Modules can run with their own specific arguments. They are written in Python.
- Modules are specified with the -m option followed by the name of the module.
- Module arguments are referred to with the -a option followed by the argument name.
- The default module can be set in ansible.cfg. It is predefined as the command module
- The command module allows you to run arbitrary Linux commands against managed hosts
- As command is the default module, it does not have to be referred to explicitly with the -m option
- Notice that the command module is not interpreted by a shell on the managed host, and for that reason cannot work with variables, pipes and redirects
- Consider using the shell module if you need full shell functionality
1.7.2 Three Common Modules
command | runs a command on a managed host |
shell | runs a command on a managed host through a shell on that host |
copy | copies a file, or writes given content to a target file, on a remote host |
1.7.3 Ad-hoc Command Examples
ansible all -i inventory -m command -a id | Runs the command module with the id command as its argument against all hosts defined in the group [all] in the inventory file |
ansible all -i inventory -m command -a id -o | Runs the same command as above, but condenses the output to a single line per host |
ansible all -i inventory -m command -a env | Intended to fail: the command module does not go through a shell, so shell features and shell built-ins are not available |
ansible all -i inventory -m shell -a env | Same as above but using the shell module, and is successful |
ansible managedhost1.cybg.com -m copy -a 'content="Managed by Ansible\n" dest=/etc/motd' | Runs the copy module against a specific managed host and writes a line to /etc/motd on the managed host |
1.8 Adding a Managed Host
Please follow the checklist / steps below to ensure a successful addition of a managed host to your Ansible setup.
- Make sure the Ansible user account that you use on the other managed hosts also exists on the new host
- Copy the ssh keys for the Ansible user account to the remote host
- Log in once using SSH, so that the remote host's host key is cached on the controller host
- Create a sudo configuration on the managed host
- Install python and ansible packages
- Update the inventory file
2 Ansible Architecture
2.1 Overview
The Ansible architecture is mainly composed of three components:
Managed hosts, running SSH | These hosts must have Python 2.7 or later |
Controller node, where the playbooks are created | This is where the inventory file and ansible.cfg reside, and Python 2.7 or later is installed |
Playbooks, the scripts to be executed | These build on Jinja2, which can be used to template playbook content; modules, written in Python, which provide customized functionality; and plugins, which are enhancements to Ansible functionality (like email alerts etc.) |
2.2 Ansible and Windows
- Windows is also supported, but through PowerShell, not SSH
- PowerShell scripts can be pushed and executed
- All Windows features can be managed
2.3 Running Ansible Deployments
In real life, in a server farm environment, a new server gets installed with some initial tool like Kickstart or similar. Once installed, Ansible can be used to finish the configuration and take care of multiple tasks, according to the needs of the server, for example:
- Configuration of software repositories
- Application installation
- Configuration file modifications
- Opening ports in the firewall
- Starting services
2.4 Connection Plugins
Ansible uses plugins to extend what the system is doing under the hood. Connection plugins provide specific communication between different elements, such as:
- native SSH
- paramiko SSH
- local
- chroot
- docker
2.5 Understanding Modules
Modules are programs that Ansible runs to perform tasks on managed hosts. They are included in playbooks, or they are referred to when running ad-hoc commands. Ansible comes with hundreds of modules, and administrators can write custom modules (in Python). Core modules are included with Ansible and maintained by the Ansible developers. Extra modules are additional modules that are maintained by the community; this may also include external communities, such as OpenStack. Custom modules are the modules that administrators develop. Module location depends on the Linux distribution. On CentOS they are in /usr/lib/python2.7/site-packages/ansible/modules. The best place to look for documentation on the modules is the authoritative documentation on docs.ansible.com.
You can also find information by using the command:
ansible-doc -l
To get module-specific information, use the following command:
ansible-doc <modulename>
Modules can be invoked using the ansible -m <modulename> command. For example:
ansible -m ping all
where all is the group defined in the inventory file.
Modules can also be included in an Ansible task in a playbook. For example:
tasks:
  - name: Install a package
    yum:
      name: vsftpd
      state: latest
3 Working with Playbooks
3.1 Understanding YAML
YAML, which stands for "YAML Ain't Markup Language", is a serialization standard that was developed to represent data structures in an easily readable way. Structures are represented using indentation, not braces, brackets, or opening and closing tags as in many other serialization standards. Space characters are used for indentation; the only requirement is that data elements at the same level in the hierarchy must have the same indentation. Do NOT use tabs for indentation! It is common, but not required, to start a YAML file with three dashes and to end it with three dots. This allows you to include YAML code in other files.
3.1.1 Sample YAML File
---
# example YAML file
item1:
  parameter1:
  parameter2:
    option1
    option2
item2:
  parameter3
...
Note: The indentation is 2 spaces long.
3.1.2 YAML File Contents
Typically in a YAML file you will define a dictionary: a set of key/value pairs, written in key: value notation. You can also use lists, which represent a list of items. List items are written as - item; the space after the - is mandatory. Strings can be enclosed in either double or single quotes, but quotes are not mandatory. In a multi-line string, the first line ends with either a | or a >, and the following lines are indented. To verify YAML file syntax, run ansible-playbook --syntax-check mycode.yaml.
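A small file exercising these constructs might look as follows (the key names and values are invented for the example):

```yaml
---
# dictionary: key: value pairs
user: linda
home: /home/linda
# list: one item per line, with a space after the dash
packages:
  - nmap
  - net-tools
# multi-line string: | preserves line breaks, > would fold them
welcome: |
  Welcome to this server.
  Unauthorized access is prohibited.
...
```

Running this through ansible-playbook --syntax-check would fail only because it is a vars-style file rather than a playbook; as a YAML structure it is valid.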
3.2 Creating Playbooks
3.2.1 Playbook Structure
A playbook is a collection of plays. Each play defines a set of tasks that are executed on the managed hosts. Tasks are performed using ansible modules. Ordering is important: Plays and tasks are executed in the order they are presented. A playbook defines a desired state. Ansible playbooks are idempotent. This means that playbooks will not change anything on a managed host that already is in the desired state. Avoid using modules like command, shell and raw as the commands they use are not idempotent by nature. Multiple playbooks may be defined, each playbook will have its own YAML file.
3.2.2 Playbook Contents: The Task Attribute
Different types of attributes may be used, depending on the included Ansible modules. The most important attribute is the tasks attribute:
tasks:
  - name: run service
    service: name=vsftpd enabled=true
In the above example, the - marks the beginning of a list of attributes. The service attribute is indented at the same level as name, which identifies it as another attribute of the same task. If multiple tasks are defined, the first attribute of each task starts with a -.
3.2.3 Playbook Contents: Other Attributes
These are the most common generic attributes:
name: | Used to assign a label to a play |
hosts: | Uses patterns to define on which hosts to run a play |
remote_user: | Overwrites the remote_user setting in ansible.cfg |
become: | Overwrites the become setting in ansible.cfg |
become_method: | Overwrites the become_method setting in ansible.cfg |
become_user: | Overwrites the become_user setting in ansible.cfg |
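A play header combining these attributes might look like the following sketch (the host group, user name and package are assumptions for the example):

```yaml
---
- name: configure web servers        # label for the play
  hosts: webservers                  # pattern matching hosts in the inventory
  remote_user: user                  # overrides remote_user from ansible.cfg
  become: true                       # escalate privileges on the managed hosts
  become_method: sudo
  become_user: root
  tasks:
    - name: install httpd
      yum:
        name: httpd
        state: latest
```

Attributes set in the play header override the corresponding settings in ansible.cfg for that play only.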
3.2.4 Formatting Playbooks
As playbooks get larger, formatting becomes more important to increase readability. Imagine a module that is invoked with multiple arguments: multi-line formatting allows you to spread the arguments over multiple lines, and dictionary formatting specifies all arguments on different lines using indentation. Block formatting allows you to group tasks. As you can see, there are multiple ways a playbook can be formatted.
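For instance, the same task can be written in single-line key=value form or in dictionary form; the dictionary form is generally easier to read as the number of arguments grows (the task itself is a sketch):

```yaml
# single-line key=value formatting
- name: run service
  service: name=vsftpd state=started enabled=true

# dictionary formatting: one argument per line
- name: run service
  service:
    name: vsftpd
    state: started
    enabled: true
```

Both forms are equivalent to Ansible; pick one style per project and stay consistent.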
3.2.5 Running Playbooks
Playbooks are executed with ansible-playbook
- Use ansible-playbook --syntax-check simple.yml for a syntax check
- Use ansible-playbook -C simple.yml for a dry run
- Use ansible-playbook --step simple.yml for a step-by-step execution
where simple.yml is an example playbook
4 Working with Variables, Inclusions and Task Control
4.1 Working with Variables
Using variables makes it easier to repeat tasks in complex playbooks; they are convenient for anything that needs to be done multiple times, such as creating users, removing files, installing packages etc. These are typically tasks where you do not want to hard-code the specific names of users, files or packages. A variable is a label that can be referred to from anywhere in the playbook, and it can contain different values, referring to anything. Variable names must start with a letter and can contain letters, underscores and numbers. Other characters (e.g. - or #) are not valid.
Variables can be defined at different levels. They can be defined with a different scope such as:
- Global scope: these variables are set from the command line or from the Ansible configuration file
- Play scope: these variables relate to the play and related structures
- Host scope: variables are defined on groups and individual hosts (this can be done through an inventory file)
When conflicting variables are defined at multiple levels, the higher level wins (so global scope wins over host scope).
Variables can be defined in a playbook or included from external files. See example below:
- hosts: all
  vars:
    user: linda
    home: /home/linda
When using variable files, a YAML file needs to be created to contain the variables, using a path relative to the playbook path. This file is called from the playbook, using vars_files:
- hosts: all
  vars_files:
    - vars/users.yml
Looking at the contents of vars/users.yml inside the project directory:

user: linda
home: /home/linda

Note that a flat YAML mapping cannot repeat a key, so a vars file like this can only define one user/home pair; to describe multiple users in one file, use an array as shown in the section on arrays.
4.1.1 Using Variables
To use variables in a playbook, the variable is referred to using double curly braces. If the variable is used as the first element to start a value, you need to use double quotes around the curly braces too. See example below:
tasks:
  - name: Creates the user {{ user }}
    user:
      name: "{{ user }}"
Notice the different uses of the variable user.
4.2 Managing Host Variable and Group Variables
A host variable is a variable that applies to one host that is defined in the inventory file. A group variable applies to multiple hosts as defined in a group in the inventory file. These variables may be defined in the inventory file, but that method is deprecated. The recommended method is to use group_vars and host_vars directories. So within the project directory, which contains the inventory file, create directories group_vars and host_vars.
If for example you have a host group called webservers that is defined in the inventory file, create a file with the name group_vars/webservers and in that file define the variable. The same goes for individual host variables: create a file with the name of the host and put it in host_vars.
At any time, variables can be overwritten from the command line using the -e "key=value" option to the ansible-playbook command.
4.2.1 Demo
Create a directory vardemo.
Under vardemo create an inventory file inventory with the following contents:
[webservers]
server1.example.com
server2.example.com

[ftpservers]
server3.example.com
server4.example.com
Under vardemo create a subdirectory group_vars. Under vardemo/group_vars create a file webservers with the following variable:
package: httpd
In the same vardemo/group_vars directory, create a file ftpservers with the following variable:
package: vsftpd
Now with this setup you can define a generic playbook that uses the variable package, which resolves to the string httpd when used against the webservers group and to vsftpd when used against the ftpservers group. This way we are separating static content from dynamic content, making playbook maintenance and reuse easy.
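Such a generic playbook could be sketched as follows (the file name and task names are invented; the groups come from the demo inventory above):

```yaml
---
# install.yml - {{ package }} resolves per group via group_vars
- name: install the group-specific package
  hosts: webservers:ftpservers
  become: true
  tasks:
    - name: install {{ package }}
      yum:
        name: "{{ package }}"
        state: latest
```

Run it with ansible-playbook -i inventory install.yml from the vardemo directory; each host gets the package defined for its group.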
4.3 Understanding Arrays
An array is a variable that defines multiple values, including specific properties. You refer to an element using dot notation, for example users.linda.first_name, given the following definition in vars/users.yml:
users:
  linda:
    first_name: linda
    last_name: thomsen
    home_dir: /home/linda
  anna:
    first_name: anna
    last_name: jomes
    home_dir: /home/anna
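A task referring to this array with dot notation might look like this (a sketch, assuming the vars file above has been loaded with vars_files):

```yaml
tasks:
  - name: create the home directory for linda
    file:
      path: "{{ users.linda.home_dir }}"   # resolves to /home/linda
      state: directory
```

The alternative bracket notation users['linda']['home_dir'] is equivalent and useful when the key itself is held in a variable.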
4.4 Understanding Facts
A fact contains discovered information about a host. Facts can be used in conditional statements to make sure certain tasks only happen if they are really necessary. The setup module is used to gather fact information. You can run Ansible on a host to gather the facts. For example:
ansible managed1.ansible.local -m setup
Facts provide a lot of information. Filters can be applied to the level 1 information displayed by the facts (level 1 is the first indentation level shown when displaying facts). To limit the output, apply a filter by passing the -a 'filter=...' option, for example:
ansible -i inventory managed1.ansible.local -m setup -a 'filter=ansible_kernel'
The result of the filter can then be used to assess conditionals. Say for example in a playbook, proceed to the next step and install a package provided the kernel version is above a certain level.
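In a playbook this could be sketched as a when condition on a gathered fact. The package and the distribution check are assumptions for the example:

```yaml
tasks:
  - name: install the package only on CentOS hosts
    yum:
      name: vsftpd
      state: latest
    when: ansible_distribution == "CentOS"
```

Facts are gathered automatically at the start of a play, so ansible_distribution is available to the condition without any extra setup task.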
4.4.1 Defining Custom Facts
You can also create and work with custom facts. Custom facts can be created by administrators to display information about a host. Custom facts must be defined in a file using the INI or JSON format and the .fact extension. The fact files must be stored in the /etc/ansible/facts.d directory and will be shown as an “ansible_local” fact. Below is an example of a .fact file:
[server_info]
profile = web_server
You expose facts using the following ansible command and filter:
ansible managed1.ansible.local -m setup -i inventory -a 'filter=ansible_local'
4.5 Using Inclusions
Inclusions make it easy to create a modular Ansible setup. Tasks can be included in a playbook from an external YAML file using the include directive. Using task inclusion makes sense in complex setups, as it allows for the creation of separate files for different tasks, which can be managed independently. If task inclusions are used, the main variables would be set in the master Ansible file, whereas the generic tasks are defined in the included files. Variables can be included from a YAML or JSON file using the include_vars directive; this method overrides any other method of defining variables. If you want to do this, make sure the include_vars happens before the actual usage of the variables. Notice that all these different ways of working with variables can make it difficult to find out which one is going to be effective.
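A sketch of both directives in one play (the file names are invented for the example):

```yaml
---
- hosts: all
  tasks:
    # pull in variables before any task uses them
    - include_vars: vars/users.yml
    # pull in a reusable task file kept in the project directory
    - include: tasks/create-users.yml
```

Keeping task files small and single-purpose makes them easy to reuse across playbooks.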
4.5.1 Variable Precedence
With all these different methods of defining variables, it is good to know about precedence. Check the following order:
- Variables defined with include_vars
- Variables with a global scope (set from the command line or Ansible configuration file)
- Variables defined by the playbook
- Variables defined at the host level
5 Using Flow Control, Conditionals and Jinja2 Templates
5.1 An Introduction to Flow Control
Flow control works with loops and conditionals to process items. A loop is used to process a series of values in an array (like creating multiple users, installing multiple packages etc.). A conditional is a task that is executed only if specific conditions are met (for example, using facts or variables: "{{ min_memory }} < 128").
5.1.2 Understanding Loop Types: Simple Loops
A simple loop is just a list of items that is processed through the with_items statement for example:
- yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - nmap
    - net-tools
Note: The item variable in the above example follows from the with_items loop type.
A more complex list of items, defining multiple properties per item, is shown in the example below:
- name: create users
  hosts: all
  tasks:
    - user:
        name: "{{ item.name }}"
        state: present
        groups: "{{ item.groups }}"
      with_items:
        - { name: 'Linda', groups: 'wheel' }
        - { name: 'Lisa', groups: 'root' }
5.1.3 Understanding Loop Types: Nested Loops
A nested loop is a loop inside a loop. When these are used, Ansible iterates over the first array, and applies the values in the second array to each item in the first array. This is useful if a series of tasks needs to be executed on items in the array. For example:
- name: give users access to multiple databases
  mysql_user:
    name: "{{ item[0] }}"
    priv: "{{ item[1] }}.*:ALL"
    append_privs: yes
    password: "foo"
  with_nested:
    - [ 'Linda', 'Lisa' ]
    - [ 'clientdb', 'employeedb', 'providerdb' ]
5.1.4 Understanding Other Loop Types
Ansible supports other loop types; for a full list see the documentation at docs.ansible.com/ansible/latest/playbooks_loops.html. Below is a list of loop types:
with_file: | Evaluates a list of files |
with_fileglob: | Evaluates a list of files based on a globbing pattern |
with_sequence: | Generates a sequence of items in increasing order |
with_random_choice: | Takes a random item from a list |
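As a quick illustration of with_sequence, the following sketch would create a set of numbered directories (the path is invented for the example):

```yaml
tasks:
  - name: create numbered test directories
    file:
      path: "/tmp/testdir{{ item }}"   # /tmp/testdir1 through /tmp/testdir3
      state: directory
    with_sequence: start=1 end=3
```

The item variable takes the values 1, 2 and 3 in turn, just as it takes list values with with_items.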
5.2 Understanding Conditionals
Conditionals can be used to run tasks on hosts that only meet the specific conditions. In conditionals, operators are used such as string comparison, mathematical operators and booleans. Conditionals can look at different items for validation. For example, values of registered variables, Ansible facts and output of commands. In conditionals, different operators are used. See the table below:
Equal | == | "{{ max_memory }} == 1024" |
Less than | < | "{{ min_memory }} < 128" |
Greater than | > | "{{ min_memory }} > 256" |
Less than or equal to | <= | "{{ min_memory }} <= 512" |
Greater than or equal to | >= | "{{ min_memory }} >= 1024" |
Not equal | != | "{{ min_memory }} != 512" |
Variable exists | is defined | "{{ min_memory }} is defined" |
Variable does not exist | is not defined | "{{ min_memory }} is not defined" |
Variable is set to yes, true or 1 | | "{{ available_memory }}" |
Variable is set to no, false or 0 | not | "not {{ available_memory }}" |
Value is present in a variable or array | in | "{{ users }} in users['db_admins']" |
5.2.1 Using the When Statement
The when statement is used to implement a condition. For example:
- name: Install the mariadb package
  package:
    name: mariadb
  when: inventory_hostname in groups["databases"]
Multiple conditions can be combined with the and and or keywords, or grouped with parentheses. For example:
when: ansible_kernel == "3.10.0-514.el7.x86_64" and ansible_distribution == "CentOS"

when: not ansible_apparmor and ansible_distribution == "SuSE"

Note that in a when statement variables are referenced bare, without curly braces, and string values are quoted.
5.3 Understanding Jinja2 Templates
Jinja2 templates are Python-based templates that are used to put host-specific data on hosts, using generic YAML and Jinja2 files. Jinja2 templates are used to modify files before they are sent to the managed host. Jinja2 can also be used to reference variables in playbooks. As advanced usage, Jinja2 loops and conditionals can be used in templates to generate very specific code. The host-specific data is generated through variables or facts.
Below is an example of a Jinja2 template, called motd.j2:
This is the system {{ ansible_hostname }}.
Today it is {{ ansible_date_time.date }}.
Only use this system if {{ system_owner }} has granted you permission.
The variables referred to above are set in a YAML playbook, in this case MOTD.yml. This playbook in turn refers to the motd.j2 template:
---
- hosts: all
  user: user
  become: true
  vars:
    system_owner: anna@example.com
  tasks:
    - template:
        src: motd.j2
        dest: /etc/motd
        owner: root
        group: root
        mode: 0644