
Task 1: Ansible Installation and Configuration

Install ansible package on the control node (including any dependencies) and configure the
following:
1. Create a regular user automation with the password of devops. Use this user for all sample
exam tasks.
2. All playbooks and other Ansible configuration that you create for this sample exam should
be stored in /home/automation/plays.
Create a configuration file /home/automation/plays/ansible.cfg to meet the following requirements:
1. The roles path should include /home/automation/plays/roles, as well as any other path that
may be required for the course of the sample exam.
2. The inventory file path is /home/automation/plays/inventory.
3. Privilege escalation is disabled by default.
4. Ansible should be able to manage 10 hosts at a single time.
5. Ansible should connect to all managed nodes using the cloud_user user.
Create an inventory file /home/automation/plays/inventory with the following:
1. ansible2.hl.local is a member of the proxy host group.
2. ansible3.hl.local is a member of the webservers host group.
3. ansible4.hl.local is a member of the webservers host group.
4. ansible5.hl.local is a member of the database host group.

# Solution Task1
cat inventory
[proxy]
77726771c.mylabserver.com
[webservers]
77726772c.mylabserver.com
77726773c.mylabserver.com
[database]
77726774c.mylabserver.com
[prod:children]
database
cat ansible.cfg
[defaults]
roles_path = ./roles
inventory = ./inventory
remote_user = cloud_user
forks = 10
[privilege_escalation]
become = False
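
Note that the lab solution above substitutes its own mylabserver hostnames. With the hostnames given in the task statement, the same inventory would read:

[proxy]
ansible2.hl.local

[webservers]
ansible3.hl.local
ansible4.hl.local

[database]
ansible5.hl.local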

Task 2: Ad-Hoc Commands


Create an SSH keypair. Write a script /home/automation/plays/adhoc that uses Ansible ad-hoc
commands to achieve the following:
User automation is created on all inventory hosts.
SSH key (that you generated) is copied to all inventory hosts for the automation user and stored
in /home/automation/.ssh/authorized_keys.
The automation user is allowed to elevate privileges on all inventory hosts without having to
provide a password.
After running the adhoc script, you should be able to SSH into all inventory hosts using the
automation user without a password, as well as run all privileged commands.
Response:

[automation@control plays]$ ssh root@node1.example.com


[automation@control plays]$ ssh root@node2.example.com
[automation@control plays]$ ssh root@node3.example.com
[automation@control plays]$ ssh root@node1
[automation@control plays]$ ssh root@node2
[automation@control plays]$ ssh root@node3
[automation@control plays]$ ssh-keygen -t rsa
[automation@control plays]$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1.example.com
[automation@control plays]$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2.example.com
[automation@control plays]$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3.example.com
[automation@control plays]$ ansible localhost -m debug -a "msg={{ 'devops' | password_hash('sha512','mysalt') }}"
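
The adhoc script itself is not shown above. A minimal sketch, assuming the key pair generated with ssh-keygen and the root SSH access prepared by the ssh-copy-id commands, could look like this (module arguments are illustrative, not the only possible form):

#!/bin/bash
# /home/automation/plays/adhoc -- sketch; connects as root using the key copied above
# Create the automation user with the hashed devops password
ansible all -u root -m user -a "name=automation password={{ 'devops' | password_hash('sha512') }}"
# Install the generated public key for the automation user
ansible all -u root -m authorized_key -a "user=automation state=present key='{{ lookup('file','/home/automation/.ssh/id_rsa.pub') }}'"
# Allow passwordless privilege escalation for the automation user
ansible all -u root -m lineinfile -a "path=/etc/sudoers.d/automation line='automation ALL=(ALL) NOPASSWD: ALL' create=yes mode=0440"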

Task 3: File Content


Create a playbook /home/automation/plays/motd.yml that runs on all inventory hosts and does the
following:
1. The playbook should replace any existing content of /etc/motd with text. Text depends on
the host group.
2. On hosts in the proxy host group the line should be “Welcome to HAProxy server”.
3. On hosts in the webserver host group the line should be “Welcome to Apache server”.
4. On hosts in the database host group the line should be “Welcome to MySQL server”.

# Solution - Task 3
cat motd.yml
---
- name: Changing MOTD
  hosts: all
  become: yes
  tasks:
    - name: Copy the content to HAProxy
      copy:
        content: "Welcome to HAProxy server\n"
        dest: /etc/motd
      when: "'proxy' in group_names"
    - name: Copy the content to Apache
      copy:
        content: "Welcome to Apache server\n"
        dest: /etc/motd
      when: "'webservers' in group_names"
    - name: Copy the content to MySQL
      copy:
        content: "Welcome to MySQL server\n"
        dest: /etc/motd
      when: "'database' in group_names"
Task 4: Configure SSH Server
Create a playbook /home/automation/plays/sshd.yml that runs on all inventory hosts and configures
SSHD daemon as follows:
1. banner is set to /etc/motd
2. X11Forwarding is disabled
3. MaxAuthTries is set to 3

# Solution - Task 4
cat sshd.yml
---
- name: Change SSH configuration
  hosts: all
  become: yes
  tasks:
    - name: Change default banner path
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^Banner'
        line: 'Banner /etc/motd'
    - name: X11 Forwarding is disabled
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^X11Forwarding'
        line: 'X11Forwarding no'
    - name: MaxAuthTries is set to 3
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?MaxAuthTries'
        line: 'MaxAuthTries 3'
    - name: Restart the sshd service
      service:
        name: sshd
        state: restarted
        enabled: yes
    - name: Check the Configuration
      shell: "grep MaxAuthTries /etc/ssh/sshd_config; grep X11Forwarding /etc/ssh/sshd_config; grep Banner /etc/ssh/sshd_config"
      register: check_result
    - name: Results
      debug:
        msg: "{{ check_result.stdout }}"

[automation@control plays]$ ansible all -m command -a 'cat /etc/ssh/sshd_config' -b
[automation@control plays]$ ssh automation@node1.example.com
Welcome to HAProxy server
Welcome to HAProxy server
Last login: Thu Sep 24 11:26:06 2020 from 192.168.122.50
[automation@node1 ~]$ exit
logout
Connection to node1.example.com closed.
[automation@control plays]$ ssh automation@node2.example.com
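
A variant that only restarts sshd when a setting actually changes (a sketch using a handler; same three sshd_config edits as above):

---
- name: Change SSH configuration (handler variant)
  hosts: all
  become: yes
  tasks:
    - name: Apply sshd settings
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
      loop:
        - { regexp: '^#?Banner', line: 'Banner /etc/motd' }
        - { regexp: '^#?X11Forwarding', line: 'X11Forwarding no' }
        - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
        enabled: yes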

Task 5: Ansible Vault


Create Ansible vault file /home/automation/plays/secret.yml. Encryption/decryption password is
devops.
Add the following variables to the vault:
1. user_password with value of devops
2. database_password with value of devops
Response:
[automation@control plays]$ ansible-vault create secret.yml
New Vault password:
Confirm New Vault password:
---
user_password: devops
database_password: devops
[automation@control plays]$ vim vault-key
devops
[automation@control plays]$ chmod 400 vault-key
[automation@control plays]$ ansible-vault view secret.yml --vault-password-file vault-key
user_password: devops
database_password: devops

Task 6: Users and Groups


You have been provided with the list of users below.
Use /home/automation/plays/vars/user_list.yml file to save this content.
---
users:
  - username: alice
    uid: 1201
  - username: vincent
    uid: 1202
  - username: sandy
    uid: 2201
  - username: patrick
    uid: 2202
Create a playbook /home/automation/plays/users.yml that uses the vault file
/home/automation/plays/secret.yml to achieve the following:
1. Users whose user ID starts with 1 should be created on servers in the webservers host
group. User password should be used from the user_password variable.
2. Users whose user ID starts with 2 should be created on servers in the database host group.
User password should be used from the user_password variable.
3. All users should be members of a supplementary group wheel.
4. Shell should be set to /bin/bash for all users.
5. Account passwords should use the SHA512 hash format.
After running the playbook, users should be able to SSH into their respective servers without
passwords.

# Solution - Task 6
[automation@control plays]$ mkdir vars
[automation@control plays]$ vim vars/user_list.yml
---
users:
  - username: alice
    uid: 1201
  - username: vincent
    uid: 1202
  - username: sandy
    uid: 2201
  - username: patrick
    uid: 2202

[automation@control plays]$ vim users.yml
---
- name: Create users
  hosts: all
  become: yes
  vars_files:
    - ./vars/user_list.yml
    - ./secret.yml
  tasks:
    - name: Ensure the wheel group exists
      group:
        name: wheel
        state: present
    - name: Create users on webservers
      user:
        name: "{{ item.username }}"
        uid: "{{ item.uid }}"
        groups: wheel
        append: yes
        password: "{{ user_password | password_hash('sha512') }}"
        shell: /bin/bash
        update_password: on_create
      with_items: "{{ users }}"
      when:
        - ansible_fqdn in groups['webservers']
        - "item.uid|string|first == '1'"
    - name: Create users in database
      user:
        name: "{{ item.username }}"
        uid: "{{ item.uid }}"
        groups: wheel
        append: yes
        password: "{{ user_password | password_hash('sha512') }}"
        shell: /bin/bash
        update_password: on_create
      with_items: "{{ users }}"
      when:
        - ansible_fqdn in groups['database']
        - "item.uid|string|first == '2'"
Task 7: Scheduled Tasks
Create a playbook /home/automation/plays/regular_tasks.yml that runs on servers in the proxy host
group and does the following:
1. A root crontab record is created that runs every hour.
2. The cron job appends the file /var/log/time.log with the output from the date command.

# Solution - Task 7
---
- name: Scheduled tasks
  hosts: proxy
  become: yes
  tasks:
    - name: Ensure file exists
      file:
        path: /var/log/time.log
        state: touch
        mode: 0644
    - name: Create cronjob for root user
      cron:
        name: "check time"
        minute: "0"
        hour: "*/1"
        user: root
        job: "date >> /var/log/time.log"
Task 8: Software Repositories
Create a playbook /home/automation/plays/repository.yml that runs on servers in the database host
group and does the following:
1. A YUM repository file is created.
2. The name of the repository is mysql56-community.
3. The description of the repository is “MySQL 5.6 YUM Repo”.
4. Repository baseurl is http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/.
5. Repository GPG key is at http://repo.mysql.com/RPM-GPG-KEY-mysql.
6. Repository GPG check is enabled.
7. Repository is enabled.

# Solution - Task 8
---
- name: Software repositories
  hosts: database
  become: yes
  tasks:
    - name: Create mysql repository
      yum_repository:
        name: mysql56-community
        description: "MySQL 5.6 YUM Repo"
        baseurl: "http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/"
        enabled: yes
        gpgcheck: yes
        gpgkey: "http://repo.mysql.com/RPM-GPG-KEY-mysql"
Another possible answer:
---
- name: Configure Repository
  hosts: database
  become: true
  tasks:
    - name: Configure MariaDB 10.5 Repository
      yum_repository:
        file: MariaDB-10.5
        name: MariaDB-10.5
        description: "MariaDB 10.5 YUM Repo"
        baseurl: http://192.168.122.1/~didith/rhel8.2/MariaDB/10.5/
        enabled: true
        gpgcheck: true
        gpgkey: http://192.168.122.1/~didith/rhel8.2/MariaDB/10.5/RPM-GPG-KEY-MariaDB
        state: present
    - name: Check if MariaDB-10.5.repo file exists
      stat:
        path: /etc/yum.repos.d/MariaDB-10.5.repo
      register: repo_result
    - name: Print repo_result variable
      debug:
        var: repo_result
    - name: Install the GPG Key for MariaDB 10.5 Repository
      rpm_key:
        key: http://192.168.122.1/~didith/rhel8.2/MariaDB/10.5/RPM-GPG-KEY-MariaDB
        state: present
      when: repo_result['stat']['exists']
    - name: Ensure module_hotfixes = 1 is present in config file
      lineinfile:
        path: /etc/yum.repos.d/MariaDB-10.5.repo
        line: "module_hotfixes = 1"
        insertafter: 'enabled = 1'
        state: present
      when: repo_result['stat']['exists']
[automation@control plays]$ ansible-playbook --syntax-check repository.yml
[automation@control plays]$ ansible-playbook repository.yml
[automation@control plays]$ ansible database -m command -a 'cat /etc/yum.repos.d/MariaDB-10.5.repo' -b
Task 9: Create and Work with Roles
Create a role called sample-mysql and store it in /home/automation/plays/roles. The role should
satisfy the following requirements:
1. A primary partition number 1 of size 800MB on device /dev/sdb is created.
2. An LVM volume group called vg_database is created that uses the primary partition created
above.
3. An LVM logical volume called lv_mysql is created of size 512MB in the volume group
vg_database.
4. An XFS filesystem on the logical volume lv_mysql is created.
5. Logical volume lv_mysql is permanently mounted on /mnt/mysql_backups.
6. mysql-community-server package is installed.
7. Firewall is configured to allow all incoming traffic on MySQL port TCP 3306.
8. MySQL root user password should be set from the variable database_password (see task
#5).
9. MySQL server should be started and enabled on boot.
10.MySQL server configuration file is generated from the my.cnf.j2 Jinja2 template with the
following content:
[mysqld]
bind_address = {{ ansible_default_ipv4.address }}
skip_name_resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Create a playbook /home/automation/plays/mysql.yml that uses the role and runs on hosts in the
database host group.

# Solution - Task 9
Roles for this task are located at https://github.com/khamidziyo/ex407/tree/master/roles

Solution:
[automation@control plays]$ mkdir roles

[automation@control plays]$ cd roles


[automation@control roles]$ ansible-galaxy init sample-mysql
[automation@control roles]$ vim ~/plays/roles/sample-mysql/tasks/main.yml
---
# tasks file for sample-mysql
- name: Create Primary partition on {{ hdd }}
  parted:
    device: "{{ hdd }}"
    number: 1
    part_start: 1MiB
    part_end: 800MiB
    part_type: primary
    state: present
    unit: MiB
- name: Create Volume group {{ vg_name }} using {{ hdd }}1
  lvg:
    pvs: "{{ hdd }}1"
    vg: "{{ vg_name }}"
    state: present
- name: Create a logical volume {{ lv_name }}
  lvol:
    lv: "{{ lv_name }}"
    vg: "{{ vg_name }}"
    size: 512M
    state: present
  when: lv_name not in ansible_facts['lvm']['lvs']
- name: Create xfs filesystem on {{ lv_name }}
  filesystem:
    dev: /dev/{{ vg_name }}/{{ lv_name }}
    fstype: xfs
- name: Ensure Correct Capacity for {{ lv_name }}
  lvol:
    lv: "{{ lv_name }}"
    vg: "{{ vg_name }}"
    size: 512M
    resizefs: true
- name: Mount the /dev/{{ vg_name }}/{{ lv_name }} on /mnt/mysql_backups
  mount:
    src: /dev/{{ vg_name }}/{{ lv_name }}
    path: /mnt/mysql_backups
    fstype: xfs
    state: mounted
- name: Install {{ packages }}
  yum:
    name: "{{ packages }}"
    state: present
  notify: change db password
- name: Start and enable {{ mariadb_svc }} service
  service:
    name: "{{ mariadb_svc }}"
    state: started
    enabled: true
- name: Copy my.cnf.j2 to /etc/my.cnf
  template:
    src: my.cnf.j2
    dest: /etc/my.cnf
  notify: restart mariadb
- name: Enable mysql in firewalld
  firewalld:
    service: mysql
    state: enabled
    permanent: true
    immediate: true
[automation@control roles]$ cat ~/plays/roles/sample-mysql/handlers/main.yml
---
# handlers file for sample-mysql
- name: restart mariadb
  service:
    name: "{{ mariadb_svc }}"
    state: restarted
- name: change db password
  mysql_user:
    name: root
    password: "{{ database_password }}"
    login_unix_socket: /var/lib/mysql/mysql.sock
    state: present
[automation@control roles]$ vim ~/plays/roles/sample-mysql/defaults/main.yml
---
# defaults file for sample-mysql
vg_name: vg_database
lv_name: lv_mysql
mariadb_svc: mariadb
packages:
  - MariaDB-server
  - python3-PyMySQL
database_password: redhat
[automation@control roles]$ vim ~/plays/roles/sample-mysql/vars/main.yml
---
# vars file for sample-mysql
hdd: /dev/vdb
[automation@control roles]$ vim ~/plays/roles/sample-mysql/templates/my.cnf.j2
{{ ansible_managed }}
[mysqld]
bind_address = {{ ansible_facts['default_ipv4']['address'] }}
skip_name_resolve
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
symbolic-links = 0
sql_mode = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysqld_safe]
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid
[automation@control roles]$ cd ..
[automation@control plays]$ vim mysql.yml
---
- name: Install mysql role
  hosts: database
  become: yes
  vars_files:
    - secret.yml
  roles:
    - sample-mysql
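
A run and quick check could look like this (a sketch; --vault-password-file reuses the vault-key file from Task 5, since mysql.yml loads secret.yml):

[automation@control plays]$ ansible-playbook --syntax-check mysql.yml
[automation@control plays]$ ansible-playbook mysql.yml --vault-password-file vault-key
[automation@control plays]$ ansible database -m command -a 'df -h /mnt/mysql_backups' -b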

Task 10: Create and Work with Roles (Some More)


Create a role called sample-apache and store it in /home/automation/plays/roles. The role should
satisfy the following requirements:
1. The httpd, mod_ssl and php packages are installed. Apache service is running and enabled
on boot.
2. Firewall is configured to allow all incoming traffic on HTTP port TCP 80 and HTTPS port
TCP 443.
3. Apache service should be restarted every time the file /var/www/html/index.html is
modified.
4. A Jinja2 template file index.html.j2 is used to create the file /var/www/html/index.html with
the following content:
The address of the server is: IPV4ADDRESS
IPV4ADDRESS is the IP address of the managed node.
Create a playbook /home/automation/plays/apache.yml that uses the role and runs on hosts in the
webservers host group.

# Solution - Task 10
Roles for this task are located at https://github.com/khamidziyo/ex407/tree/master/roles
Solution:
[automation@control plays]$ cd roles/

[automation@control roles]$ ansible-galaxy init sample-apache


[automation@control roles]$ cd ..
[automation@control plays]$ vim ~/plays/roles/sample-apache/tasks/main.yml
---
# tasks file for sample-apache
- name: Install {{ packages }}
  yum:
    name: "{{ packages }}"
    state: present
- name: Start and enable {{ web_svc }} service
  service:
    name: "{{ web_svc }}"
    state: started
    enabled: true
- name: Enable {{ firewall_rule }} in firewalld
  firewalld:
    service: "{{ item }}"
    state: enabled
    permanent: true
    immediate: true
  loop: "{{ firewall_rule }}"
- name: Copy the index.html.j2 template file to {{ web_doc_root }}
  template:
    src: index.html.j2
    dest: "{{ web_doc_root }}/index.html"
  notify: restart httpd
[automation@control plays]$ vim ~/plays/roles/sample-apache/defaults/main.yml
---
# defaults file for sample-apache
packages:
  - httpd
  - php
  - mod_ssl
web_svc: httpd
firewall_rule:
  - http
  - https
web_doc_root: /var/www/html
[automation@control plays]$ vim ~/plays/roles/sample-apache/handlers/main.yml
---
# handlers file for sample-apache
- name: restart httpd
  service:
    name: "{{ web_svc }}"
    state: restarted
[automation@control plays]$ vim ~/plays/roles/sample-apache/templates/index.html.j2
The address of the server is:{{ ansible_facts['default_ipv4']['address'] }}
[automation@control plays]$ vim apache.yml
---
- name: Configure Apache Web Server
  hosts: webservers
  become: true
  roles:
    - role: sample-apache
[automation@control plays]$ ansible-playbook --syntax-check apache.yml
[automation@control plays]$ ansible-playbook apache.yml
[automation@control plays]$ ansible webservers -m command -a 'systemctl is-active httpd' -b
node3.example.com | CHANGED | rc=0 >>
active
node2.example.com | CHANGED | rc=0 >>
active
[automation@control plays]$ ansible webservers -m command -a 'systemctl is-enabled httpd' -b
node2.example.com | CHANGED | rc=0 >>
enabled
node3.example.com | CHANGED | rc=0 >>
enabled
[automation@control plays]$ curl http://node2.example.com
The address of the server is:192.168.122.52
[automation@control plays]$ curl http://node3.example.com
The address of the server is:192.168.122.53
[automation@control plays]$ curl -k https://node2.example.com
The address of the server is:192.168.122.52
[automation@control plays]$ curl -k https://node3.example.com
The address of the server is:192.168.122.53

cat apache.yml

---
- name: Configure apache
  hosts: webservers
  become: yes
  roles:
    - sample-apache

Task 11: Download Roles From an Ansible Galaxy and Use Them
Use Ansible Galaxy to download and install geerlingguy.haproxy role in
/home/automation/plays/roles.
Create a playbook /home/automation/plays/haproxy.yml that runs on servers in the proxy host
group and does the following:
1. Use geerlingguy.haproxy role to load balance request between hosts in the webservers host
group.
2. Use roundrobin load balancing method.
3. HAProxy backend servers should be configured for HTTP only (port 80).
4. Firewall is configured to allow all incoming traffic on port TCP 80.
If your playbook works, then doing “curl http://ansible2.hl.local/” should return output from the
web server (see task #10). Running the command again should return output from the other web
server.

# Solution - Task 11
---
- name: Configure HAPROXY
  hosts: proxy
  become: yes
  roles:
    - geerlingguy.haproxy
  vars:
    haproxy_frontend_port: 80
    haproxy_frontend_mode: 'http'
    haproxy_backend_balance_method: 'roundrobin'
    haproxy_backend_servers:
      - name: app1
        address: 54.153.48.114:80
      - name: app2
        address: 18.144.27.107:80
  tasks:
    - name: Ensure firewalld and its dependencies are installed
      yum:
        name: firewalld
        state: latest
    - name: Ensure firewalld is running
      service:
        name: firewalld
        state: started
        enabled: yes
    - name: Ensure firewalld allows incoming traffic on port 80
      firewalld:
        port: 80/tcp
        permanent: yes
        immediate: yes
        state: enabled
[automation@control plays]$ ansible-playbook --syntax-check haproxy.yml
[automation@control plays]$ ansible-playbook haproxy.yml
[automation@control plays]$ ansible proxy -m command -a 'systemctl is-active haproxy' -b
node1.example.com | CHANGED | rc=0 >>
active
[automation@control plays]$ ansible proxy -m command -a 'systemctl is-enabled haproxy' -b
node1.example.com | CHANGED | rc=0 >>
enabled
[automation@control plays]$ curl http://node1.example.com
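
The role itself is installed into the roles path from Task 1 with ansible-galaxy, e.g.:

[automation@control plays]$ ansible-galaxy install geerlingguy.haproxy -p /home/automation/plays/roles

Note that the app1/app2 backend addresses in the play above are lab-specific IPs; in the sample-exam environment they should point at the hosts in the webservers host group.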

Task 12: Security


Create a playbook /home/automation/plays/selinux.yml that runs on hosts in the webservers host
group and does the following:
1. Uses the selinux RHEL system role.
2. Enables httpd_can_network_connect SELinux boolean.
3. The change must survive system reboot.
Response 1:
[automation@control plays]$ sudo yum install rhel-system-roles
[automation@control plays]$ ansible-galaxy list
[automation@control plays]$ mkdir -p group_vars/webservers
[automation@control plays]$ vim group_vars/webservers/selinux-vars.yml
---
selinux_policy: targeted
selinux_state: enforcing
selinux_booleans:
  - { name: 'httpd_can_network_connect', state: 'on', persistent: 'yes' }
[automation@control plays]$ vim selinux.yml
---
- name: Configure SELinux
  hosts: webservers
  become: true
  tasks:
    - name: Configure SELinux
      block:
        - include_role:
            name: rhel-system-roles.selinux
      rescue:
        # Fail if failed for a different reason than selinux_reboot_required.
        - name: handle errors
          fail:
            msg: "role failed"
          when: not selinux_reboot_required
        - name: restart managed host
          reboot:
            msg: "Ansible updates triggered"
          ignore_errors: true
        - name: reapply the role
          include_role:
            name: rhel-system-roles.selinux
[automation@control plays]$ ansible-playbook --syntax-check selinux.yml
[automation@control plays]$ ansible-playbook selinux.yml
[automation@control plays]$ ansible webservers -m shell -a "semanage boolean -l | grep httpd_can_network_connect" -b
node3.example.com | CHANGED | rc=0 >>
Response 2:
---
- name: Security playbook
  hosts: webservers
  become: yes
  vars:
    selinux_booleans:
      - name: httpd_can_network_connect
        state: 'on'
        persistent: yes
  roles:
    - linux-system-roles.selinux

Task 13: Use Conditionals to Control Play Execution


Create a playbook /home/automation/plays/sysctl.yml that runs on all inventory hosts and does the
following:
1. If a server has more than 2048MB of RAM, then parameter vm.swappiness is set to 10.
2. If a server has less than 2048MB of RAM, then the following error message is displayed:
Server memory less than 2048MB

# Solution - Task 13

---
- name: Set sysctl Parameters
  hosts: all
  become: true
  tasks:
    - name: Print error message
      debug:
        msg: "Server memory less than 2048MB"
      when: ansible_facts['memtotal_mb'] < 2048
    - name: set vm.swappiness to 10
      sysctl:
        name: vm.swappiness
        value: '10'
        state: present
      when: ansible_facts['memtotal_mb'] >= 2048

Task 14: Use Archiving


Create a playbook /home/automation/plays/archive.yml that runs on hosts in the database host
group and does the following:
1. A file /mnt/mysql_backups/database_list.txt is created that contains the following line:
dev,test,qa,prod.
2. A gzip archive of the file /mnt/mysql_backups/database_list.txt is created and stored in
/mnt/mysql_backups/archive.gz.

# Solution - Task 14
---
- name: Use Archiving
  hosts: database
  become: yes
  tasks:
    - name: Check if the directory exists
      stat:
        path: /mnt/mysql_backups/
      register: backup_directory_status
    - name: Create directory when it does not exist
      file:
        path: /mnt/mysql_backups/
        state: directory
        mode: 0775
        owner: root
        group: root
      when: not backup_directory_status.stat.exists
    - name: Copy the content
      copy:
        content: "dev,test,qa,prod"
        dest: /mnt/mysql_backups/database_list.txt
    - name: Create archive
      archive:
        path: /mnt/mysql_backups/database_list.txt
        dest: /mnt/mysql_backups/archive.gz
        format: gz

Task 15: Work with Ansible Facts


Create a playbook /home/automation/plays/facts.yml that runs on hosts in the database host group
and does the following:
1. A custom Ansible fact server_role=mysql is created that can be retrieved from
ansible_local.custom.sample_exam when using Ansible setup module.

# Solution - Task 15
---
- name: Work with Ansible Facts
  hosts: database
  become: yes
  tasks:
    - name: Ensure the directory exists
      file:
        path: /etc/ansible/facts.d
        state: directory
        recurse: yes
    - name: Copy the content to the file
      copy:
        content: "[sample_exam]\nserver_role=mysql\n"
        dest: /etc/ansible/facts.d/custom.fact
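
After running the playbook, the custom fact can be retrieved through the setup module, e.g.:

[automation@control plays]$ ansible database -m setup -a 'filter=ansible_local' -b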

Task 16: Software Packages


Create a playbook /home/automation/plays/packages.yml that runs on all inventory hosts and does
the following:
1. Installs tcpdump and mailx packages on hosts in the proxy host group.
2. Installs lsof and mailx packages on hosts in the database host group.

# Solution - Task 16
---
- name: Install packages
  hosts: all
  become: yes
  tasks:
    - name: Install tcpdump and mailx packages on hosts in the proxy host group
      yum:
        name:
          - tcpdump
          - mailx
        state: latest
      when: inventory_hostname in groups['proxy']
    - name: Install lsof and mailx packages on hosts in the database host group
      yum:
        name:
          - lsof
          - mailx
        state: latest
      when: inventory_hostname in groups['database']

Task 17: Services


Create a playbook /home/automation/plays/target.yml that runs on hosts in the webservers host
group and does the following:
1. Sets the default boot target to multi-user.

# Solution - Task 17
---
- name: default boot target
  hosts: webservers
  become: yes
  tasks:
    - name: Set default boot target to multi-user
      file:
        src: /usr/lib/systemd/system/multi-user.target
        dest: /etc/systemd/system/default.target
        state: link
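
A quick check that the link took effect:

[automation@control plays]$ ansible webservers -m command -a 'systemctl get-default' -b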

Task 18. Create and Use Templates to Create Customised Configuration Files
Create a playbook /home/automation/plays/server_list.yml that does the following:
1. Playbook uses a Jinja2 template server_list.j2 to create a file /etc/server_list.txt on hosts in
the database host group.
2. The file /etc/server_list.txt is owned by the automation user.
3. File permissions are set to 0600.
4. SELinux file label should be set to net_conf_t.
5. The content of the file is a list of FQDNs of all inventory hosts.
After running the playbook, the content of the file /etc/server_list.txt should be the following:
ansible2.hl.local
ansible3.hl.local
ansible4.hl.local
ansible5.hl.local
Note: if the FQDN of any inventory host changes, re-running the playbook should update the file
with the new values.

# Solution - Task 18
cat server_list.j2
################
{% for host in groups.all %}
{{ hostvars[host].inventory_hostname }}
{% endfor %}
################
cat server_list.yml
---
- name: Create and Use Templates to Create Customised Configuration Files
  hosts: database
  become: yes
  tasks:
    - name: Create server list
      template:
        src: ./server_list.j2
        dest: /etc/server_list.txt
        owner: automation
        mode: '0600'
        setype: net_conf_t
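
To satisfy the note about changing FQDNs, the template can read the gathered fqdn fact rather than the inventory name (a sketch; this requires a first play that gathers facts from every host before the database play runs):

{% for host in groups['all'] %}
{{ hostvars[host]['ansible_facts']['fqdn'] }}
{% endfor %}

and, at the top of server_list.yml:

- name: Gather facts from every inventory host
  hosts: all
  become: yes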