Task 1: Ansible Installation and Configuration: Task 2: Ad-Hoc Commands
Install ansible package on the control node (including any dependencies) and configure the
following:
1. Create a regular user automation with the password of devops. Use this user for all sample
exam tasks.
2. All playbooks and other Ansible configuration that you create for this sample exam should
be stored in /home/automation/plays.
Create a configuration file /home/automation/plays/ansible.cfg to meet the following requirements:
1. The roles path should include /home/automation/plays/roles, as well as any other path that
may be required for the course of the sample exam.
2. The inventory file path is /home/automation/plays/inventory.
3. Privilege escalation is disabled by default.
4. Ansible should be able to manage 10 hosts at a single time.
5. Ansible should connect to all managed nodes using the cloud_user user.
Create an inventory file /home/automation/plays/inventory with the following:
1. ansible2.hl.local is a member of the proxy host group.
2. ansible3.hl.local is a member of the webservers host group.
3. ansible4.hl.local is a member of the webservers host group.
4. ansible5.hl.local is a member of the database host group.
# Solution Task1
cat inventory
[proxy]
77726771c.mylabserver.com
[webservers]
77726772c.mylabserver.com
77726773c.mylabserver.com
[database]
77726774c.mylabserver.com
[prod:children]
database
cat ansible.cfg
[defaults]
roles_path = ./roles
inventory = ./inventory
remote_user = cloud_user
forks = 10
[privilege_escalation]
become = False
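The heading also mentions Task 2 (ad-hoc commands), but no solution is shown for it. A typical smoke test from the control node, assuming the inventory and ansible.cfg above are in place, might look like this:

```shell
# Verify connectivity to all managed nodes and that escalation to root works
ansible all -m ping
ansible all -m command -a 'id' -b
```

The first command confirms SSH connectivity and Python availability; the second confirms become works even though it is disabled by default (`-b` enables it per invocation).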
# Solution - Task 3
cat motd.yml
---
- name: Changing MOTD
hosts: all
become: yes
tasks:
- name: Copy the content to HAProxy
copy:
content: "Welcome to HAProxy server\n"
dest: /etc/motd
when: "'proxy' in group_names"
- name: Copy the content to Apache
copy:
content: "Welcome to Apache server\n"
dest: /etc/motd
when: "'webservers' in group_names"
- name: Copy the content to MySQL
copy:
content: "Welcome to MySQL server\n"
dest: /etc/motd
when: "'database' in group_names"
Task 4: Configure SSH Server
Create a playbook /home/automation/plays/sshd.yml that runs on all inventory hosts and configures
SSHD daemon as follows:
1. banner is set to /etc/motd
2. X11Forwarding is disabled
3. MaxAuthTries is set to 3
# Solution - Task 4
cat sshd.yml
---
- name: Change SSH configuration
hosts: all
become: yes
tasks:
- name: Change default banner path
lineinfile:
path: /etc/ssh/sshd_config
regexp: '^Banner'
line: 'Banner /etc/motd'
- name: X11 Forwarding is disabled
lineinfile:
path: /etc/ssh/sshd_config
regexp: '^X11Forwarding'
line: 'X11Forwarding no'
- name: MaxAuthTries is set to 3
lineinfile:
path: /etc/ssh/sshd_config
        regexp: '^#?MaxAuthTries'
        line: 'MaxAuthTries 3'
- name: Restart the sshd service
service:
name: sshd
state: restarted
enabled: yes
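Restarting sshd unconditionally at the end of the play makes it non-idempotent: the service bounces on every run even when nothing changed. A common refinement (not required by the task) is to notify a handler from each lineinfile task instead. A minimal sketch:

```yaml
# Sketch: restart sshd only when a config line actually changes
- name: Change SSH configuration
  hosts: all
  become: yes
  tasks:
    - name: Change default banner path
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^Banner'
        line: 'Banner /etc/motd'
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
        enabled: yes
```

Each of the three lineinfile tasks would carry the same `notify: restart sshd` line.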
# Solution - Task 6
[automation@control plays]$ mkdir vars
[automation@control plays]$ vim vars/user_list.yml
---
users:
- username: alice
uid: 1201
- username: vincent
uid: 1202
- username: sandy
uid: 2201
- username: patrick
---
- name: Create users
hosts: all
become: yes
vars_files:
    - ./vars/user_list.yml
- ./secret.yml
tasks:
    - name: Ensure group exists
group:
name: wheel
state: present
    - name: Create users in webservers
      user:
        name: "{{ item.username }}"
        group: wheel
        uid: "{{ item.uid }}"
        password: "{{ user_password | password_hash('sha512') }}"
        shell: /bin/bash
        update_password: on_create
      with_items: "{{ users }}"
      when:
        - ansible_fqdn in groups['webservers']
        - item.uid is defined
        - "item.uid|string|first == '1'"
- name: Create users in database
user:
name: "{{ item.username }}"
group: wheel
password: "{{ user_password | password_hash('sha512') }}"
shell: /bin/bash
uid: "{{ item.uid }}"
update_password: on_create
with_items: "{{ users }}"
      when:
        - ansible_fqdn in groups['database']
        - item.uid is defined
        - "item.uid|string|first == '2'"
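The play loads ./secret.yml, which is expected to define user_password (and, per the referenced task #5, database_password), but the file itself is never shown. It would normally be created as a vault-encrypted file, for example:

```shell
# Assumption: secret.yml defines user_password and database_password
ansible-vault create secret.yml

# Run the playbook supplying the vault password (playbook name is illustrative)
ansible-playbook --ask-vault-pass users.yml
```

A vault password file plus `--vault-password-file` works equally well if prompting is inconvenient.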
Task 7: Scheduled Tasks
Create a playbook /home/automation/plays/regular_tasks.yml that runs on servers in the proxy host
group and does the following:
1. A root crontab record is created that runs every hour.
2. The cron job appends the file /var/log/time.log with the output from the date command.
# Solution - Task 7
---
- name: Scheduled tasks
hosts: all
become: yes
tasks:
- name: Ensure file exists
file:
path: /var/log/time.log
state: touch
mode: 0644
- name: Create cronjob for root user
cron:
name: "check time"
hour: "*/1"
user: root
job: "date >> /var/log/time.log"
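A quick way to verify the result after the run, assuming the proxy group from the inventory above:

```shell
# The cron entry should appear in root's crontab on the proxy hosts
ansible proxy -m command -a 'crontab -l' -b
```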
Task 8: Software Repositories
Create a playbook /home/automation/plays/repository.yml that runs on servers in the database host
group and does the following:
1. A YUM repository file is created.
2. The name of the repository is mysql56-community.
3. The description of the repository is “MySQL 5.6 YUM Repo”.
4. Repository baseurl is http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/.
5. Repository GPG key is at http://repo.mysql.com/RPM-GPG-KEY-mysql.
6. Repository GPG check is enabled.
7. Repository is enabled.
# Solution - Task 8
---
- name: Software repositories
hosts: database
become: yes
tasks:
    - name: Create mysql repository
yum_repository:
name: mysql56-community
description: "MySQL 5.6 YUM Repo"
        baseurl: "http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/"
enabled: yes
gpgcheck: yes
gpgkey: "http://repo.mysql.com/RPM-GPG-KEY-mysql"
Alternative solution:
---
- name: Configure Repository
hosts: database
become: true
tasks:
- name: Configure MariaDB 10.5 Repository
yum_repository:
file: MariaDB-10.5
name: MariaDB-10.5
description: "MariaDB 10.5 YUM Repo"
baseurl: http://192.168.122.1/~didith/rhel8.2/MariaDB/10.5/
enabled: true
gpgcheck: true
        gpgkey: http://192.168.122.1/~didith/rhel8.2/MariaDB/10.5/RPM-GPG-KEY-MariaDB
state: present
- name: Check if MariaDB-10.5.repo file exists
stat:
path: /etc/yum.repos.d/MariaDB-10.5.repo
register: repo_result
- name: Print repo_result variable
debug:
var: repo_result
- name: Install the GPG Key for MariaDB 10.5 Repository
rpm_key:
        key: http://192.168.122.1/~didith/rhel8.2/MariaDB/10.5/RPM-GPG-KEY-MariaDB
state: present
when: repo_result['stat']['exists']
- name: Ensure module_hotfixes = 1 is present in config file
lineinfile:
path: /etc/yum.repos.d/MariaDB-10.5.repo
line: "module_hotfixes = 1"
insertafter: 'enabled = 1'
state: present
when: repo_result['stat']['exists']
[automation@control plays]$ ansible-playbook --syntax-check repository.yml
[automation@control plays]$ ansible-playbook repository.yml
[automation@control plays]$ ansible database -m command -a 'cat /etc/yum.repos.d/MariaDB-10.5.repo' -b
Task 9: Create and Work with Roles
Create a role called sample-mysql and store it in /home/automation/plays/roles. The role should
satisfy the following requirements:
1. A primary partition number 1 of size 800MB on device /dev/sdb is created.
2. An LVM volume group called vg_database is created that uses the primary partition created
above.
3. An LVM logical volume called lv_mysql is created of size 512MB in the volume group
vg_database.
4. An XFS filesystem on the logical volume lv_mysql is created.
5. Logical volume lv_mysql is permanently mounted on /mnt/mysql_backups.
6. mysql-community-server package is installed.
7. Firewall is configured to allow all incoming traffic on MySQL port TCP 3306.
8. MySQL root user password should be set from the variable database_password (see task
#5).
9. MySQL server should be started and enabled on boot.
10. MySQL server configuration file is generated from the my.cnf.j2 Jinja2 template with the
following content:
[mysqld]
bind_address = {{ ansible_default_ipv4.address }}
skip_name_resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Create a playbook /home/automation/plays/mysql.yml that uses the role and runs on hosts in the
database host group.
# Solution - Task 9
Roles for this task is located in https://github.com/khamidziyo/ex407/tree/master/roles
cat mysql.yml
Solution:
[automation@control plays]$ mkdir roles
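The role itself is only linked above. A minimal sketch of what roles/sample-mysql/tasks/main.yml might contain for the storage, firewall and service requirements is below; module names are the standard parted, lvg, lvol, filesystem, mount and firewalld modules, while exact sizes and device names follow the task text:

```yaml
# Sketch of roles/sample-mysql/tasks/main.yml (storage/firewall portion only;
# package install, root password and my.cnf.j2 template tasks would follow)
- name: Create primary partition 1 of 800MB on /dev/sdb
  parted:
    device: /dev/sdb
    number: 1
    part_end: 800MB
    state: present

- name: Create volume group vg_database on the new partition
  lvg:
    vg: vg_database
    pvs: /dev/sdb1

- name: Create 512MB logical volume lv_mysql
  lvol:
    vg: vg_database
    lv: lv_mysql
    size: 512m

- name: Create XFS filesystem on lv_mysql
  filesystem:
    fstype: xfs
    dev: /dev/vg_database/lv_mysql

- name: Mount lv_mysql permanently on /mnt/mysql_backups
  mount:
    path: /mnt/mysql_backups
    src: /dev/vg_database/lv_mysql
    fstype: xfs
    state: mounted

- name: Allow incoming MySQL traffic on TCP 3306
  firewalld:
    port: 3306/tcp
    permanent: yes
    immediate: yes
    state: enabled
```

The accompanying mysql.yml playbook would then simply apply the role to the database host group, analogous to apache.yml shown below for Task 10.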
# Solution - Task 10
Roles for this task is located in https://github.com/khamidziyo/ex407/tree/master/roles
Solution:
[automation@control plays]$ cd roles/
cat apache.yml
---
- name: Configure apache
hosts: webservers
become: yes
roles:
- sample-apache
Task 11: Download Roles From an Ansible Galaxy and Use Them
Use Ansible Galaxy to download and install geerlingguy.haproxy role in
/home/automation/plays/roles.
Create a playbook /home/automation/plays/haproxy.yml that runs on servers in the proxy host
group and does the following:
1. Use geerlingguy.haproxy role to load balance request between hosts in the webservers host
group.
2. Use roundrobin load balancing method.
3. HAProxy backend servers should be configured for HTTP only (port 80).
4. Firewall is configured to allow all incoming traffic on port TCP 80.
If your playbook works, then doing “curl http://ansible2.hl.local/” should return output from the
web server (see task #10). Running the command again should return output from the other web
server.
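Before writing the playbook, the role has to be fetched from Ansible Galaxy into the required path. One way, assuming the roles_path from ansible.cfg:

```shell
# Install the role into the plays roles directory
ansible-galaxy install geerlingguy.haproxy -p /home/automation/plays/roles
```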
# Solution - Task 11
---
- name: Configure HAPROXY
hosts: proxy
become: yes
roles:
- geerlingguy.haproxy
vars:
haproxy_frontend_port: 80
haproxy_frontend_mode: 'http'
haproxy_backend_balance_method: 'roundrobin'
haproxy_backend_servers:
- name: app1
address: 54.153.48.114:80
- name: app2
address: 18.144.27.107:80
tasks:
- name: Ensure firewalld and its dependencies are installed
yum:
name: firewalld
state: latest
- name: Ensure firewalld is running
service:
name: firewalld
state: started
enabled: yes
    - name: Ensure firewalld allows incoming traffic on TCP port 80
firewalld:
port: 80/tcp
permanent: yes
immediate: yes
state: enabled
[automation@control plays]$ ansible-playbook --syntax-check haproxy.yml
[automation@control plays]$ ansible-playbook haproxy.yml
[automation@control plays]$ ansible proxy -m command -a 'systemctl is-active haproxy' -b
node1.example.com | CHANGED | rc=0 >>
active
[automation@control plays]$ ansible proxy -m command -a 'systemctl is-enabled haproxy' -b
node1.example.com | CHANGED | rc=0 >>
enabled
[automation@control plays]$ curl http://node1.example.com
# Solution - Task 13
---
- name: Set sysctl Parameters
hosts: all
become: true
tasks:
- name: Print error message
debug:
msg: "Server memory less than 2048MB"
      when: ansible_facts['memtotal_mb'] < 2048
- name: set vm.swappiness to 10
sysctl:
name: vm.swappiness
value: '10'
state: present
      when: ansible_facts['memtotal_mb'] >= 2048
# Solution - Task 14
---
- name: Use Archiving
hosts: database
become: yes
tasks:
    - name: Check if the directory exists
stat:
path: /mnt/mysql_backups/
register: backup_directory_status
- name: Create directory when not exist
file:
path: /mnt/mysql_backups/
state: directory
mode: 0775
owner: root
group: root
      when: not backup_directory_status.stat.exists
- name: Copy the content
copy:
content: "dev,test,qa,prod"
dest: /mnt/mysql_backups/database_list.txt
- name: Create archive
archive:
path: /mnt/mysql_backups/database_list.txt
dest: /mnt/mysql_backups/archive.gz
format: gz
# Solution - Task 15
---
- name: Work with Ansible Facts
hosts: database
become: yes
tasks:
    - name: Ensure directory exists
file:
path: /etc/ansible/facts.d
state: directory
recurse: yes
- name: Copy the content to the file
copy:
content: "[sample_exam]\nserver_role=mysql\n"
dest: /etc/ansible/facts.d/custom.fact
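Custom facts dropped into /etc/ansible/facts.d show up under ansible_local on the next fact gathering, which can be confirmed with an ad-hoc setup call:

```shell
# The sample_exam section should appear under ansible_local
ansible database -m setup -a 'filter=ansible_local' -b
```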
# Solution - Task 16
---
- name: Install packages
hosts: all
become: yes
tasks:
    - name: Install tcpdump and mailx packages on hosts in the proxy host group
yum:
name:
- tcpdump
- mailx
state: latest
when: inventory_hostname in groups['proxy']
    - name: Install lsof and mailx packages on hosts in the database host group
yum:
name:
- lsof
- mailx
state: latest
when: inventory_hostname in groups['database']
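The installs can be spot-checked per group with ad-hoc rpm queries, for example:

```shell
ansible proxy -m command -a 'rpm -q tcpdump mailx'
ansible database -m command -a 'rpm -q lsof mailx'
```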
# Solution - Task 17
---
- name: default boot target
hosts: webservers
become: yes
tasks:
- name: Set default boot target to multi-user
file:
src: /usr/lib/systemd/system/multi-user.target
dest: /etc/systemd/system/default.target
state: link
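The resulting default target can be verified without rebooting:

```shell
# Should report multi-user.target on the webservers
ansible webservers -m command -a 'systemctl get-default' -b
```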
Task 18. Create and Use Templates to Create Customised Configuration Files
Create a playbook /home/automation/plays/server_list.yml that does the following:
1. Playbook uses a Jinja2 template server_list.j2 to create a file /etc/server_list.txt on hosts in
the databasehost group.
2. The file /etc/server_list.txt is owned by the automation user.
3. File permissions are set to 0600.
4. SELinux file label should be set to net_conf_t.
5. The content of the file is a list of FQDNs of all inventory hosts.
After running the playbook, the content of the file /etc/server_list.txt should be the following:
ansible2.hl.local
ansible3.hl.local
ansible4.hl.local
ansible5.hl.local
Note: if the FQDN of any inventory host changes, re-running the playbook should update the file
with the new values.
# Solution - Task 18
cat server_list.j2
################
{% for host in groups.all %}
{{ hostvars[host].inventory_hostname }}
{% endfor %}
################
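The task asks for FQDNs, and inventory_hostname is only the name used in the inventory file, which may differ from a host's real FQDN. A variant of the template that prefers the gathered fact (falling back to the inventory name when facts for that host are unavailable, since the play only gathers facts on the database group) could look like:

```
{% for host in groups['all'] %}
{{ hostvars[host].ansible_fqdn | default(hostvars[host].inventory_hostname) }}
{% endfor %}
```

In this lab the inventory names are already FQDNs, so both templates produce the expected output shown above.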
cat server_list.yml
---
- name: Create and Use Templates to Create Customised Configuration Files
hosts: database
become: yes
tasks:
- name: Create server list
template:
src: ./server_list.j2
dest: /etc/server_list.txt
        owner: automation
mode: '0600'
setype: net_conf_t