EX294 Free Practice Questions: "RedHat Red Hat Certified Engineer (RHCE) exam for Red Hat Enterprise Linux 8"
Create a playbook that changes the default target on all nodes to the multi-user target. Do this in a playbook file called target.yml in /home/sandy/ansible.
Correct answer:
---
- name: change default target
  hosts: all
  tasks:
    - name: change target
      file:
        src: /usr/lib/systemd/system/multi-user.target
        dest: /etc/systemd/system/default.target
        state: link
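A quick way to confirm the change on the managed nodes (assuming the inventory and SSH access from the lab setup are already in place):
# ansible all -m command -a 'systemctl get-default'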
Install the RHEL system roles package and create a playbook called timesync.yml that:
--> Runs over all managed hosts.
--> Uses the timesync role.
--> Configures the role to use the time server 192.168.10.254 (here, in the Red Hat lab, use "classroom.example.com").
--> Configures the role to set the iburst parameter as enabled.
Correct answer:
Solution:
# pwd
/home/admin/ansible/
# sudo yum install rhel-system-roles.noarch -y
# cd roles/
# ansible-galaxy list
# cp -r /usr/share/ansible/roles/rhel-system-roles.timesync .
# vim timesync.yml
---
- name: time synchronization
  hosts: all
  vars:
    timesync_ntp_provider: chrony
    timesync_ntp_servers:
      - hostname: classroom.example.com   # in the exam, use the given IP address
        iburst: yes
    timezone: Asia/Kolkata
  roles:
    - rhel-system-roles.timesync
  tasks:
    - name: set timezone
      timezone:
        name: "{{ timezone }}"
:wq!
# timedatectl list-timezones | grep -i kolkata
# ansible-playbook timesync.yml --syntax-check
# ansible-playbook timesync.yml
# ansible all -m shell -a 'chronyc sources -v'
# ansible all -m shell -a 'timedatectl'
# ansible all -m shell -a 'systemctl is-enabled chronyd'
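These steps assume an ansible.cfg in the working directory that points roles_path at the local roles directory; a minimal sketch (file names and values are examples, adjust to your lab):
[defaults]
inventory = ./inventory
remote_user = ansible
roles_path = ./roles
[privilege_escalation]
become = true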
In /home/sandy/ansible/ create a playbook called logvol.yml. In the play, create a logical volume called lv0 of size 1500MiB on volume group vg0. If there is not enough space in the volume group, print the message "Not enough space for logical volume" and then create an 800MiB lv0 instead. If the volume group doesn't exist at all, print the message "Volume group doesn't exist". Create an xfs filesystem on all lv0 logical volumes. Don't mount the logical volume.
Correct answer:
Solution:
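One possible playbook (a sketch only, using the lvol, filesystem and debug modules with block/rescue; the module choice and the ansible_facts lvm lookup are assumptions, not necessarily the official answer):
---
- name: create lv0 in vg0 where possible
  hosts: all
  tasks:
    - name: handle hosts where vg0 exists
      block:
        - name: try to create a 1500MiB lv0
          lvol:
            vg: vg0
            lv: lv0
            size: 1500m
      rescue:
        - name: warn about insufficient space
          debug:
            msg: "Not enough space for logical volume"
        - name: create an 800MiB lv0 instead
          lvol:
            vg: vg0
            lv: lv0
            size: 800m
      when: "'vg0' in ansible_facts['lvm']['vgs']"
    - name: warn when the volume group is missing
      debug:
        msg: "Volume group doesn't exist"
      when: "'vg0' not in ansible_facts['lvm']['vgs']"
    - name: put an xfs filesystem on lv0 (the volume is not mounted)
      filesystem:
        fstype: xfs
        dev: /dev/vg0/lv0
      when: "'vg0' in ansible_facts['lvm']['vgs']"
As with the other playbooks, run it through ansible-playbook logvol.yml --syntax-check before applying.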

Topic 1, LAB SETUP
You will need to set up your lab by creating 5 managed nodes and one control node.
So 6 machines total. Download the free RHEL8 iso from Red Hat Developers website.
***Control node setup***
You need to configure static IPs on your managed nodes, then register them on the control node in the /etc/hosts file as follows:
vim /etc/hosts
10.0.2.21 node1.example.com
10.0.2.22 node2.example.com
10.0.2.23 node3.example.com
10.0.2.24 node4.example.com
10.0.2.25 node5.example.com
yum -y install ansible
useradd ansible
echo password | passwd --stdin ansible
echo "ansible ALL=(ALL) NOPASSWD:ALL
su - ansible; ssh-keygen
ssh-copy-id node1.example.com
ssh-copy-id node2.example.com
ssh-copy-id node3.example.com
ssh-copy-id node4.example.com
ssh-copy-id node5.example.com
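You will also need an inventory on the control node. Group membership depends on which exercises you practise; a minimal sketch using the hostnames above and the webservers/balancers groups referenced in the balancer exercise below:
# vim inventory
node1.example.com
node2.example.com
[webservers]
node3.example.com
node4.example.com
[balancers]
node5.example.com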
***Each managed node setup***
First, add an extra 2GB virtual hard disk to each of managed nodes 1, 2 and 3. Then add an extra hard disk to managed node 4. Do not add an extra hard disk to node 5. When you start up these machines, the extra disks should appear automatically at /dev/sdb (or /dev/vdb, depending on your hypervisor).
useradd ansible
echo password | passwd --stdin ansible
echo "ansible ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/ansible
Note: python3 should be installed by default. If it is not, install it on both the control node and the managed nodes; also set python3 as the default python if python2 is being picked up by default.
yum -y install python3
alternatives --set python /usr/bin/python3
All machines need the repos available. You did this in RHCSA. To set them up locally, do the same on each machine. Attach the RHEL 8 ISO as a disk in VirtualBox, KVM or whatever hypervisor you are using (it will appear as /dev/sr0). Then, inside the machine:
mount /dev/sr0 /mnt
Then you will have all the files from the iso in /mnt.
mkdir /repo
cp -r /mnt /repo
vim /etc/yum.repos.d/base.repo
Inside this file:
[baseos]
name=baseos
baseurl=file:///repo/BaseOS
gpgcheck=0
Also create the AppStream repo:
vim /etc/yum.repos.d/appstream.repo
Inside this file:
[appstream]
name=appstream
baseurl=file:///repo/AppStream
gpgcheck=0
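With both repo files in place, a quick sanity check that the repositories resolve:
# yum repolist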

Create a playbook called balance.yml as follows:
* The playbook contains a play that runs on hosts in balancers host group and uses
the balancer role.
--> This role configures a service to load-balance web server requests between hosts in the webservers host group.
--> When implemented, browsing to hosts in the balancers host group (for example
http://node5.example.com) should produce the following output:
Welcome to node3.example.com on 192.168.10.z
--> Reloading the browser should return output from the alternate web server:
Welcome to node4.example.com on 192.168.10.a
* The playbook contains a play that runs on hosts in webservers host group and uses
the phphello role.
--> When implemented, browsing to hosts in the webservers host group with the URL /
hello.php should produce the following output:
Hello PHP World from FQDN
--> where FQDN is the fully qualified domain name of the host. For example,
browsing to http://node3.example.com/hello.php, should produce the following output:
Hello PHP World from node3.example.com
* Similarly, browsing to http://node4.example.com/hello.php, should produce the
following output:
Hello PHP World from node4.example.com
Correct answer:
Solution:
# pwd
/home/admin/ansible/
# vim balance.yml
---
- name: Including phphello role
  hosts: webservers
  roles:
    - ./roles/phphello

- name: Including balancer role
  hosts: balancers
  roles:
    - ./roles/balancer
:wq!
# ansible-playbook balance.yml --syntax-check
# ansible-playbook balance.yml
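The balancer and phphello roles themselves are not shown in this answer. As an illustration only (file names, package list and template content are assumptions, not the official role contents), the phphello role could install httpd and php and template a hello.php that prints the host's FQDN:
# cat roles/phphello/tasks/main.yml
---
- name: install httpd and php
  yum:
    name:
      - httpd
      - php
    state: present
- name: deploy hello.php
  template:
    src: hello.php.j2
    dest: /var/www/html/hello.php
- name: start and enable httpd
  service:
    name: httpd
    state: started
    enabled: yes
# cat roles/phphello/templates/hello.php.j2
Hello PHP World from {{ ansible_facts['fqdn'] }}
A firewall rule allowing the http service would also be needed on the web servers. Once everything is in place, the output can be checked with, for example:
# curl http://node3.example.com/hello.php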