Deep Dive Into CodeReady Containers Deployment On Linux
On the Linux host, CRC makes use of libvirt to create a network, a storage pool, and the CRC virtual machine. The virtual machine’s data is persisted on a libvirt volume, which ensures that the objects created by the user in OpenShift survive CRC restarts.
The OpenShift instance comes with a fixed set of PersistentVolumes. Users can attach these volumes to their application pods by creating PersistentVolumeClaims (PVCs). All of the volume access modes are supported: ReadWriteOnce, ReadWriteMany, and ReadOnlyMany. The content of the persistent volumes is stored on the OpenShift node (the CRC virtual machine) in the /var/mnt/pv-data directory.
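A minimal claim that can bind to one of these preset volumes might look as follows (the claim name and requested size are illustrative):

```yaml
# Illustrative PersistentVolumeClaim; any preset PV with a matching
# access mode and sufficient capacity can satisfy it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Applying the claim with oc apply -f pvc.yaml and referencing myapp-data in a pod’s volumes section attaches one of the preset volumes to the pod.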
How about the integrated image registry? The image registry in OpenShift is backed by a PersistentVolume and is exposed via a public route default-
route-openshift-image-registry.apps-crc.testing. Users can use this route to push their container images to the registry before launching
them on OpenShift.
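A push to this route could be sketched as follows. The project and image names are illustrative, and the actual podman commands are commented out because they require a running cluster; the route serves a self-signed certificate, hence --tls-verify=false:

```shell
# Sketch of a push to the CRC registry; 'myproject' and 'myapp' are
# illustrative names, not anything CRC creates for you.
REGISTRY=default-route-openshift-image-registry.apps-crc.testing
IMAGE="$REGISTRY/myproject/myapp:latest"
# podman login -u kubeadmin -p "$(oc whoami -t)" --tls-verify=false "$REGISTRY"
# podman tag localhost/myapp:latest "$IMAGE"
# podman push --tls-verify=false "$IMAGE"
echo "$IMAGE"
```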
The last thing to note in the diagram is the DNS configuration. Why is this needed? CRC configures the DNS resolution on the Linux host so that connections to the endpoints api.crc.testing and *.apps-crc.testing are routed to the OpenShift instance. NetworkManager, which is a requirement for CRC to work, is used to achieve this DNS configuration. CRC instructs NetworkManager to spin up a dnsmasq instance that forwards the DNS queries for the OpenShift endpoints to a second dnsmasq instance deployed inside the virtual machine. This second dnsmasq instance actually resolves the queries.
The CRC binary consists of four parts. The first part is the CRC executable itself. The second part is the admin-helper-linux utility, which is used for updating the /etc/hosts file. After that comes the crc-driver-libvirt daemon executable, which implements the functions specific to libvirt virtualization and abstracts the virtualization details away from the CRC core. Finally, the so-called CRC bundle (crc_libvirt_4.6.9.crcbundle in the diagram) is the last part of the CRC binary. This bundle contains a virtual machine image and accounts for the majority of the size of the CRC binary.
Now let’s get back to the CRC setup. At the initial stage, the ./crc setup command extracts all the components appended to the CRC executable and places them below the ~/.crc directory. The embedded CRC bundle is a tar.xz archive whose contents are immediately decompressed into ~/.crc/cache. This bundle contains the following files:
The crc-bundle-info.json carries the bundle metadata. CRC refers to it throughout the deployment process.
The virtual machine image crc.qcow2 contains a pre-installed OpenShift node. This image will be used as a backing image for the CRC virtual machine’s
disk image.
The id_ecdsa_crc bootstrap key is used by CRC for SSHing into the virtual machine at its first start. After connecting to the virtual machine, CRC generates a new unique SSH key pair and adds it to the machine’s ~core/.ssh/authorized_keys file. The original bootstrap SSH key is removed from this file and hence can no longer be used to access the virtual machine.
The kubeadmin-password file holds the password of the kubeadmin user on OpenShift.
The kubeconfig file allows logging into OpenShift as user kube:admin. It includes the user’s private key that is needed for successful authentication.
The oc executable is an oc client whose version matches the version of the bundled OpenShift cluster.
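Once the cluster is up (after crc start), the bundled client and credentials can be combined. A sketch, assuming the 4.6.9 bundle paths shown below; the oc invocations are commented out because they need a running cluster:

```shell
# Sketch: pointing the bundled oc client at the bundled credentials.
BUNDLE="$HOME/.crc/cache/crc_libvirt_4.6.9"
# Log in as kubeadmin using the bundled password:
#   "$BUNDLE/oc" login -u kubeadmin -p "$(cat "$BUNDLE/kubeadmin-password")" \
#       https://api.crc.testing:6443
# Or skip the login and use the bundled kubeconfig directly:
#   "$BUNDLE/oc" --kubeconfig "$BUNDLE/kubeconfig" get nodes
echo "$BUNDLE"
```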
After the extraction of the CRC components is complete, the ~/.crc directory looks like this:
$ tree --noreport .crc
.crc
├── bin
│   ├── admin-helper-linux
│   ├── crc-driver-libvirt
│   └── oc
│       └── oc -> /home/anosek/.crc/cache/crc_libvirt_4.6.9/oc
├── cache
│   ├── crc_libvirt_4.6.9
│   │   ├── crc-bundle-info.json
│   │   ├── crc.qcow2
│   │   ├── id_ecdsa_crc
│   │   ├── kubeadmin-password
│   │   ├── kubeconfig
│   │   └── oc
│   └── crc_libvirt_4.6.9.crcbundle
└── crc.json
The next notable step carried out by the CRC setup is configuring DNS on the Linux host. CRC configures DNS so that connections to the endpoints api.crc.testing and *.apps-crc.testing are routed to the OpenShift instance. It is known ahead of time that this OpenShift instance is going to expose its endpoints on the hard-coded IP address 192.168.130.11. So, how does CRC ensure the proper DNS resolution of the OpenShift endpoints? To achieve that, CRC creates the /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf file with the following configuration:
[main]
dns=dnsmasq
This configuration instructs NetworkManager to, first, spin up a dnsmasq instance and, second, modify the /etc/resolv.conf on the machine to use this instance as the default DNS server. In the next step, CRC configures the dnsmasq server by creating a /etc/NetworkManager/dnsmasq.d/crc.conf configuration file with the following content:
server=/apps-crc.testing/192.168.130.11
server=/crc.testing/192.168.130.11
This forwards DNS queries for the crc.testing and apps-crc.testing domains plus all their subdomains to the DNS server 192.168.130.11. This
DNS server will be deployed inside the CRC virtual machine and will be handling the resolution of the OpenShift endpoints.
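dnsmasq picks the most specific matching server= rule for each query name. That selection can be mimicked with a small sketch; match_rule is a hypothetical helper written for illustration, not part of CRC or dnsmasq:

```shell
# Sketch: which 'server=' rule from crc.conf would match a given query.
# dnsmasq prefers the longest matching domain, so apps-crc.testing is
# checked before its parent domain crc.testing.
match_rule() {
  case "$1" in
    *.apps-crc.testing|apps-crc.testing) echo "server=/apps-crc.testing/192.168.130.11" ;;
    *.crc.testing|crc.testing)           echo "server=/crc.testing/192.168.130.11" ;;
    *)                                   echo "upstream" ;;
  esac
}
match_rule api.crc.testing                             # server=/crc.testing/192.168.130.11
match_rule console-openshift-console.apps-crc.testing  # server=/apps-crc.testing/192.168.130.11
match_rule example.com                                 # upstream
```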
Note that the dnsmasq forwarder as described above is only created if your host doesn’t use systemd-resolved for DNS resolution. If your host uses
systemd-resolved, then CRC will configure the forwarding in systemd-resolved instead of spinning up the additional dnsmasq forwarder.
The last step executed by the ./crc setup command is creating a libvirt network. What needs to be done here? CRC creates a libvirt network called crc of type NAT. The only host on this network, the CRC virtual machine, will get the IP address 192.168.130.11:
$ virsh net-dumpxml crc
<network connections='1'>
  <name>crc</name>
  <uuid>49eee855-d342-46c3-9ed3-b8d1758814cd</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='crc' stp='on' delay='0'/>
  <mac address='52:54:00:fd:be:d0'/>
  <ip family='ipv4' address='192.168.130.1' prefix='24'>
    <dhcp>
      <host mac='52:fd:fc:07:21:82' ip='192.168.130.11'/>
    </dhcp>
  </ip>
</network>
This network will be hosting the CRC virtual machine. This virtual machine will be created with the MAC address 52:fd:fc:07:21:82. The above
configuration assigns the fixed IP address 192.168.130.11 to this virtual machine. Both the MAC address 52:fd:fc:07:21:82 and the IP address
192.168.130.11 are hard-coded in CRC.
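The fixed MAC-to-IP mapping can be pulled straight out of the network definition. A sketch that works on a saved copy of the XML, trimmed to the relevant elements (on a live host the input would come from virsh net-dumpxml crc):

```shell
# Sketch: extract the fixed DHCP mapping from a saved copy of the
# 'crc' network definition.
cat > /tmp/crc-net.xml <<'EOF'
<network>
  <name>crc</name>
  <ip family='ipv4' address='192.168.130.1' prefix='24'>
    <dhcp>
      <host mac='52:fd:fc:07:21:82' ip='192.168.130.11'/>
    </dhcp>
  </ip>
</network>
EOF
grep -o "ip='[0-9.]*'" /tmp/crc-net.xml   # ip='192.168.130.11'
```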
Creating the libvirt network was the last setup step that I wanted to discuss. In the next section, we are going to create and launch the CRC virtual
machine.
Conclusion
This blog covered the deployment of CodeReady Containers to a Linux host. We began by reviewing the prerequisites that are needed to deploy CRC. In
the deployment overview section, we showed how CRC interacts with libvirt to spin up the OpenShift virtual machine. We also discussed the DNS
configuration made by CRC. Before deploying CRC, we customized the CRC configuration and provided the virtual machine with additional resources
beyond the factory defaults. We discussed the CRC setup and start phases in great detail. Finally, I shared some of the convenience commands I like to
use.
Update 3/29/2021: I also have a video related to this topic.
Hope you enjoyed the CodeReady Containers tour presented in this blog. If you have any questions or comments, please leave them in the comment
section below. I look forward to hearing from you!
Posted by Ales Nosek Feb 28th, 2021 4:38 pm development