Karbon-V2 4 Compressed
1. Karbon Overview...................................................................................................... 4
Kubernetes....................................................................................................................................................................4
2. Requirements.............................................................................................................5
3. Enabling Karbon.......................................................................................................7
4. Airgap...........................................................................................................................9
Deploying the Karbon Airgap.............................................................................................................................. 9
Disabling the Airgap................................................................................................................................................ 11
5. Cluster Setup............................................................................................................12
Creating a Cluster.................................................................................................................................................... 14
Creating a Cluster Using the Karbon API..................................................................................................... 20
Cluster Deployment Attributes...............................................................................................................21
6. Karbon Layout........................................................................................................26
Clusters........................................................................................................................................................................ 26
Summary......................................................................................................................................................... 27
Alerts................................................................................................................................................................ 28
Tasks................................................................................................................................................................. 29
Storage Class................................................................................................................................................30
Volume.............................................................................................................................................................30
Add-On............................................................................................................................................................. 31
Nodes................................................................................................................................................................ 31
OS Images.................................................................................................................................................................. 32
7. Cluster Administration.........................................................................................34
Downloading the Kubeconfig............................................................................................................................ 34
Logging on to the Karbonctl............................................................................................................................. 35
Downloading Images............................................................................................................................................. 35
Creating a Storage Class..................................................................................................................................... 36
Deleting a Storage Class......................................................................................................................... 38
Creating a Volume.................................................................................................................................................. 38
Deleting a Volume...................................................................................................................................... 39
Creating a Node Pool........................................................................................................................................... 40
Updating the Number of Worker Nodes............................................................................................41
Updating Node Pool Metadata.............................................................................................................. 41
Configuring GPU Support....................................................................................................................................42
Access and Authentication................................................................................................................................. 43
Accessing Locked Nodes........................................................................................................................ 43
Rotating Certificates..............................................................................................................................................44
Migrating DVP and CSI to Use Certificate-Based Authentication....................................................... 44
Restarting the Karbon Service.......................................................................................................................... 45
Backing up Etcd...................................................................................................................................................... 45
Stopping a Karbon Cluster................................................................................................................................. 46
Starting a Karbon Cluster........................................................................................................................47
8. Upgrades....................................................................................................................51
Karbon Upgrades..................................................................................................................................................... 51
Upgrading a Node OS Image.............................................................................................................................52
Upgrading Kubernetes.......................................................................................................................................... 53
Upgrading Kubernetes Using the Karbonctl....................................................................................53
Upgrading the Karbon Airgap........................................................................................................................... 54
Updating OS Images and Kubernetes for Airgap...................................................................................... 55
9. Options...................................................................................................................... 57
Enabling Alert Forwarding.................................................................................................................................. 57
Disabling Alert Forwarding..................................................................................................................... 57
Disabling Infra Logging........................................................................................................................................ 58
Enabling Infra Logging............................................................................................................................. 58
Configuring a Private Registry.......................................................................................................................... 58
Deleting a Private Registry.....................................................................................................................59
Enabling Log Forwarding....................................................................................................................................60
Disabling Log Forwarding........................................................................................................................ 61
Network Segmentation.......................................................................................................................................... 61
10. Add-Ons.................................................................................................................. 62
Logging........................................................................................................................................................................62
Monitoring.................................................................................................................................................................. 63
1
KARBON OVERVIEW
Nutanix Karbon is a curated turnkey offering that provides simplified provisioning and
operations of Kubernetes clusters. Kubernetes is an open source container orchestration
system for deploying and managing container-based applications. You can also set up an offline
Karbon environment using the Karbon Airgap; see Airgap on page 9.
Karbon uses a CentOS Linux-based operating system for Karbon-enabled Kubernetes cluster
creation. Linux containers provide the flexibility to deploy applications in different environments
with consistent results.
Karbon streamlines the deployment and management of Kubernetes clusters with a simple GUI
integrated into Prism Central (PC). Kibana, the built-in add-on, lets you filter and parse logs for
systems, pods, and nodes. Prometheus, another add-on, provides a monitoring mechanism that
triggers alerts on your cluster. Karbon also uses Pulse, Prism's health-monitoring system, which
interacts with Nutanix Support to expedite cluster issue resolutions.
To set up your Karbon environment, perform the following tasks:
• Ensure that your environment meets the requirements, see Requirements on page 5.
• Enable Karbon through Prism Central (PC) and set up a cluster, see Cluster Setup on
page 12.
• Download the kubeconfig, see Downloading the Kubeconfig on page 34.
• Configure access, see Access and Authentication on page 43.
Kubernetes
Karbon orchestrates Kubernetes clusters to simplify the provisioning and management
of containerized applications. Kubernetes packages applications in their own dedicated
containers together with all of the required operational components for running the application.
Containers, which run inside pods on nodes, are the core building blocks of the Kubernetes
architecture.
Containerized applications are simple to manage, easy to deploy, and portable because they are
abstracted from the host OS. Since containers share the host's kernel, they do not require as
much compute capacity as a VM, making them "lightweight".
Using Karbon to manage Kubernetes operations requires a basic familiarity with key Kubernetes
concepts.
Reference Kubernetes documentation to gain a better understanding of containerization,
cluster architecture, workloads, and storage concepts.
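The container-in-pod model described above can be made concrete with a minimal manifest. This is an illustrative sketch only: the pod name, labels, and image are examples and are not part of Karbon.

```shell
# Write a minimal pod manifest showing a single container running inside a
# pod. The pod name, label, and image are illustrative examples.
cat > web-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: nginx              # one container inside the pod
      image: nginx:1.25
      ports:
        - containerPort: 80
EOF
echo "wrote web-pod.yaml"
```

Once a cluster is running and the kubeconfig is downloaded (see Downloading the Kubeconfig on page 34), a manifest like this is applied with kubectl apply -f web-pod.yaml.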
2
REQUIREMENTS
Meet the indicated requirements prior to enabling and deploying Karbon.
Cluster Requirements
Ensure that the configuration of the Prism Element (PE) cluster meets the following
specifications:
• AHV
• A minimum of 120 MB of memory and 700 MB of disk space in PC
Do the following before deploying Karbon:
• See Karbon Software Compatibility in the Nutanix Karbon Release Notes for Prism Central
(PC) and Prism Element (PE) compatibility requirements.
• Register the PE cluster to PC.
• Configure the cluster virtual IP address and the iSCSI data services IP address on the
designated PE cluster.
Note: Karbon does not recognize changes to the iSCSI Data Service IP.
• Configure the Network Time Protocol (NTP) and the Domain Name System (DNS) in PE and
PC.
• Synchronize the time of the cluster, PC, and the clients that use kubectl.
• Open the required ports and whitelist the required domains; see Port Requirements and the domain list below.
• (Production clusters only) Configure an AHV network with IP address management (IPAM)
enabled and IP pools configured.
• (Development clusters only) Configure an AHV network with IPAM and IP address pools or
with an external DHCP network.
• (Airgap only) Ensure that you are using a Linux-based web server.
Whitelist the following domains:
• *.cloudfront.net
• *.quay.io
• ntnx-portal.s3.amazonaws.com
• portal.nutanix.com
• release-api.nutanix.com
• s3*.amazonaws.com
• *.compute-1.amazonaws.com
Note: Karbon does not automatically recognize changes to proxy settings, which can cause
cluster unavailability. Modifying proxy settings requires restarting the Karbon service on the
cluster, see Restarting the Karbon Service on page 45.
Port Requirements
For a list of required Karbon and Karbon Airgap ports go to Support & Insights Portal >
Documentation > Ports and Protocols. The Port Reference on the Ports and Protocols page
provides detailed port information for Nutanix products and services, including port sources
and destinations, service descriptions, directionality, and protocol requirements.
3
ENABLING KARBON
Enable Karbon through Prism Central.
Procedure
5. Once you receive the message Karbon is successfully enabled, click the here link in the
message to access Karbon.
What to do next
Ensure that you are running a general availability (GA) version of Karbon by performing an
inventory check using the Life-Cycle Manager (LCM), see Karbon Upgrades on page 51. If
you are using LCM at a dark site, see Upgrading the Karbon Airgap on page 54.
4
AIRGAP
Use the Airgap to manage Kubernetes clusters with Karbon offline.
The Karbon Airgap eliminates the need for Internet access to manage Kubernetes clusters by
providing required services offline through a local Docker registry. However, deploying the
Airgap requires a device with Internet access to download the Airgap bundle and manifest files
hosted online on the Nutanix Support Portal. After enabling the Karbon Airgap, you can use it
to deploy and manage Kubernetes clusters.
The required Docker registry runs on a VM that hosts the container images required for
deploying Kubernetes clusters using Karbon. Prism Central (PC) manages the registry VM,
which runs in Prism Element (PE). You cannot modify the settings of the registry VM.
See Deploying the Karbon Airgap for deployment steps.
Note:
• If there are any Kubernetes clusters running, you cannot disable the Airgap.
• The Karbon Airgap bundle version must match the Karbon version running on Prism
Central.
• After Airgap is deployed, you cannot modify the network configuration, such as the
subnet or IP address. Changing the network configuration requires disabling and
re-enabling Airgap, and Airgap cannot be disabled while Kubernetes clusters are
deployed, so plan the network configuration carefully before enabling Airgap.
Deploying the Karbon Airgap
Before you begin:
• Plan to use a Linux-based host to extract the deployment files and to transfer them to a
Linux-based web server.
• Ensure that your environment meets all Karbon requirements, see Requirements on page 5.
• Ensure that you do not have any Kubernetes clusters deployed.
• Ensure that you have a managed VLAN.
Note: As a best practice, use the full airgap version number (for example, use ntnx-2.1.0).
Note: The airgap VM requires a managed VLAN, even when using a static IP.
Note: Karbon creates the airgap VM using the specified Prism Element cluster and network.
The specified storage container uses a volume group (VG) to store Docker images.
Procedure
1. If you have not already, enable Karbon through Prism Central (PC), see Enabling Karbon on
page 7.
2. In the Support Portal, go to collapse menu icon > Downloads > Karbon to download the
deployment files.
4. To deploy the airgap, replace the values as indicated and then run the following command.
• Replace webserver_url with the URL of the web-server directory hosting the airgap
package.
• Replace network_name with the VLAN network name for the airgap VM deployment.
• Replace static IP-address with the static IP address for the airgap VM deployment.
• Replace storage_container_name with the name of the storage container the airgap must
use for volume deployment.
Disabling the Airgap
Note: If there are any Kubernetes clusters running, you cannot disable the airgap.
Procedure
2. To display the unique universal identifier (UUID) of the airgap, list all airgaps.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl airgap list
5
CLUSTER SETUP
Caution: Nutanix recommends adhering to the CIS Benchmark security recommendations for
containers prior to deploying them on a Kubernetes cluster. For details, see the Karbon Release
Notes.
Recommended Configurations
Before configuring your cluster, you can choose from one of the recommended configurations
to streamline the deployment process. You can customize the recommended configurations
during and after deployment.
The recommended configurations include two options: development cluster and production
cluster. The development cluster option provides the minimum resources to launch a cluster.
The production cluster configuration provides enough resources for a high-availability,
production-ready cluster.
Node Configuration
Note: After cluster configuration, anti-affinity rules are automatically created between the
control plane and worker node VMs.
Note: Do not install Nutanix Guest Tools (NGT) or any other services onto Kubernetes nodes.
Network
You can use Calico or Flannel as the container network interface (CNI). Flannel for Karbon uses
the VXLAN mode. To set up your network, you must provide classless inter-domain routing
(CIDR) ranges and specify the network provider. You can leave the service CIDR and pod CIDR
ranges as default, but the ranges must not overlap with each other or the existing network. The
service CIDR and pod CIDR ranges do not require a VLAN. Karbon does not route the ranges
outside of the cluster.
The number of required physical IP addresses varies depending on the configuration and the
number of control plane, etcd, and worker nodes.
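The non-overlap rule above can be checked before deployment. The following portable shell sketch tests whether two IPv4 CIDR ranges overlap; the sample ranges at the end are placeholder values, not Karbon defaults.

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Return success (0) if two CIDR ranges overlap.
cidrs_overlap() {
  net1=${1%/*}; len1=${1#*/}
  net2=${2%/*}; len2=${2#*/}
  i1=$(ip_to_int "$net1"); i2=$(ip_to_int "$net2")
  # Mask both networks with the shorter prefix of the two; if the masked
  # networks match, the larger range contains (part of) the smaller one.
  if [ "$len1" -lt "$len2" ]; then len=$len1; else len=$len2; fi
  mask=$(( 0xFFFFFFFF << (32 - len) & 0xFFFFFFFF ))
  [ $(( i1 & mask )) -eq $(( i2 & mask )) ]
}

# Placeholder ranges; substitute your service CIDR, pod CIDR, and node network.
if cidrs_overlap "172.19.0.0/16" "172.20.0.0/16"; then
  echo "overlap"
else
  echo "no overlap"
fi
```

Run the check for each pair: service CIDR vs. pod CIDR, and each of those vs. the node network range.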
Storage Class
You can have multiple storage classes for your Kubernetes deployment, but once you assign
a storage class to a cluster, you cannot modify it. A storage class defines the properties of a
persistent volume (PV), which provides storage for the cluster. When the cluster needs more
storage, a persistent volume claim (PVC) sends a request for a new PV. The PVC contains
storage class details for the new PV.
Nutanix supports dynamic provisioning of PVs using the CSI Volume Driver, which runs in a
Kubernetes pod. The CSI Volume Driver waits for a PVC request for a storage class and then
creates a PV for that request, see CSI Volume Driver documentation for more details.
When deploying a cluster, Karbon uses Nutanix Volumes storage. After deployment, you can
add more storage classes and use either Nutanix Volumes or Nutanix Files storage.
For storage classes using Volumes, choose a Prism Element cluster. Next, select a storage
container and a file system your cluster will use for allocating storage. Supported file systems
include ext4 and xfs.
For Files, specify an NFS export that the cluster will use for storage.
To optimize the performance of a cluster, enable Flash Mode, which is available for both
Volumes and Files. Enabling Flash Mode boosts the performance of a cluster by storing data
only on the solid-state drives (SSDs) of the hot tier.
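The PVC-to-PV request flow described above looks like this in practice. A hedged sketch: the claim name and the storage class name below are placeholders; use the storage class name shown in your cluster's Storage Class tab.

```shell
# Write a PersistentVolumeClaim manifest that requests a new PV through a
# named storage class. "default-storageclass" is a placeholder value.
cat > pvc-example.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce            # the only access mode for Nutanix Volumes
  storageClassName: default-storageclass
  resources:
    requests:
      storage: 10Gi
EOF
echo "wrote pvc-example.yaml"
```

Applying the claim with kubectl apply -f pvc-example.yaml triggers the CSI Volume Driver to dynamically provision a backing PV of the requested size.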
Creating a Cluster
Create a Kubernetes cluster in Karbon by configuring cluster settings.
Note: Ensure that you are running a general availability (GA) version of Karbon by performing an
inventory check using the Life-Cycle Manager, see Karbon Upgrades on page 51.
Procedure
• Development Cluster: Use this option for development only, never for production
environments. It provides the minimum resources needed for a cluster.
• Production Cluster: Used for production environments. It provides a default
configuration for high-availability clusters.
4. Click Next.
5. In the Name and Environment section, do the following in the indicated fields:
• Kubernetes Cluster Name: Choose a name for your cluster. The name must start with a
letter or a number followed by up to 40 lowercase letters, numbers, or hyphens (cannot
end with a hyphen).
• Nutanix Cluster: Choose which Prism Element cluster to run the Kubernetes cluster on.
• Kubernetes Version: Choose from one of three Kubernetes versions.
• Host OS: Select the version of the downloaded node OS image (centos), see
Downloading Images on page 35.
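The naming rule above can be checked in advance with a small shell function. This is a sketch assuming the rule as stated (one leading lowercase letter or number, up to 40 more lowercase letters, numbers, or hyphens, no trailing hyphen); the sample names are illustrative.

```shell
# Validate a proposed cluster name against the stated rules: starts with a
# lowercase letter or number, then up to 40 lowercase letters, numbers, or
# hyphens, and does not end with a hyphen.
valid_cluster_name() {
  case "$1" in
    ""|*[!a-z0-9-]*|-*|*-) return 1 ;;  # empty, illegal chars, edge hyphens
  esac
  [ ${#1} -le 41 ]                      # 1 leading char + up to 40 more
}

for name in prod1 my-cluster Bad trailing-; do
  if valid_cluster_name "$name"; then
    echo "$name: ok"
  else
    echo "$name: invalid"
  fi
done
```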
6. Click Next.
Caution: If you exceed available capacity on any underlying hardware when choosing
memory, CPU, and disk size, the Kubernetes cluster cannot deploy.
Note: IPAM is available for production clusters. IPAM or DHCP is available for
development clusters.
Caution: Using the same managed network and IP addresses as another multi-control
plane deployment results in deployment failure.
Note: IPAM provides IPs for the nodes. The VIP must be part of the same VLAN but
outside of the IP pool, see the "Network" section in Cluster Setup on page 12.
• Number of Control Planes: Enter the number of Kubernetes control plane nodes.
Note: The control plane IPs come from the non-DHCP pool of the AHV network. The
IP addresses auto-populate but are modifiable.
• External Load Balancer IP: Enter the IP address for the external load balancer.
• Control Plane IP Addresses: Enter an IP address for each control plane node.
8. Click Next.
The Network page appears.
• Network Provider: Select the network provider from the dropdown menu.
• Service CIDR Range: Enter an IP address range within your network range (RFC 1918) in
CIDR notation, or use the default values. Kubernetes exposes services on IP addresses
from this range.
Caution: The service CIDR and pod CIDR must not overlap with the Kubernetes node
network IP range or with each other. The default ranges can overlap with your node
network, so verify them before deploying.
• Pod CIDR Range: Enter an IP address range within your network range (RFC-1918) in
CIDR notation for pod-to-pod communication. You can also use the default values.
Kubernetes assigns pods in the cluster an IP address from this range.
Note: Kubernetes configuration files refer to the pod CIDR as a "cluster IP".
Note: You can create more storage classes once you create the cluster, see Creating a
Storage Class on page 36.
Note: During cluster deployment, Karbon uses Nutanix Volumes storage for the storage
class. After deployment, you can create a Nutanix Files storage class for the cluster.
• Storage Class Name: Enter the name for the storage class. The name must start with a
letter or a number. Only use lowercase alphanumeric characters, hyphens, and periods
(maximum 253 characters).
• Nutanix Cluster: Select the target cluster for allocating storage for stateful pods.
• Storage Container Name: Select the storage container for storage.
• Reclaim Policy: The reclaim policy specifies what the cluster does with the volume once
it is not in use. Select Delete or Retain.
• File System: Select the file system for the storage class (xfs or ext4).
• Enable Flash Mode: Check this box to use SSDs for data storage and improved
performance.
Note: Karbon automatically provisions some storage for node and system pod logs based
on the size of the cluster.
Note: After cluster configuration, anti-affinity rules are automatically created between the
control plane and worker node VMs.
Creating a Cluster Using the Karbon API
Procedure
1. Go to the Nutanix Developer Portal and create a JSON file using the POST /karbon/v1/k8s/
clusters API code.
2. Update the attributes in a deployment JSON file and save the file. Refer to Cluster
Deployment Attributes on page 21 and Cluster Setup on page 12 for information on
cluster setup.
3. Run the following command to deploy the cluster using the deployment JSON.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster deploy --filepath deployment-JSON-filepath
Tip: You can find the required cluster and network-specific information in Prism or use the
following commands.
• To list the Prism Element (PE) unique universal identifier (UUID), run the following
command from the PE cluster.
nutanix@CVM:~$ ncli cluster info |grep "Cluster Uuid"
• To list the unique universal identifier (UUID) of the network, run the following
command from the PE cluster.
nutanix@CVM:~$ acli net.list
• To list the name of the container, run the following command from the PE cluster.
nutanix@CVM:~$ ncli ctr list |grep Name|grep -v VStore
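One way to wire the UUIDs gathered with the commands above into the deployment JSON is simple placeholder substitution. This is a hypothetical sketch: the placeholder token, the sample UUID, and the JSON fragment are illustrative only; build the complete file from the POST /karbon/v1/k8s/clusters API code on the Nutanix Developer Portal.

```shell
# Substitute a gathered PE cluster UUID into a deployment JSON template.
# The UUID value is a made-up example; capture yours from "ncli cluster info".
PE_UUID="0005a1b2-0000-0000-0000-000000000000"

cat > deployment-template.json <<'EOF'
{
  "storage_class_config": {
    "volumes_config": {
      "prism_element_cluster_uuid": "__PE_UUID__"
    }
  }
}
EOF

# Replace the placeholder token with the real UUID and show the result.
sed "s/__PE_UUID__/${PE_UUID}/g" deployment-template.json > deployment.json
grep "prism_element_cluster_uuid" deployment.json
```

Keeping a template with placeholder tokens makes it easy to regenerate the JSON for each Prism Element cluster you deploy against.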
Cluster Deployment Attributes
The deployment JSON includes the following attributes.
• prism_element_cluster_uuid: The unique universal identifier (UUID) of the Prism Element (PE) cluster. [String]
• control_planes_config > active_passive_config > external_ipv4_address: The IP address of the external load balancer. [String]
• control_planes_config > external_lb_config > external_ipv4_address: The IP address of the external load balancer. [String]
• storage_class_config > default_storage_class: Specifies whether the storage class used for the cluster is the default. (true, false)
• storage_class_config > volumes_config > prism_element_cluster_uuid: The unique universal identifier (UUID) of the PE cluster. [String]
Note: Refer to the Karbon release notes for supported versions.
6
KARBON LAYOUT
• Clusters on page 26
• OS Images on page 32
Clusters
The Clusters tab includes a table with the following details for every cluster.
Parameter Description
Name The name of the cluster.
OS Image The node OS image version used for the
cluster.
Nodes The number of worker nodes on the cluster.
Status The status field describes the health of the
cluster, specifies the status of operations,
and notes when there are available upgrades
(Deploying, Healthy, Upgrading, Failed).
Version Kubernetes version.
Clicking the name of a cluster on the Clusters page takes you to the Summary page of
the cluster. The menu pane of the Summary page includes the following tabs: Summary,
Alerts, Tasks, Storage Class, Volume, Add-On, and Nodes.
Summary
The Summary page consists of multiple tiles that provide details on the cluster's usage, alerts,
tasks, and nodes.
Parameter Description
API Endpoint Control plane server endpoint.
Version Kubernetes version.
Parameter Description
Current Alerts Lists the number of critical, warning, and info
alerts for the last 24 hours.
Icon An icon indicating the severity of the alert
appears next to a description of the issue.
View All Alerts Click the View All Alerts link to go to the
Alerts tab.
Parameter Description
Task Lists various operations on the cluster.
Icon Indicates if the task completed successfully.
View All Tasks Click the View All Tasks link to go to the Tasks
tab.
Node Tiles
The node tiles display node details for each node type: etcd, control plane, and worker.
Alerts
The Alerts tab consists of a table that lists and describes recent alerts for the cluster. The
Prometheus add-on powers Karbon alerts, see Monitoring on page 63.
The alerts table consists of the following parameters.
Parameter Description
Alert Name The name of the alert.
Severity The severity of the alert.
Message A description of the alert issue.
Time Created The date and time when the alert was created.
Status The current status for the alert.
Tasks
The Tasks page consists of a table that lists and describes recent tasks performed on the
cluster.
The tasks table consists of the following parameters.
Parameter Description
Name The name of the task including the entity.
Percentage Complete The progress of the task displayed as a
percentage.
Status The status of the task.
Time Created The date and time of task creation.
Storage Class
The Storage Class tab includes a table with the following parameters.
Parameter Description
Name The name of the storage class.
Volume Type The type of storage used for Persistent
Volumes.
Default Storage Class Specifies if the storage class is the default
(True or False).
Cluster The name of the underlying cluster.
Reclaim Policy The reclaim policy specifies what happens
to the volume once it is no longer in use
(Delete or Retain).
Volume
The Volume tab includes Create Volume and Delete Volume action buttons, see Creating
a Volume on page 38. The Volume tab also includes a table with details about each
PersistentVolume (PV), see table below for details.
Parameter Description
Claim Name Name used to attach storage to pods
(maximum 253 characters). The name must
start with a letter or a number. Only use
lowercase alphanumeric characters, hyphens,
and periods.
Namespace Selected Kubernetes namespace (available
options are displayed).
Storage Class Storage class used by the volume.
Size The size of the volume in GiB.
Status Persistent volume status (Pending, Available,
Bound, Released, Failed).
Access Mode ReadWriteOnce is the only available access
mode for Nutanix Volumes storage. Nutanix
Files storage supports both ReadWriteMany
and ReadWriteOnce access modes.
Add-On
The Add-on page lists the add-ons installed on the Kubernetes cluster, see Add-Ons on
page 62. Refer to the table below for a description of the Add-on tab parameters.
Parameter Description
Name Name of the add-on.
State Status of the add-on.
Size Disk space allocated to the add-on (GiB).
Version Describes the version of the add-on, and
includes a Launch add-on action link.
Nodes
The Nodes tab provides three subtabs, one for each type of node in the cluster: control plane,
worker, and etcd. Each subtab consists of a table with details about the indicated node type.
OS Images
The OS Images tab provides an overview of available and downloaded node OS images on your
cluster. When a new version of an image is available, the tab includes options to download the
new OS image. For steps to download a new image, see Downloading Images on page 35.
Note: All supported OS images are provided by Karbon during and after deployment. Refer to
the Karbon Release Notes for information on image compatibility.
After downloading a new image from the OS images tab, upgrade cluster images through the
Clusters tab, see Upgrading a Node OS Image on page 52.
The OS Images tab includes a table that describes the available, downloaded, and deleted
images in your environment. Refer to the table below for details.
Parameter Description
Image Version The Linux distribution and image version.
Release Notes A link to the release notes for the indicated OS
image.
Size The size of the OS image (GiB).
Download Status The status of the image on your Karbon
deployment (Download, Downloading, and
Downloaded).
7
CLUSTER ADMINISTRATION
Note: After the Karbon cluster is deployed, do not manually modify Kubernetes components
(such as etcd, kubelet, and the API server) or add-ons (such as DNS, Prometheus, and
Elasticsearch). Only the resource values for the logging and monitoring stack can be modified,
and only when required.
Downloading the Kubeconfig
Procedure
1. In the Clusters view, select a cluster from the list by checking the adjacent box.
» Click Copy the command to clipboard and run the command to finish the download
process.
» Manually run the required commands (continue to step 6).
6. Replace /path/to/prod1-kubectl.cfg with the path to the downloaded file as it appears in
your directory, then run the following command on your host.
$ export KUBECONFIG=/path/to/prod1-kubectl.cfg
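As a sketch, assuming the kubeconfig was saved to the Downloads folder (the path below is illustrative; substitute the actual location on your host):

```shell
# Hypothetical path: adjust to where prod1-kubectl.cfg was saved on your host.
export KUBECONFIG="$HOME/Downloads/prod1-kubectl.cfg"

# kubectl commands in this shell session now authenticate with this file.
echo "$KUBECONFIG"
```

The export lasts only for the current shell session; to make it persistent, add the export line to your shell profile.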
Procedure
Log on to karbonctl from the Prism Central (PC) VM.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl login --pc-username username \
--pc-password password
Downloading Images
About this task
Deploying Kubernetes clusters in Karbon requires a CentOS image. Choose a CentOS version
and download the image.
Procedure
Note: Changes to the Cluster Virtual IP or admin username in Prism Element affect the storage
class configuration. For Cluster Virtual IP and username update assistance, contact Nutanix
Support.
Note: The first time you create a cluster, you also create the default storage class. Refer to the
Kubernetes documentation for directions on changing the default storage class.
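The mechanism described in the Kubernetes documentation for changing the default is the `storageclass.kubernetes.io/is-default-class` annotation. The fragment below is an illustrative sketch only; the class name and provisioner shown are examples, not values taken from this guide:

```yaml
# Illustrative fragment: a StorageClass becomes the default through this
# standard Kubernetes annotation (set it to "false" on the previous default).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class        # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.nutanix.com    # example provisioner
```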
Procedure
Note: Some storage is automatically provisioned for node and system pod logs based on the
size of the cluster.
a. Continue to step 6.
Note: Some storage is automatically provisioned for node and system pod logs based on the
size of the cluster.
6. Click Create.
Procedure
3. Check the box for the storage class you want to delete.
Creating a Volume
Create a Persistent Volume (PV) for your cluster.
5. Click Create.
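For reference, a volume created this way is typically consumed from Kubernetes through a PersistentVolumeClaim. The sketch below is illustrative; the claim name and size are assumptions, and omitting storageClassName falls back to the cluster's default storage class:

```yaml
# Illustrative PVC: requests 10 GiB from the cluster's default storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```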
Deleting a Volume
Procedure
Procedure
Note: Only use alphanumeric characters or the hyphen (-) special character.
• CPU: Enter an integer for the amount of CPU allocated per node.
• Memory: Enter an integer for the amount of memory allocated per node (GiB).
• Storage: Enter an integer for the amount of storage allocated per node (GiB).
• Node Pool Network: Choose the primary network for the nodes. Use any network of
the Prism Element (PE) cluster that has connectivity with the Karbon cluster.
• (optional) Additional Network: The iSCSI network used for I/O optimization.
• Metadata
Use key and value pairs to add meaningful labels to the node pool.
• Key: Enter a descriptive label for the key (for example, environment).
Note: Only use alphanumeric characters or the hyphen (-) special character.
• Value: Enter a descriptive label for the value (for example, dev).
• Click +.
• (optional) To add more key value pairs, click +.
5. Click Add.
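Assuming the key/value metadata above surfaces as Kubernetes node labels on the pool's nodes (verify in your deployment with `kubectl get nodes --show-labels`), pods can be steered onto the pool with a nodeSelector. An illustrative sketch using the example pair from this section:

```yaml
# Illustrative pod fragment: schedules only onto nodes carrying the
# environment=dev label (the example key/value pair above).
apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # hypothetical name
spec:
  nodeSelector:
    environment: dev
  containers:
    - name: app
      image: nginx              # example image
```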
Procedure
Note:
• If you want to delete a worker node, select the option Actions > Delete. Note that
you cannot delete the default node pool.
• If the last worker node cannot be deleted from the Karbon UI, that worker node
cannot be deleted from Prism Element either.
5. Under Number of Nodes, indicate the desired number of worker nodes for the node pool.
For example, if you currently have three worker nodes in the node pool but you want to have
a total of five nodes, click + so that the Number of Nodes is 5.
Note: Reducing the number of nodes might delete the most recently added nodes.
6. Click Resize.
Procedure
• Key: Enter a descriptive label for the key (for example, environment).
Note: Only use alphanumeric characters or the hyphen (-) special character.
• Value: Enter a descriptive label for the value (for example, dev).
• Click +.
• (optional) To add more key value pairs, click +.
• (optional) To delete a key value pair, click the delete label icon.
6. Click Update.
Note: GPU enablement requires installation of NVIDIA datacenter driver software governed by
NVIDIA licensing terms.
Procedure
1. List the pass-through GPU configurations associated with the Prism Element (PE) cluster of
the Karbon cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster gpu-inventory list --cluster-name name-of-karbon-cluster
• Specify the name or the universally unique identifier (UUID) of the VLAN: --vlan-name name or --vlan-uuid UUID
What to do next
Install the GPU operator. Go to nutanix.dev for guidance.
Procedure
5. When prompted, enter the IP of any node in the cluster to get access to all nodes.
Karbon grants the user a private key.
6. Log on to the target node as the nutanix user using the command line.
Rotating Certificates
Update certificates for cluster services and add-ons.
Procedure
Initiate certificate rotation.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster certificates rotate-cert --cluster-name k8-cluster-name
Procedure
Backing up Etcd
Back up your Karbon Kubernetes cluster etcd data regularly.
Important: The following procedure does not back up the application data.
Procedure
a. In Karbon, select the target cluster, expand Nodes in the sidebar, and click etcd.
b. Note the IP addresses of each etcd node.
a. Securely log on to a single etcd node using SSH (see Accessing Locked Nodes on
page 43).
b. Use sudo to become the root user.
$ sudo su
d. Copy /root/snapshot.db to a safe location. For example, copy the file using SFTP (Secure
File Transfer Protocol) or SCP (Secure Copy Protocol).
• Stop the apps and pods in the Karbon cluster. This helps reduce the possibility of data
corruption.
• Take a backup of the Kubernetes cluster etcd data (see Backing up Etcd on page 45).
Procedure
1. Collect the cluster name and IP address of one of the control plane nodes.
a. In Karbon, select the target cluster, expand Nodes, and click Control Plane.
b. Note the IP address of one control plane node.
a. Cordon the worker nodes to prevent the scheduler from placing new pods onto the
nodes.
$ kubectl cordon -l 'kubernetes.io/role=node'
a. In Prism Central, click the hamburger menu, expand Compute & Storage, and then click
VMs.
b. Click Filters, select the Name checkbox, change the option to Starts with, and type the
following:
karbon-kubernetes_cluster_name-
where kubernetes_cluster_name is the name of the Karbon cluster.
a. Ensure that the filter is only showing the desired Kubernetes cluster nodes.
b. Select the virtual machines with the text worker in their name.
a. With the worker nodes selected, click Actions and then click Soft Shutdown.
b. Click OK to confirm.
5. Once the worker nodes are powered off, perform step 3 for control plane nodes.
6. Once the control plane nodes are powered off, perform step 3 for etcd nodes.
Procedure
a. In Prism Central, click the hamburger menu, expand Compute & Storage, and then click
VMs.
b. Click Filters, select the Name checkbox, change the option to Starts with, and type the
following:
karbon-kubernetes_cluster_name-
where kubernetes_cluster_name is the name of the Karbon cluster.
a. Ensure that the filter shows only the desired Kubernetes cluster nodes.
b. Select the virtual machines with the text etcd in their name.
a. With the etcd nodes selected, click Actions followed by Power On.
b. Click OK to confirm.
a. In Karbon, select the target cluster, expand Nodes in the sidebar, and click etcd.
b. Note the IP addresses for each etcd node.
4. Securely log on to an etcd node using SSH, see Accessing Locked Nodes on page 43.
The output generated should be similar to the following image, displaying three etcd nodes
with one of them as the leader.
Figure 20: Starting a Karbon Cluster: verifying services on all etcd nodes
9. Use SSH to log on to one of the control plane nodes (see Accessing Locked Nodes on
page 43) to check the control plane status.
a. To verify that the control plane nodes are up, use kubectl to check for the Ready
status.
$ watch kubectl get nodes
b. If the control plane nodes are NotReady, then check whether there are connection issues
between the control plane nodes and etcd nodes.
a. To verify that all worker nodes are up, use kubectl to check for the Ready,
SchedulingDisabled status.
$ watch kubectl get nodes
c. To verify that all pods are up, use kubectl to check for the Running status.
$ watch kubectl get pods --all-namespaces
12. In the Karbon UI, verify that the cluster status is healthy.
8. UPGRADES
There are two different types of Karbon upgrades:
• Karbon version upgrades using the Life Cycle Management feature, see Karbon Upgrades on
page 51.
• Node OS image upgrade, see Upgrading a Node OS Image on page 52.
Perform LCM upgrades through Prism Central (PC). Karbon is part of the PC upgrades module
in LCM. LCM upgrades the following Karbon components:
• Karbon version
• Karbon UI
Perform node OS image upgrades through Karbon. When a node OS image upgrade is
available, Karbon displays an option to download the new image in the OS Images tab (see OS
Images on page 32). Karbon also displays an Upgrade Available icon next to the cluster in the
Clusters view (see Upgrading a Node OS Image on page 52).
Karbon Upgrades
To check the current version of Karbon or to upgrade to later versions, perform the inventory
check in Prism Central using LCM.
For steps on performing inventory and upgrades in LCM, refer to the Life Cycle Manager Guide.
Ensure that you are running compatible versions of Prism Central (PC), Prism Element, and
AOS; see the Karbon Release Notes for compatibility details.
Note:
• Upgrade to a compatible version of Prism Central using LCM in Prism Central, see the most
recent version of the Life Cycle Management Guide.
Note: You cannot upgrade to a GA version of Karbon without first deleting clusters created
during the technical preview. An error message appears if such clusters exist.
• Perform an inventory and upgrade to a compatible version of PC using the Life Cycle
Management (LCM) feature in PC; see the Life Cycle Management Guide.
• Enable Karbon, see Enabling Karbon on page 7.
• In LCM, perform inventory and update Karbon to an available GA version.
Caution:
• Upgrading a node OS image clears the contents of the /dev/sda boot disk. Ensure
that the /dev/sda disk does not contain any persistent content or files.
• Avoid using local storage pods. Upgrading a node OS image deletes the data in the
local storage.
Note: Karbon supports legacy images on existing clusters. Existing Kubernetes clusters do not
require an image upgrade.
Procedure
Note: Clusters that have an image eligible for an upgrade display the Upgrade Available icon.
5. In the Upgrade Host Image OS window, click Download to download the target image. Wait
for the image to download.
Caution: Avoid using local storage pods. Upgrading Kubernetes deletes the data in the local
storage.
Procedure
Note: Clusters that have a Kubernetes version eligible for an upgrade display the Upgrade
Available icon in the table.
4. Under List of Available Kubernetes Version for Upgrade, select the target Kubernetes
version.
Procedure
Note: The package consists of Kubernetes and add-on versions. Currently, Karbon supports
upgrading only the Kubernetes version.
What to do next
Check the upgrade status in the Tasks view, or using karbonctl.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster k8s upgrade status --cluster-name cluster-name
Note: Karbon airgap upgrades using Prism Central (PC) 5.16.1.2 cause an error. Upgrade PC
from version 5.16.1.2 before attempting to upgrade the airgap.
Procedure
1. From a device that has public Internet access, go to Nutanix Portal > Downloads > LCM.
a. Next to LCM Dark Site Bundle (version), click Download to download the
lcm_dark_site_bundle_version.tar.gz file.
a. Next to LCM Darksite bundle for Karbon, click Download to download the
lcm-darksite_karbon-builds_version-number.tar.gz file.
a. In the Fetch updates from field, enter the path to the directory where you extracted the
tar file on your local server. Use the format http://webserver_IP_address/release.
b. Click Save.
You return to the Life Cycle Manager.
c. In the LCM sidebar, click Inventory > Perform Inventory.
d. Update the LCM framework before trying to update any other component.
The LCM sidebar shows the LCM framework with the same version as the file you
downloaded.
Note: As a best practice, use the full version number (for example, 2.0.0 or 2.0.1).
Procedure
1. From a device that has public Internet access, go to Support Portal > collapse menu icon >
Downloads > Karbon.
2. Under Download Karbon Airgap bundle and manifest files, do the following.
Tip: Verify that the Karbon version in the airgap-manifest.json is for a version that
includes new images.
5. Upload the new images to the airgap. Replace airgap-UUID with the universally unique
identifier for the airgap. Replace webserver-directory-URL with the URL for the directory you
transferred files to in step 3.
Procedure
Note: If you have enabled transport layer security (TLS), specify the ca-cert-path,
client-cert-path, and key-path variables.
2. Disable alert forwarding. Replace the cluster-name with the name of the target cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster alerts disable-smtp --cluster-name="cluster-name"
Procedure
2. Disable infra logging. Replace the cluster-name with the name of the target cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster infra-logging disable --cluster-name="cluster-name"
Procedure
2. Enable infra logging. Replace the cluster-name with the name of the target cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster infra-logging enable --cluster-name="cluster-name"
Note: If you want to add user authentication after registry creation, delete the registry and
create a new one with the desired authentication.
• If the registry is certificate-based, replace cert-filepath with the file path to the certificate.
Note: Omit the cert-filepath parameter for HTTP-based registries. The Docker
configuration supports insecure registries.
Note: To configure a private registry that uses token authentication instead of certificates,
contact Nutanix Support.
3. Add the private registry to a Kubernetes cluster. Replace cluster-name with the name of the
Kubernetes cluster. Replace registry-name with the name of the registry (as in step 1).
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster registry add --cluster-name \
cluster-name --registry-name registry-name
4. Confirm that the Karbon and the Kubernetes clusters have access to the custom registry.
Replace cluster-name with the name of the Kubernetes cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster registry list --cluster-name cluster-name
1. Delete access to a private registry from your Kubernetes cluster. Replace cluster-name with
the name of the Kubernetes cluster, and replace the registry-name with the name of the
target registry.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster registry delete --cluster-name cluster-name \
--registry-name registry-name
2. Delete a private registry from Karbon. Replace the registry-name with the name of the target
registry.
Note: Before deleting the registry, revoke registry access from all clusters.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl registry delete --registry-name registry-name
Procedure
Note: If you have enabled transport layer security (TLS), specify the ca-cert-path,
client-cert-path, and key-path variables.
Procedure
2. Disable log forwarding. Replace the cluster-name with the name of the target cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster log-forward disable --cluster-name=cluster-name
Network Segmentation
Segregate Nutanix Volumes iSCSI traffic from other traffic.
Important: Before using network segmentation with Karbon, enable network segmentation for
Nutanix Volumes; see "Service-Specific Traffic Isolation" in the AOS Security Guide.
Important: Karbon only supports using network segmentation for container workloads
with Nutanix Volumes when specifically configured as part of a new storage class. See the
isSegmentedIscsiNetwork parameter in the Creating a Storage Class (Nutanix Volumes) topic in
CSI Volume Driver 2.5. Network segmentation is not enabled for communication to volumes
supporting etcd and other default services. Configuring network segmentation on a cluster that
uses both Nutanix Files and Nutanix Volumes storage provides a dedicated network to Nutanix
Volumes traffic only.
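As an illustrative sketch only, the segmented iSCSI option is set as a storage class parameter; the class name is hypothetical and all parameter names and spellings must be verified against the CSI Volume Driver 2.5 documentation before use:

```yaml
# Hedged sketch: a storage class opting in to the segmented iSCSI network.
# Verify parameter names against the CSI Volume Driver 2.5 documentation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: segmented-volumes       # hypothetical name
provisioner: csi.nutanix.com
parameters:
  storageType: NutanixVolumes
  isSegmentedIscsiNetwork: "true"
```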
Logging
The Kibana data-visualization plugin is the Karbon logging add-on.
The Kibana dashboard has a custom tab for the LogTrail plugin (not available on Kubernetes
version 1.20 and above), which displays data for the selected namespaces. By default, LogTrail
is configured to display logs for the system namespaces of the Kubernetes cluster: kube-system
and ntnx-logging.
Access the add-on through the Karbon UI, as access to pods is restricted.
Settings
The Settings filter displays logs for the selected entity. Kibana displays the hostname in orange
and the pod name in blue. Click the colored text to filter by pod or hostname.
You can select the following logging options from the Settings tab:
Setting Description
kubernetes-* (default): Displays logs for pods running in the kube-system and ntnx-logging namespaces.
systemd-*: Displays logs for the kubelet service on the control plane and worker nodes.
etcd-*: Displays logs from etcd services running on etcd VMs.
All Systems
By default, the All Systems tab displays logs from all nodes. You can also use it to filter the
display to only show logs for specific nodes.
Monitoring
The built-in Prometheus add-on provides monitoring for Kubernetes clusters. Prometheus
scans clusters for health and consumption, provides data for metrics, and triggers alerts and
notifications that appear in the Karbon Console.
Prometheus feeds data to the alerts tab in the Karbon user interface (UI).