
Nutanix Kubernetes Engine (formerly Karbon) 2.

Nutanix Karbon Guide


February 8, 2023
Contents

1. Karbon Overview...................................................................................................... 4
Kubernetes....................................................................................................................................................................4

2. Requirements.............................................................................................................5

3. Enabling Karbon.......................................................................................................7

4. Airgap...........................................................................................................................9
Deploying the Karbon Airgap.............................................................................................................................. 9
Disabling the Airgap................................................................................................................................................ 11

5. Cluster Setup............................................................................................................12
Creating a Cluster.................................................................................................................................................... 14
Creating a Cluster Using the Karbon API..................................................................................................... 20
Cluster Deployment Attributes...............................................................................................................21

6. Karbon Layout........................................................................................................26
Clusters........................................................................................................................................................................ 26
Summary......................................................................................................................................................... 27
Alerts................................................................................................................................................................ 28
Tasks................................................................................................................................................................. 29
Storage Class................................................................................................................................................30
Volume.............................................................................................................................................................30
Add-On............................................................................................................................................................. 31
Nodes................................................................................................................................................................ 31
OS Images.................................................................................................................................................................. 32

7. Cluster Administration.........................................................................................34
Downloading the Kubeconfig............................................................................................................................ 34
Logging on to the Karbonctl............................................................................................................................. 35
Downloading Images............................................................................................................................................. 35
Creating a Storage Class..................................................................................................................................... 36
Deleting a Storage Class......................................................................................................................... 38
Creating a Volume.................................................................................................................................................. 38
Deleting a Volume...................................................................................................................................... 39
Creating a Node Pool........................................................................................................................................... 40
Updating the Number of Worker Nodes............................................................................................41
Updating Node Pool Metadata.............................................................................................................. 41
Configuring GPU Support....................................................................................................................................42
Access and Authentication................................................................................................................................. 43
Accessing Locked Nodes........................................................................................................................ 43
Rotating Certificates..............................................................................................................................................44
Migrating DVP and CSI to Use Certificate-Based Authentication....................................................... 44

Restarting the Karbon Service.......................................................................................................................... 45
Backing up Etcd...................................................................................................................................................... 45
Stopping a Karbon Cluster................................................................................................................................. 46
Starting a Karbon Cluster........................................................................................................................47

8. Upgrades....................................................................................................................51
Karbon Upgrades..................................................................................................................................................... 51
Upgrading a Node OS Image.............................................................................................................................52
Upgrading Kubernetes.......................................................................................................................................... 53
Upgrading Kubernetes Using the Karbonctl....................................................................................53
Upgrading the Karbon Airgap........................................................................................................................... 54
Updating OS Images and Kubernetes for Airgap...................................................................................... 55

9. Options...................................................................................................................... 57
Enabling Alert Forwarding.................................................................................................................................. 57
Disabling Alert Forwarding..................................................................................................................... 57
Disabling Infra Logging........................................................................................................................................ 58
Enabling Infra Logging............................................................................................................................. 58
Configuring a Private Registry.......................................................................................................................... 58
Deleting a Private Registry.....................................................................................................................59
Enabling Log Forwarding....................................................................................................................................60
Disabling Log Forwarding........................................................................................................................ 61
Network Segmentation.......................................................................................................................................... 61

10. Add-Ons.................................................................................................................. 62
Logging........................................................................................................................................................................62
Monitoring.................................................................................................................................................................. 63

1
KARBON OVERVIEW
Nutanix Karbon is a curated turnkey offering that provides simplified provisioning and
operations of Kubernetes clusters. Kubernetes is an open source container orchestration
system for deploying and managing container-based applications. You can also set up an offline
Karbon environment using the Karbon airgap, see Airgap on page 9.
Karbon uses a CentOS Linux-based operating system for Karbon-enabled Kubernetes cluster
creation. Linux containers provide the flexibility to deploy applications in different environments
with consistent results.
Karbon streamlines the deployment and management of Kubernetes clusters with a simple GUI
integrated into Prism Central (PC). Kibana, the built-in add-on, lets you filter and parse logs for
systems, pods, and nodes. Prometheus, another add-on, provides a monitoring mechanism that
triggers alerts on your cluster. Karbon also uses Pulse, Prism's health-monitoring system, which
interacts with Nutanix Support to expedite cluster issue resolutions.
To set up your Karbon environment, perform the following tasks:

• Ensure that your environment meets the requirements, see Requirements on page 5.
• Enable Karbon through Prism Central (PC) and set up a cluster, see Cluster Setup on
page 12.
• Download the kubeconfig, see Downloading the Kubeconfig on page 34.
• Configure access, see Access and Authentication on page 43.

Kubernetes
Karbon orchestrates Kubernetes clusters to simplify the provisioning and management
of containerized applications. Kubernetes packages applications in their own dedicated
containers together with all of the required operational components for running the application.
Containers, which run inside pods on top of nodes, are the core building block of Kubernetes
architecture.
Containerized applications are simple to manage, easy to deploy, and portable as they are
abstracted from the OS of the host. Because containers share the host's operating system kernel,
they do not require as much compute capacity as a VM, making them "lightweight".
Using Karbon to manage Kubernetes operations requires a basic familiarity with key Kubernetes
concepts.
Reference Kubernetes documentation to gain a better understanding of containerization,
cluster architecture, workloads, and storage concepts.
2
REQUIREMENTS
Meet the indicated requirements prior to enabling and deploying Karbon.

Cluster Requirements
Ensure that the configuration of the Prism Element (PE) cluster meets the following
specifications:

• AHV
• A minimum of 120 MB of memory and 700 MB of disk space in PC
Do the following before deploying Karbon:

• See Karbon Software Compatibility in the Nutanix Karbon Release Notes for Prism Central
(PC) and Prism Element (PE) compatibility requirements.
• Register the PE cluster to PC.
• Configure the cluster virtual IP address and the iSCSI data services IP address on the
designated PE cluster.

Note: Karbon does not recognize changes to the iSCSI Data Service IP.

• Configure the Network Time Protocol (NTP) and the Domain Name System (DNS) in PE and
PC.
• Synchronize the time of the cluster, PC, and the clients that use kubectl.

Note: Karbon requires using the UTC timezone.

• Open the required ports and whitelist the required domains, see the Firewall Requirements and Port Requirements sections below.
• (Production clusters only) Configure an AHV network with IP address management (IPAM)
enabled and IP pools configured.
• (Development clusters only) Configure an AHV network with IPAM and IP address pools or
with an external DHCP network.
• (Airgap only) Ensure that you are using a Linux-based web server.

Note:

• IPAM is for Kubernetes cluster deployment.


• The DHCP pool is for the worker nodes and the master nodes in a Virtual Router
Redundancy Protocol (VRRP) environment.
• The Kubernetes nodes being deployed and the Airgap VM must be in the same
subnet.



Firewall Requirements
Karbon supports only unauthenticated HTTP proxies. Use the IP address or the fully qualified
domain name (FQDN) format. Ensure that your firewall allows Karbon VMs and CVMs to reach the
domains and subdomains below. Also, exclude the following domains from secure sockets layer
(SSL) inspection in the firewall.

• *.cloudfront.net

• *.quay.io

• ntnx-portal.s3.amazonaws.com

• portal.nutanix.com

• release-api.nutanix.com

• s3*.amazonaws.com

• *.compute-1.amazonaws.com

Note: Karbon does not automatically recognize changes to proxy settings, which can cause
cluster unavailability. Modifying proxy settings requires restarting the Karbon service on the
cluster, see Restarting the Karbon Service on page 45.
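
As a quick check, you can verify from the Prism Central VM (or a CVM) that one of these endpoints is reachable through your firewall or proxy. The command below is a generic connectivity test rather than a Karbon-specific tool, and portal.nutanix.com is taken from the list above:

nutanix@pcvm$ curl -sSI https://portal.nutanix.com

An HTTP status line in the response (for example, 200 or a redirect) indicates that the domain is reachable; a timeout or certificate error suggests that the firewall or SSL inspection is still interfering.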

Port Requirements
For a list of required Karbon and Karbon Airgap ports go to Support & Insights Portal >
Documentation > Ports and Protocols. The Port Reference on the Ports and Protocols page
provides detailed port information for Nutanix products and services, including port sources
and destinations, service descriptions, directionality, and protocol requirements.
3
ENABLING KARBON
Enable Karbon through Prism Central.

Before you begin


Ensure that you meet all requirements prior to enabling Karbon, see Requirements on page 5.

Procedure

1. Log on to Prism Central.

2. Click the collapsed menu icon.

3. In the Services option, click Karbon.

Figure 1: Enable Karbon Window

4. Click Enable Karbon in the main console window.

Note: Karbon might take several minutes to launch.

5. Once you receive the message Karbon is successfully enabled, click the here link in the
message to access Karbon.

Figure 2: Go To Karbon Console Window



6. In Karbon, a message directs you to download a node OS image, see Downloading Images on
page 35.

What to do next
Ensure that you are running a general availability (GA) version of Karbon by performing an
inventory check using the Life-Cycle Manager (LCM), see Karbon Upgrades on page 51. If
you are using LCM at a dark site, see Upgrading the Karbon Airgap on page 54.
4
AIRGAP
Use the Airgap to manage Kubernetes clusters using Karbon offline.
The Karbon Airgap eliminates the need for Internet access to manage Kubernetes clusters by
providing required services offline through a local Docker registry. However, deploying the
Airgap requires a device with Internet access to download the Airgap bundle and manifest files
hosted online on the Nutanix Support Portal. After enabling the Karbon Airgap, you can use it
to deploy and manage Kubernetes clusters.
The required Docker registry runs on a VM that hosts the container images required for
deploying Kubernetes clusters using Karbon. Prism Central (PC) manages the registry VM,
which runs in Prism Element (PE). You cannot modify the settings of the registry VM.
See Deploying the Karbon Airgap for deployment steps.

Note:

• If there are any Kubernetes clusters running, you cannot disable the Airgap.
• The Karbon Airgap bundle version should be the same as the Karbon version running
on Prism Central.
• After Airgap is deployed, you cannot modify the network configuration, such as
the subnet or IP address. To modify the network configuration, you must disable
Airgap and re-enable it. However, you cannot disable Airgap while there are existing
Kubernetes clusters deployed. Therefore, you cannot modify the Airgap network
configuration after enabling it initially.

Deploying the Karbon Airgap


This procedure describes steps for deploying the Karbon airgap.

About this task


Deploying the airgap requires downloading the required files from the Nutanix Support Portal
and transferring them to a local web server.

Before you begin

• Plan to use a Linux-based host to extract deployment files and to transfer them to a Linux-
based web server.
• Ensure that your environment meets all Karbon requirements, see Requirements on page 5.
• Ensure that you do not have any Kubernetes clusters deployed.
• Ensure that you have a managed VLAN.



• Create directory ntnx-version-number on a local web server.

Note: As a best practice, use the full airgap version number (for example, use ntnx-2.1.0).

• Log on to Prism Element and get the following details:

Note: The airgap VM requires a managed VLAN, even when using a static IP.

• Network name (in the Network view)
• Storage container name (in the Storage view)
• Prism Element cluster name (in the Home view)

Note: Karbon creates the airgap VM using the specified Prism Element cluster and network.
The specified storage container uses a volume group (VG) to store Docker images.

Procedure

1. If you have not already, enable Karbon through Prism Central (PC), see Enabling Karbon on
page 7.

2. In the Support Portal, go to collapse menu icon > Downloads > Karbon to download the
deployment files.

a. To download the airgap-ntnx-version-number.tar.gz file, next to Karbon Airgap bundle,
click Download.
b. To download the airgap-manifest.json file, click Metadata under the Download button.
c. Transfer the airgap-ntnx-version-number.tar.gz and airgap-manifest.json deployment
files to the ntnx-version-number directory on your local web server.
d. Untar the airgap-ntnx-version-number.tar.gz file.
The following deployment files and directories appear: host-images, host-images.json,
ntnx-k8s-releases, ntnx-k8s-releases.json.

3. Log on to karbonctl, see Logging on to the Karbonctl on page 35.

4. To deploy the airgap, replace the values as indicated and then run the following command.

nutanix@pcvm$ /home/nutanix/karbon/karbonctl airgap enable \
--webserver-url http://webserver_url/ntnx-version-number/ \
--vlan-name network_name --static-ip static_IP_address \
--storage-container storage_container_name \
--pe-cluster-name PE_cluster_name

• Replace webserver_url with the URL of the web-server directory hosting the airgap
package.
• Replace network_name with the VLAN network name for the airgap VM deployment.
• Replace static_IP_address with the static IP address for the airgap VM deployment.
• Replace storage_container_name with the name of the storage container the airgap must
use for volume deployment.



• Replace PE_cluster_name with the name of the Prism Element (PE) cluster.
The airgap deployment process begins. You can track the deployment progress in the Tasks
tab in PC.
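
For example, with a hypothetical web server at 10.10.10.50 hosting the ntnx-2.1.0 directory, the command might look like the following; every value shown is a placeholder for your environment:

nutanix@pcvm$ /home/nutanix/karbon/karbonctl airgap enable \
--webserver-url http://10.10.10.50/ntnx-2.1.0/ \
--vlan-name vlan110 --static-ip 10.10.10.60 \
--storage-container default-container \
--pe-cluster-name PE-Cluster01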

Disabling the Airgap


Use the karbonctl to disable the airgap.

About this task

Note: If there are any Kubernetes clusters running, you cannot disable the airgap.

Procedure

1. Log on to the karbonctl, see Logging on to the Karbonctl on page 35.

2. To display the unique universal identifier (UUID) of the airgap, list all airgaps.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl airgap list

3. To disable the airgap, run the following command, replacing UUID with the UUID of the airgap.

nutanix@pcvm$ /home/nutanix/karbon/karbonctl airgap disable --airgap-uuid UUID



5
CLUSTER SETUP
Creating a cluster in Karbon consists of setting up the following:

• Recommended configurations (optional)


• Name and environment
• Node configuration
• Network
• Storage class
For step-by-step directions for creating a Kubernetes cluster, see Creating a Cluster on
page 14.

Caution: Nutanix recommends adhering to the CIS Benchmark security recommendation for
containers prior to deploying them on a Kubernetes cluster. For details, see the Karbon Release
Notes.

Recommended Configurations
Before configuring your cluster, you can choose from one of the recommended configurations
to streamline the deployment process. You can customize the recommended configurations
during and after deployment.
The recommended configurations include two options: development cluster and production
cluster. The development cluster option provides the minimum resources to launch a cluster.
The production cluster configuration provides enough resources for a high-availability,
production-ready cluster.

Name and Environment


To set up the Kubernetes environment, you must name the new cluster and add an optional
description. The name must start with a letter or a number followed by up to 40 lowercase
letters, numbers, or hyphens (the name cannot end with a hyphen). You must specify the Prism
Element (PE) cluster, the Kubernetes version, and the node operating system (OS) image.
Run the Kubernetes cluster on any PE cluster that meets the requirements and has enough
resources for the desired configuration.
Download a node OS image prior to deploying a cluster, see Downloading Images on
page 35. Karbon supports the CentOS Linux distribution. See Requirements on page 5 for a
list of supported versions.

Node Configuration

Note: After cluster configuration, anti-affinity rules are automatically created for the control
plane and worker node VMs.

Note: Do not install Nutanix Guest Tools (NGT) or any other services onto Kubernetes nodes.



To configure worker node, control plane, and etcd node resources, you must first specify
the Kubernetes node network, which determines the network that the nodes run on. Optionally,
you can specify an additional node network to provide a dedicated network for storage traffic.
The additional node network is specific to the I/O paths of the worker nodes: primary data
management is handled by the primary node network, while data (I/O) traffic travels through
the secondary network. If the secondary network is not configured, all data traffic travels
through the single primary node network.
Next specify the number of nodes and the virtual CPU (vCPU), memory, and size resources for
each node type.
Worker nodes are responsible for services that run pods in a Kubernetes cluster. These services
include Flannel for controlling traffic between nodes; Fluent Bit for system, pod, and node log
collection; kubelet for running node operations; and the kube-proxy.
Etcd nodes store cluster configuration data and status details.
Control plane nodes run critical services including the API server, controller-manager, and kube-
scheduler. Specify the control plane resource configuration from one of the following: single-
control plane, active-passive, or external load balancer.

Note: Do not use single-control plane deployments in production environments.

Single-control plane deployments have preconfigured resources. When a single-control plane


node is unavailable due to upgrades, bugs, or maintenance, the scheduling service is also
unavailable and Karbon cannot deploy new pods. Multi-control plane deployments circumvent
these issues by providing high-availability.
For multi-control plane deployments, you have the option of using an external load-balancer,
which you must administer prior to deployment; or you can choose the Virtual Router
Redundancy Protocol (VRRP), which Karbon provides when you select the active-passive multi-
control plane configuration.
Multi-control plane deployments backed by an external load balancer let nodes perform
workload deployment and management at the same time, creating scalability by supporting
more workloads. A load balancer requires multiple IP addresses: one external IP address and
one private IP address for each control plane node. The kubectl, kubelet, and Kubernetes
controllers use the public IP to communicate with the API server.
VRRP, the active-passive multi-control plane configuration, is an alternative to using a load
balancer. VRRP does not provide the same scalability, but it does provide a backup control
plane node for accessing the cluster if the active control plane happens to go down. Thus, when
you select the active-passive configuration, the number of control plane nodes is always two.

Network
You can use Calico or Flannel as the container network interface (CNI). Flannel for Karbon uses
the VXLAN mode. To set up your network, you must provide classless inter-domain routing
(CIDR) ranges and specify the network provider. You can leave the service CIDR and pod CIDR
ranges as default, but the ranges must not overlap with each other or the existing network. The
service CIDR and pod CIDR ranges do not require a VLAN. Karbon does not route the ranges
outside of the cluster.
The number of required physical IP addresses varies depending on the configuration and the
number of control plane, etcd, and worker nodes.



For development environment configurations, Karbon requires a dynamic host configuration
protocol (DHCP), or a network with AHV IP address management (IPAM) enabled and IP pools
setup.
For production environment configurations, IP requirements vary depending on the mode. For
the active-passive mode, IPAM provides IPs for all of the nodes, and you must provide a virtual
IP (VIP) for the Kubernetes API server. The VIP must be part of the same VLAN but outside of
the IP pool. For an active-active mode, pre-configure the external load balancer and control
plane nodes with designated IP addresses.

Storage Class
You can have multiple storage classes for your Kubernetes deployment, but once you assign
a storage class to a cluster, you cannot modify it. A storage class defines the properties of a
persistent volume (PV), which provides storage for the cluster. When the cluster needs more
storage, a persistent volume claim (PVC) sends a request for a new PV. The PVC contains
storage class details for the new PV.
Nutanix supports dynamic provisioning of PVs using the CSI Volume Driver, which runs in a
Kubernetes pod. The CSI Volume Driver waits for a PVC request for a storage class and then
creates a PV for that request, see CSI Volume Driver documentation for more details.
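
For example, once a cluster and its storage class exist, a workload can request storage with a PVC such as the following. The claim name, namespace, storage class name, and size are placeholders; the storage class must already exist on the cluster:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
  namespace: default
spec:
  storageClassName: default-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

The CSI Volume Driver detects the claim, creates a backing PV of the requested size, and binds it to the claim.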
When deploying a cluster, Karbon uses Nutanix Volumes storage. After deployment, you can
add more storage classes and use either Nutanix Volumes or Nutanix Files storage.
For storage classes using Volumes, choose a Prism Element cluster. Next, select a storage
container and a file system your cluster will use for allocating storage. Supported file systems
include ext4 and xfs.
For Files, specify an NFS export that the cluster will use for storage.
To optimize the performance of a cluster, enable Flash Mode, which is available for both
Volumes and Files. Enabling Flash Mode boosts the performance of a cluster by storing data
only on the solid-state drives (SSDs) of the hot tier.

Creating a Cluster
Create a Kubernetes cluster in Karbon by configuring cluster settings.

About this task


For details on setup options and requirements, see Cluster Setup on page 12.

Note: Ensure that you are running a general availability (GA) version of Karbon by performing an
inventory check using the Life-Cycle Manager, see Karbon Upgrades on page 51.

Note: Constrained bandwidth can lead to cluster deployment timeout.

Procedure

1. Go to Karbon from Prism Central.

a. Click the collapsed menu icon.


b. Go to Services > Karbon.

2. Click + Create Kubernetes Cluster.


The Create Cluster page appears.



3. In the Recommended Configurations tab, choose one of the recommended configuration
options.

• Development Cluster: Used for development clusters only. Do not use this option for
production environments. The option provides the minimum resources needed for a
cluster.
• Production Cluster: Used for production environments. It provides a default
configuration for high-availability clusters.

4. Click Next.

5. In the Name and Environment section, do the following in the indicated fields:

• Kubernetes Cluster Name: Choose a name for your cluster. The name must start with a
letter or a number followed by up to 40 lowercase letters, numbers, or hyphens (cannot
end with a hyphen).
• Nutanix Cluster: Choose which Prism Element cluster to run the Kubernetes cluster on.
• Kubernetes Version: Choose from one of three Kubernetes versions.
• Host OS: Select the version of the downloaded node OS image (centos), see
Downloading Images on page 35.

Figure 3: Name and Environment Configuration Window

6. Click Next.



7. In the Node Configuration tab, do the following in the indicated fields.

Caution: If you exceed available capacity on any underlying hardware when choosing
memory, CPU, and disk size, the Kubernetes cluster cannot deploy.

a. Configure Network Resources:

• Kubernetes Node Network: Select one of the available networks.


• (optional) Additional Node Network: To optimize network traffic, select another
network from the PE cluster.

Note: IPAM is available for production clusters. IPAM or DHCP is available for
development clusters.

Figure 4: Network and Node Resources


b. Configure Worker Resources:

• Number of Workers: Enter the number of Kubernetes worker nodes.


• (optional) Click Edit to customize worker resources:

• VCPU: Enter the number of vCPUs per worker node.


• Memory: Enter the memory capacity per worker node (recommended 8 GiB per
node).
• Size: Enter the hard disk size (recommended 120 GiB per worker node).
c. Configure Control Plane Resources:



• Select Control Plane Resource Configuration: Choose one of the following options
from the dropdown:

• 1 (suitable for non-production clusters only)


• Multi-Control Plane: Active-Passive
• Multi-Control Plane: External Load Balancer
• Control plane VIP IP Address (active-passive only): Enter a VIP for the Kubernetes
API server.

Caution: Using the same managed network and IP addresses as another multi-control
plane deployment results in deployment failure.

Note: IPAM provides IPs for the nodes. The VIP must be part of the same VLAN but
outside of the IP pool, see the "Network" section in Cluster Setup on page 12.

• Number of Control Planes: Enter the number of Kubernetes control plane nodes.

Note: single-control plane and active-passive configurations have a set number of


control plane nodes. Production deployments require at least two control plane nodes.

• (optional) Click Edit to customize control plane resources:

• VCPU: Enter the number of vCPUs per node.


• Memory: Enter the memory capacity per node.
• Size: Enter the hard disk size (recommended 120 GiB per control plane node).
• (For external load balancer only) Enter IP addresses:

Note: The control plane IPs come from the non-DHCP pool of the AHV network. The
IP addresses auto populate but are modifiable.

• External Load Balancer IP: Enter the IP address for the external load balancer.
• Control Plane IP Addresses: Enter an IP address for each control plane node.

Note: If you use an external load balancer, it needs to be configured as an L4/TCP(443)


load balancer pointing to each control plane node.

d. Configure Etcd Resources:

• Number of etcd nodes: Select the number of etcd nodes.


• (optional) Click Edit to customize etcd resources:

• VCPU: Enter the number of vCPUs per node.


• Memory: Enter the memory capacity per node.
• Size: Enter the hard disk size (recommended 120 GiB per etcd node).

8. Click Next.
The Network page appears.



9. To configure Network settings, do the following in the indicated fields:

• Network Provider: Select the network provider from the dropdown menu.
• Service CIDR Range: Enter an IP address range within your network range (RFC-1918) in
CIDR notation, or use the default values. The IP range exposes services.

Caution: The service CIDR and pod CIDR must not overlap with the Kubernetes node
network IP range or with each other. Default ranges can overlap.

• Pod CIDR Range: Enter an IP address range within your network range (RFC-1918) in
CIDR notation for pod-to-pod communication. You can also use the default values.
Kubernetes assigns pods in the cluster an IP address from this range.

Note: Kubernetes configuration files refer to the pod CIDR as a "cluster IP".

Figure 5: Network Configuration Window

10. Click Next.



11. To configure Storage Class settings, do the following in the indicated fields:

Note: You can create more storage classes once you create the cluster, see Creating a
Storage Class on page 36.

Note: During cluster deployment, Karbon uses Nutanix Volumes storage for the storage
class. After deployment, you can create a Nutanix Files storage class for the cluster.

• Storage Class Name: Enter the name for the storage class. The name must start with a
letter or a number. Only use lowercase alphanumeric characters, hyphens, and periods
(maximum 253 characters).
• Nutanix Cluster: Select the target cluster for allocating storage for stateful pods.
• Storage Container Name: Select the storage container for storage.
• Reclaim Policy: The reclaim policy specifies what the cluster does with the volume once
it is not in use. Select Delete or Retain.
• File System: Select the file system for the storage class (xfs or ext4).
• Enable Flash Mode: Check this box to use SSDs for data storage and improved
performance.

Note: Karbon automatically provisions some storage for node and system pod logs based
on the size of the cluster.



Figure 6: Storage Class Configuration Window

12. Click Create.


The new cluster appears on the Karbon home page. A new task for the cluster deployment
displays the progress of the deployment in the Tasks tile on the Summary page.

Note: After cluster configuration, anti-affinity rules are automatically created for the
control plane and worker node VMs.

Creating a Cluster Using the Karbon API


Create a Kubernetes cluster using the karbonctl, the Karbon command-line utility.

About this task


Deploying a Kubernetes cluster requires creating a JSON payload that includes the following
specifications:

• Number of etcd nodes and etcd resource configuration



• Number of worker nodes and worker resource configuration
• Number of control plane nodes and control plane resource configuration
• Calico or Flannel container network interface (CNI)
• Pod and service CIDR ranges for the CNI
• Kubernetes and node OS image versions
Cluster Setup on page 12 describes cluster deployment requirements and specifications in
detail.
Perform the following tasks to deploy a Kubernetes cluster.

Procedure

1. Go to the Nutanix Developer Portal and create a JSON file using the POST /karbon/v1/k8s/
clusters API code.

2. Update the attributes in a deployment JSON file and save the file. Refer to Cluster
Deployment Attributes on page 21 and Cluster Setup on page 12 for information on
cluster setup.

3. Run the following command to deploy the cluster using the deployment JSON.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster deploy --filepath deployment-JSON-
filepath
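
The following is a minimal sketch of such a deployment JSON for a small development cluster. It is illustrative only: the field names and nesting mirror the attributes described in Cluster Deployment Attributes on page 21, every UUID, name, version, and CIDR range is a placeholder, the control plane section is omitted for brevity, and the template generated from the Nutanix Developer Portal remains the authoritative schema (it may group node pools and control plane settings differently).

{
  "name": "dev-cluster",
  "metadata": { "api_version": "v1.0.0" },
  "version": "<kubernetes-version>",
  "cni_config": {
    "node_cidr_mask_size": 24,
    "pod_ipv4_cidr": "172.20.0.0/16",
    "service_ipv4_cidr": "172.19.0.0/16"
  },
  "etcd_config": {
    "name": "etcd-pool",
    "node_os_version": "<node-os-version>",
    "num_instances": 1,
    "ahv_config": {
      "cpu": 4,
      "memory_mib": 8192,
      "disk_mib": 122880,
      "network_uuid": "<network-uuid>",
      "prism_element_cluster_uuid": "<pe-cluster-uuid>"
    }
  },
  "workers_config": {
    "name": "worker-pool",
    "node_os_version": "<node-os-version>",
    "num_instances": 1,
    "ahv_config": {
      "cpu": 8,
      "memory_mib": 8192,
      "disk_mib": 122880,
      "network_uuid": "<network-uuid>",
      "prism_element_cluster_uuid": "<pe-cluster-uuid>"
    }
  },
  "storage_class_config": {
    "default_storage_class": true,
    "name": "default-storageclass",
    "reclaim_policy": "Delete",
    "volumes_config": {
      "file_system": "ext4",
      "flash_mode": false,
      "username": "admin",
      "password": "<pe-password>",
      "prism_element_cluster_uuid": "<pe-cluster-uuid>",
      "storage_container": "default-container"
    }
  }
}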

Cluster Deployment Attributes


A description of API attributes for deploying a Kubernetes cluster using the karbonctl.
The following table describes the API code attributes for Kubernetes cluster deployment using
the karbonctl. To deploy a cluster using the karbonctl, refer to Creating a Cluster Using the
Karbon API on page 20.

Tip: You can find the required cluster and network-specific information in Prism or use the
following commands.

• To list the Prism Element (PE) unique universal identifier (UUID), run the following
command from the PE cluster.
nutanix@CVM:~$ ncli cluster info |grep "Cluster Uuid"

• To list the unique universal identifier (UUID) of the network, run the following
command from the PE cluster.
nutanix@CVM:~$ acli net.list

• To list the name of the container, run the following command from the PE cluster.
nutanix@CVM:~$ ncli ctr list |grep Name|grep -v VStore



Table 1: Deployment File Attributes

cni_config

• node_cidr_mask_size: The mask size (CIDR prefix) of the node CIDR (Classless Inter-Domain Routing). Value: [Integer]
• pod_ipv4_cidr: An IP address range within your network for pod-to-pod communication. Value: [An IP address range in CIDR notation. For example, 172.20.0.0/16.]
• service_ipv4_cidr: An IP address range within your network for service exposure. Value: [String in CIDR notation. For example, 172.20.0.0/16.]

etcd_config

• ahv_config
  • cpu: The number of vCPUs per node. Value: [Integer]
  • disk_mib: The hard disk size (recommended 130 GiB per etcd node). Value: [Integer] (GiB)
  • memory_mib: The memory capacity per node. Value: [Integer] (MiB)
  • network_uuid: The unique universal identifier (UUID) of the network. Value: [String]
  • prism_element_cluster_uuid: The unique universal identifier (UUID) of the Prism Element (PE) cluster. Value: [String]
• name: The name of the etcd node pool. Value: [String]
• node_os_version: The version of the node OS image. All node types (etcd, worker, and control plane) must use the same node OS version. Value: [String]
• num_instances: The number of etcd nodes. Value: [Integer]

control_planes_config

• active_passive_config
  • external_ipv4_address: The IP address of the external load balancer. Value: [String]
• external_lb_config
  • external_ipv4_address: The IP address of the external load balancer. Value: [String]
  • ipv4_address: The IP addresses of the control plane nodes. Enter an IP address for each node. Value: [String]
  • node_pool_name: The name of the control plane node pool. Value: [String]
• node_pools
  • cpu: The number of vCPUs per node. Value: [Integer]
  • disk_mib: The hard disk size. Value: [Integer] (GiB)
  • memory_mib: The memory capacity per node. Value: [Integer] (MiB)
  • network_uuid: The unique universal identifier (UUID) of the network. Value: [String]
  • prism_element_cluster_uuid: The unique universal identifier (UUID) of the Prism Element (PE) cluster. Value: [String]
  • name: The name of the control plane node pool. Value: [String]
  • node_os_version: The version of the node OS image. All node types (etcd, worker, and control plane) must use the same node OS version. Value: [String]
  • num_instances: The number of control plane nodes. Value: [Integer]

metadata

• api_version: The API version used for cluster deployment. Value: v1.0.0
• name: The name of the Kubernetes cluster. Value: [String]

storage_class_config

• default_storage_class: Specifies whether the storage class used for the cluster is the default. Value: true, false
• name: The name of the storage class. Value: [String]
• reclaim_policy: The reclaim policy specifies what happens to a volume that is no longer in use. Value: Delete
• volumes_config
  • file_system: The file system for allocating storage. Value: ext4, xfs
  • flash_mode: The flash tier provides high-performance storage when flash mode is enabled. Value: true, false
  • password: Password for the PE cluster. Value: [String]
  • prism_element_cluster_uuid: The unique universal identifier (UUID) for the PE cluster. Value: [String]
  • storage_container: The name of the Nutanix storage container. Value: [String]
  • username: The PE username. Value: [String]

version: The Kubernetes version of the deployment. Refer to the Karbon release notes for supported versions. Value: [Decimal]

workers_config

• ahv_config
  • cpu: The number of vCPUs per node. Value: [Integer]
  • disk_mib: The hard disk size. Value: [Integer] (GiB)
  • memory_mib: The memory capacity per node. Value: [Integer] (MiB)
  • network_uuid: The unique universal identifier (UUID) of the network. Value: [String]
  • prism_element_cluster_uuid: The unique universal identifier (UUID) of the Prism Element (PE) cluster. Value: [String]
• name: The name of the worker node pool. Value: [String]
• node_os_version: The version of the node OS image. All node types (etcd, worker, and control plane) must use the same node OS version. Value: [String]
• num_instances: The number of worker nodes. Value: [Integer]



6
KARBON LAYOUT
The Karbon home page consists of the Clusters and the OS Images tabs. The Clusters tab is
the main landing page and displays a list of all Karbon clusters. The OS Images tab displays
available CentOS images and the download image option.

• Clusters on page 26
• OS Images on page 32

Clusters
The Clusters tab includes the following sections and options:

• Create a cluster with the Create Kubernetes Cluster button.


• An Actions drop-down with the following options:

• Download the Kubernetes configuration file to your client by clicking Download


Kubeconfig. The kubeconfig lets you run kubectl commands against the Kubernetes
cluster, see Downloading the Kubeconfig on page 34.
• The SSH Access option lets you access nodes in a Kubernetes cluster using an ephemeral
certificate, which expires after 24-hours. See Accessing Locked Nodes on page 43.
• Use the Upgrade Node OS Image option to upgrade the Karbon node OS image for the
cluster. See Upgrading a Node OS Image on page 52.
• Use the Upgrade Kubernetes option to upgrade the Kubernetes version of the
cluster. See Upgrading Kubernetes Using the Karbonctl on page 53.
• Use the Delete Cluster option to delete the selected cluster.

Figure 7: Karbon Clusters Table

The home page also includes a table with the following details of every cluster.



Table 2: Clusters Table

Parameter Description
Name The name of the cluster.
OS Image The node OS image version used for the
cluster.
Nodes The number of worker nodes on the cluster.
Status The status field describes the health of the
cluster, specifies the status of operations,
and notes when there are available upgrades
(Deploying, Healthy, Upgrading, Failed).
Version Kubernetes version.

Clicking any name of a cluster on the Clusters page takes you to the Summary page of
the cluster. The menu pane of the Summary page includes the following tabs: Summary,
Alerts, Tasks, Storage Class, Volume, Add-On, and Nodes.

Summary
Clicking the name of a cluster on the Clusters page takes you to the Summary page. The
Summary page consists of multiple tiles that provide details on the cluster's usage, alerts, tasks,
and nodes.

Properties, Alerts, and Tasks

Figure 8: Summary Page

Table 3: Properties Tile

Parameter Description
API Endpoint Control plane server endpoint.
Version Kubernetes version.



Parameter Description
Health The health status of the cluster.
Network Provider The network provider for the cluster.

Table 4: Alerts Tile

Parameter Description
Current Alerts Lists the number of critical, warning, and info
alerts for the last 24 hours
Icon An icon indicating the severity of the alert
appears next to a description of the issue.
View All Alerts Click the View All Alerts link to go to the
Alerts tab.

Table 5: Tasks Tile

Parameter Description
Task Lists various operations on the cluster.
Icon Indicates if the task completed successfully.
View All Tasks Click the View All Tasks link to go to the Tasks
tab.

Node Tiles
The node tiles display the following node details for each node type: etcd, control plane, and
worker.

Table 6: Node Tile Parameters

Parameter Description

Node(s) The quantity of the specified nodes in the


cluster.
vCPU The total number of vCPUs for the specified
node type.
Memory The amount of memory currently in use out of
the total configured memory.
Disk Space The amount of disk space currently in use out
of the total configured disk space.

Alerts
The Alerts tab consists of a table that lists and describes recent alerts for the cluster. The
Prometheus add-on powers Karbon alerts, see Monitoring on page 63.
The alerts table consists of the following parameters.



Figure 9: Alerts Page

Table 7: Alerts Table

Parameter Description
Alert Name The name of the alert.
Severity The severity of the alert.
Message A description of the alert issue.
Time Created The date and time when the alert was created.
Status The current status for the alert.

Tasks
The Tasks page consists of a table that lists and describes recent tasks performed on the
cluster.
The tasks table consists of the following parameters.

Figure 10: Tasks Page

Table 8: Tasks Table

Parameter Description
Name The name of the task including the entity.
Percentage Complete The progress of the task displayed as a
percentage.
Status The status of the task.
Time Created The date and time of task creation.



Storage Class
The Storage Class tab includes Create Storage Class and Delete Storage Class action buttons,
see Creating a Storage Class on page 36. The Storage Class tab also includes a table with
details about each storage class, see table below for details.

Figure 11: Storage Class Tab

Table 9: Storage Class Tab Parameters

Parameter Description
Name The name of the storage class.
Volume Type The type of storage used for Persistent
Volumes.
Default Storage Class Specifies if the storage class is the default
(True or False).
Cluster The name of the underlying cluster.
Reclaim Policy The reclaim policy specifies what happens
to the volume once it is no longer in use
(Delete or Retain).

Volume
The Volume tab includes Create Volume and Delete Volume action buttons, see Creating
a Volume on page 38. The Volume tab also includes a table with details about each
PersistentVolume (PV), see table below for details.

Figure 12: Volume Tab



Table 10: Volume Table

Parameter Description
Claim Name Name used to attach storage to pods
(maximum 253 characters). The name must
start with a letter or a number. Only use
lowercase alphanumeric characters, hyphens,
and periods.
Namespace Selected Kubernetes namespace (available
options are displayed).
Storage Class Storage class used by the volume.
Size The size of the volume in GiB.
Status Persistent volume status (Pending, Available,
Bound, Released, Failed).
Access Mode ReadWriteOnce is the only available access
mode for Nutanix Volumes storage. Nutanix
Files storage supports both ReadWriteMany
and ReadWriteOnce access modes.

Add-On
The Add-on page lists the add-ons installed on the Kubernetes cluster, see Add-Ons on
page 62. Refer to the table below for a description of the Add-on tab parameters.

Figure 13: Add-ons Tab

Table 11: Add-On Table

Parameter Description
Name Name of the add-on.
State Status of the add-on
Size Disk space allocated to the add-on (GiB)
Version Describes the version of the add-on, and
includes a Launch add-on action link.

Nodes
The Nodes tab provides three subtabs, one for each type of node in the cluster: control plane,
worker, and etcd. Each subtab consists of a table with details about the indicated node type. The



Worker tab has a button to Add Node Pool and an Actions drop-down menu with options to
Resize, Update, or Delete the selected node pool.
Tabs for each node type provide a table with the following details.

Table 12: Control Plane and Etcd Node Table Parameters

Parameter Description

Name The name of the node.


IP address The IP address of the node.
CPU The number of vCPUs allocated for the node.
Memory The memory capacity for the node (GiB).
Storage The storage capacity for the node (GiB).

Table 13: Node Table Parameters

Parameter Description

Name The name of the node.


CPU The number of vCPUs allocated per node.
Memory The memory capacity per node (GiB).
Storage The storage capacity per node (GiB).
Network The primary and additional networks
configured for the node pool.
Label (worker nodes only) The key value pair labels of the node pool.
Nodes The number of worker nodes in the node pool.
Clicking the dropdown arrow displays name
and the IP address of each worker node.

OS Images
The OS Images tab provides an overview of available and downloaded node OS images on your
cluster. When a new version of an image is available, the tab includes options to download the
new OS image. For steps to download a new image, see Downloading Images on page 35.

Note: All supported OS images are provided by Karbon during and after deployment. Refer to
the Karbon Release Notes for information on image compatibility.

After downloading a new image from the OS images tab, upgrade cluster images through the
Clusters tab, see Upgrading a Node OS Image on page 52.
The OS Images tab includes a table that describes the available, downloaded, and deleted
images in your environment. Refer to the table below for details.



Table 14: OS Images Table

Parameter Description
Image Version The Linux distribution and image version.
Release Notes A link to the release notes for the indicated OS
image.
Size The size of the OS image (GiB).
Download Status The status of the image on your Karbon
deployment (Download, Downloading, and
Downloaded).



7
CLUSTER ADMINISTRATION
Manage your Kubernetes clusters.
After deploying Karbon and creating Kubernetes clusters, manage nodes, images, user access,
and other aspects of your environment.

Note: After the Karbon cluster is deployed, there must be no manual modification of Kubernetes
components (such as etcd, kubelet, apiserver, and so on) or add-ons (such as DNS, Prometheus,
Elasticsearch, and so on). Only the resource values for the logging and monitoring stack can be
modified, and only when required.

Downloading the Kubeconfig


Before you begin
Ensure that you have downloaded kubectl to the machine from which you manage the cluster.
Also, configure IAM on PC, see the Prism Central Supplement for details.

About this task


The kubeconfig is a configuration file for running kubectl commands against the deployed
Kubernetes cluster. The kubeconfig is signed for specific users rather than clusters. The
kubeconfig token expires after 24 hours.
To deploy applications on your cluster using kubectl, download the Kubernetes cluster
configuration file (Kubeconfig) to your host.

Procedure

1. In the Clusters view, select a cluster from the list by checking the adjacent box.

2. Click the Actions drop-down.

3. Click Download Kubeconfig.

Figure 14: Kubeconfig Button

4. Under Instructions, click Download.



5. Run the kubeconfig commands by doing one of the following:

» Click Copy the command to clipboard and run the command to finish the download
process.
» Manually run the required commands (continue to step 6).

6. Replace the path to the prod1-kubectl.cfg file with the file path to the downloaded file as it
appears in your directory. Run the following command on your host.
$ export KUBECONFIG=/path/to/prod1-kubectl.cfg

You have set the kubeconfig environment variable.

7. To test the cluster, run the following commands.


$ kubectl cluster-info
$ kubectl get nodes
$ kubectl get pods --all-namespaces
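
Alternatively, if you are logged on to karbonctl on the Prism Central VM (see Logging on to the Karbonctl on page 35), you can generate a kubeconfig from the command line; the cluster name below is a placeholder:

nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster kubeconfig --cluster-name my-cluster > my-cluster.cfg

The token inside the generated file also expires after 24 hours.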

Logging on to the Karbonctl


Use the indicated CLI command to log on to the karbonctl.

About this task


The karbonctl is the Karbon command-line utility that you can use to manage Karbon.

Procedure
Log on to karbonctl from a PC cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl login --pc-username username \
--pc-password password
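
Once logged on, you can confirm that karbonctl is working, for example by listing the Kubernetes clusters that Karbon manages:

nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster list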

Downloading Images
About this task
Deploying Kubernetes clusters in Karbon requires a CentOS image. You must choose a
CentOS version and download the image.

Caution: You must use one of the images provided by Nutanix.

Note: The airgap package includes downloaded node OS images.

Follow these steps to download a node OS image.

Procedure

1. In Karbon, click the OS Images tab in the menu pane.

2. Click Download to start the download process.


The Download Status column will show the progress of the download.



Creating a Storage Class
About this task
After you create your first cluster and your first storage class, you can create additional storage
classes through the Storage Class tab.

Note: Changes to the Cluster Virtual IP or admin username in Prism Element affect the storage
class configuration. For Cluster Virtual IP and username update assistance, contact Nutanix
Support.

Note: The first time you create a cluster, you also create the default storage class. Refer to the
Kubernetes documentation for directions on changing the default storage class.

Follow these steps to create a storage class.

Procedure

1. In the Clusters view, click the target cluster.

2. In the menu pane, click Storage Class.

3. Click the Create Storage Class button.



4. (For Nutanix Files go to step 5) For Nutanix Volumes storage, do the following in the
indicated fields:

• Volume Type: Select Nutanix Volumes.


• Storage Class Name: Enter the name for the storage class. The name must start with a
letter or a number. Only use lowercase alphanumeric characters, hyphens, and periods
(maximum 253 characters).
• Nutanix Cluster: Select the target cluster for allocating storage for stateful pods.
• Storage Container Name: Select the storage container to use for persistent volumes
(PVs).
• File System: Select the file system for the storage class (xfs or ext4).
• Reclaim Policy: The Reclaim Policy specifies what the cluster should do with the volume
once it is not in use. Select Delete or Retain.
• Enable Flash Mode: Check this box for improved performance. With Flash Mode, Karbon
uses only SSDs in the hot-tier for storage.

Note: Some storage is automatically provisioned for node and system pod logs based on the
size of the cluster.

a. Continue to step 6.

Figure 15: Nutanix Volumes - Storage Class



5. For Nutanix Files storage, do the following in the indicated fields:

• Volume Type: Select Nutanix Files.


• Storage Class Name: Enter the name for the storage class. The name must start with a
letter or a number. Only use lowercase alphanumeric characters, hyphens, and periods
(maximum 253 characters).
• NFS Export Endpoint: Enter the endpoint within the NFS export. The endpoint must be a
host name or an IP address.
• NFS Export Path: Enter the path to the NFS export endpoint.
• Reclaim Policy: The Reclaim Policy specifies what the cluster should do with the volume
once it is not in use. Select Delete or Retain.

Note: Some storage is automatically provisioned for node and system pod logs based on the
size of the cluster.

Figure 16: Nutanix Files - Storage Class

6. Click Create.
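
After the storage class is created, you can optionally confirm it from the command line with kubectl; for example (my-storage-class is a hypothetical name):

$ kubectl get storageclass
$ kubectl describe storageclass my-storage-class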

Deleting a Storage Class

About this task


Follow these steps to delete a storage class.

Procedure

1. In the Clusters view, click the target cluster.

2. In the menu pane, click Storage Class.

3. Check the box for the storage class you want to delete.

4. Click the Delete Storage Class button.

Creating a Volume
Create a Persistent Volume (PV) for your cluster.

About this task



Procedure

1. In the Clusters view, click the target cluster.

2. In the menu pane, click Volume.

3. Click Create Volume.

4. Do the following in the indicated fields:

a. Claim Name: Enter a name for the Persistent Volume Claim (PVC).


b. Namespace: Select a namespace for the PVC.
c. Storage Class: Select a storage class.
d. Access Mode: Select an access mode from Read-Write-Once or Read-Only-Many.
e. Volume: Enter the size for the volume (GiB).

Figure 17: Create Volume Window

5. Click Create.
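
As an alternative to the UI, an equivalent claim can be created with kubectl. A minimal sketch, assuming a hypothetical claim named my-claim in the default namespace and an existing storage class named default-storageclass:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
  namespace: default
spec:
  storageClassName: default-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF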

Deleting a Volume

About this task


Follow these steps to delete a volume.

Procedure

1. In the Clusters view, click the target cluster.



2. In the menu pane, click Volume.

3. Check the box for the volume you want to delete.

4. Click the Delete Volume button.
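
The claim can also be removed with kubectl; for example, assuming the hypothetical claim created earlier. Deleting the claim reclaims or retains the underlying volume according to the Reclaim Policy of its storage class.

$ kubectl delete pvc my-claim -n default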

Creating a Node Pool


Create more worker node pools on the cluster.

About this task


A node pool is a set of nodes with the same configuration. Node pools help accommodate the
varying resource and scaling needs of different workloads. Follow the steps as indicated to
create a node pool.

Procedure

1. In the Clusters view, click the target cluster.

2. In the menu pane, click Nodes > Worker.

3. Click + Add Node Pool.

4. Do the following in the indicated fields:

• Name: Enter a name for the node pool.

Note: Only use alphanumeric characters or the hyphen (-) special character.

• Number of Nodes: Enter an integer for the number of worker nodes.


• Node Properties

• CPU: Enter an integer for the amount of CPU allocated per node.
• Memory: Enter an integer for the amount of memory allocated per node (GiB).
• Storage: Enter an integer for the amount of storage allocated per node (GiB).
• Node Pool Network: Choose the primary network for the nodes. Use any network of
the Prism Element (PE) cluster that has connectivity with the Karbon cluster.
• (optional) Additional Network: The iSCSI network used for I/O optimization.
• Metadata
Use key and value pairs to add meaningful labels to the node pool.

• Key: Enter a descriptive label for the key (for example, environment).

Note: Only use alphanumeric characters or the hyphen (-) special character.

• Value: Enter a descriptive label for the value (for example, dev).
• Click +.
• (optional) To add more key value pairs, click +.

5. Click Add.
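
A node pool can also be added from the command line with karbonctl, using the same node-pool add command shown in Configuring GPU Support on page 42. A sketch with hypothetical values (omit the GPU flags for a standard worker pool):

nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster node-pool add --cluster-name my-cluster \
--node-pool-name general-pool --count 3 --cpu 8 --memory 8 --disk-size 120 \
--labels environment=dev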



Updating the Number of Worker Nodes
To resize a node pool, add or remove the number of worker nodes.

About this task


You cannot modify the memory, storage, or CPU configuration of an existing node pool. Perform
the following steps to add or reduce the number of worker nodes in your cluster.

Procedure

1. In the Clusters view, click the target cluster.

2. In the menu pane, click Nodes > Worker.

3. Select the target node pool.

4. Click Actions > Resize.

Note:

• If you want to delete a worker node, select Actions > Delete. Note that
you cannot delete the default node pool.
• If the last worker node cannot be deleted from the Karbon UI, that worker
node cannot be deleted from Prism Element either.

5. Under Number of Nodes, indicate the desired number of worker nodes for the node pool.
For example, if you currently have three worker nodes in the node pool but you want to have
a total of five nodes, click + so that the Number of Nodes is 5.

Note: Reducing the number of nodes might delete the most recently added nodes.

Figure 18: Add Worker Window

6. Click Resize.

Updating Node Pool Metadata


Update the metadata of a node pool.

About this task


Follow the steps as indicated.

Procedure

1. In the Clusters view, click the target cluster.



2. In the menu pane, click Nodes > Worker.

3. Select the target node pool.

4. Click Actions > Update.

5. Under Metadata, do the following.


Use key and value pairs to add meaningful labels to the node pool.

• Key: Enter a descriptive label for the key (for example, environment).

Note: Only use alphanumeric characters or the hyphen (-) special character.

• Value: Enter a descriptive label for the value (for example, dev).
• Click +.
• (optional) To add more key value pairs, click +.
• (optional) To delete a key value pair, click the delete label icon.

6. Click Update.

Configuring GPU Support


Configure pass-through GPU support on an existing node pool.

Before you begin


Meet the following requirements:

• Install the ntnx-1.2 or later node OS image on the cluster.


• Allocate a minimum of 9 GiB of memory for the worker node pool to support installation of
the NVIDIA GPU operator.
• Ensure that the host has supported GPU hardware. In Prism Central, go to Entities Menu >
Hardware > GPUs.
• Check that AHV supports the GPU. See "Supported GPUs" in the AHV Administration Guide.
Nutanix recommends reviewing the NVIDIA GPU compatibility and support.

Note: GPU enablement requires installation of NVIDIA datacenter driver software governed by
NVIDIA licensing terms.

About this task


Follow the steps as indicated.

Procedure

1. List the pass-through GPU configurations associated with the Prism Element (PE) cluster of
the Karbon cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster gpu-inventory list --cluster-name name-of-karbon-cluster



2. Add a GPU configuration type to a node pool.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster node-pool add --cluster-name name-of-cluster \
--node-pool-name name-of-node-pool --count number-of-nodes --gpu-count gpus-per-node \
--gpu-name gpu-config-type --memory size-GiB

Add additional parameters as needed; a filled-in sketch follows this list.

• Specify the number of CPUs (8 by default): --cpu number-of-CPUs


• Specify the size of the GPU hard disk (120 GiB by default): --disk-size size-GiB
• Add a label by specifying a key-value pair (for example, --labels environment=dev): --labels
key=value

• Specify the name or the universally unique identifier (UUID) of the VLAN: --vlan-name name
or --vlan-uuid UUID
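
For example, a filled-in sketch of the command with hypothetical values, where the GPU configuration type comes from the inventory listed in step 1:

nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster node-pool add --cluster-name my-cluster \
--node-pool-name gpu-pool --count 2 --gpu-count 1 --gpu-name gpu-config-type-from-step-1 \
--memory 16 --cpu 8 --disk-size 120 --labels workload=gpu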

What to do next
Install the GPU operator. Go to nutanix.dev for guidance.

Access and Authentication


Prism Central (PC) administrators with the "User Admin" role have full access to Karbon and its
functionality. Most karbonctl operations require User Admin privileges. PC administrators
who do not have the "User Admin" role (Cluster Admin and Viewer) can only access Karbon
to download the kubeconfig and cannot perform any other administrative tasks. See "User
Management" in the Prism Web Console Guide for steps on assigning roles.
Nutanix requires configuring Karbon users through a directory service in Prism. See Security
Management in the Prism Web Console Guide for directions on configuring a directory service.
After setting up and testing your cluster, configure role-based access control (RBAC); see the
Kubernetes documentation for reference.
To access a node in the cluster, you must obtain an ephemeral certificate; see Accessing
Locked Nodes on page 43.

Accessing Locked Nodes

About this task


Karbon protects all nodes in a cluster. You can access nodes in a Kubernetes cluster using
an ephemeral certificate, which expires after 24 hours. Perform the following steps to get a
certificate.

Procedure

1. In the Clusters view, select the target cluster.

2. Click the SSH Access button.



3. In the Node SSH Access window, click Download to download and save the SSH access
script to your client.

Figure 19: Node SSH Access

4. Run the following command.
$ sh <cluster_name>-ssh-access.sh

5. When prompted, enter the IP of any node in the cluster to get access to all nodes.
Karbon grants the user a private key.

6. Log on to the target node as a Nutanix user using the command line.

Rotating Certificates
Update certificates for cluster services and add-ons.

About this task


Cluster certificates expire after two years. Once a certificate expires, the cluster becomes
unhealthy, which can lead to unsuccessful operations on the cluster. Triggering a certificate
rotation on a cluster restarts add-ons and node services, including the kubelet and the API
server.
To trigger certificate rotation, follow the steps as indicated.

Procedure
Initiate certificate rotation.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster certificates rotate-cert --cluster-name k8-cluster-name

Migrating DVP and CSI to Use Certificate-Based Authentication


Migrate DVP and CSI to use certificate-based authentication.



Before you begin
Ensure that the cluster is healthy.

About this task


To migrate DVP and CSI from username/password-based authentication to certificate-based
authentication, initiate a certificate rotation.
To trigger certificate rotation, follow the steps as indicated.

Procedure
Initiate certificate rotation.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster certificates rotate-cert --cluster-name k8-cluster-name

Restarting the Karbon Service


Restart the Karbon service on a cluster.

About this task


Run the following commands from a Prism Central VM (PCVM).

Procedure

1. Stop the karbon_core service on all the PCVMs.


nutanix@pcvm$ allssh "genesis stop karbon_core"

2. Start the karbon_core service.


nutanix@pcvm$ cluster start
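
To confirm that the service has come back up, you can optionally check the service status from a PCVM. This is a quick check, assuming the service appears under the name karbon_core in the genesis output:

nutanix@pcvm$ genesis status | grep karbon_core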

Backing up Etcd
Back up your Karbon Kubernetes cluster etcd data regularly.

About this task

Important: The following procedure does not back up the application data.

Follow the steps as indicated to back up etcd data only.

Procedure

1. Collect the IP addresses of etcd nodes.

a. In Karbon, select the target cluster, expand Nodes in the sidebar, and click etcd.
b. Note the IP addresses of each etcd node.



2. Back up the etcd data from only one node.

a. Securely log on to a single etcd node using SSH (see Accessing Locked Nodes on
page 43).
b. Use sudo to become the root user.
$ sudo su

c. Back up the etcd data.


# export ETCD_IP_0=<replace with etcd 0 IP address>
# export ETCD_IP_1=<replace with etcd 1 IP address, only production clusters>
# export ETCD_IP_2=<replace with etcd 2 IP address, only production clusters>
# export ETCDCTL_API=3
# export ETCDCTL_CACERT=/var/nutanix/etc/etcd/ssl/ca.pem;export \
ETCDCTL_CERT=/var/nutanix/etc/etcd/ssl/peer.pem;export \
ETCDCTL_KEY=/var/nutanix/etc/etcd/ssl/peer-key.pem
# export ETCDCTL_ENDPOINTS=\
"https://$ETCD_IP_0:2379,https://$ETCD_IP_1:2379,https://$ETCD_IP_2:2379"
# etcdctl --endpoints https://$ETCD_IP_0:2379 snapshot save /root/snapshot.db

d. Copy /root/snapshot.db to a safe location. For example, copy the file using SFTP (Secure
File Transfer Protocol) or SCP (Secure Copy Protocol).
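
Optionally, verify the snapshot before copying it off the node. The following etcdctl command prints the snapshot hash, revision, total key count, and size:

# etcdctl snapshot status /root/snapshot.db -w table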

Stopping a Karbon Cluster


Gracefully stop a Karbon cluster.

Before you begin

• Stop the apps and pods in the Karbon cluster. This helps reduce the possibility of data
corruption.
• Take a backup of the Kubernetes cluster etcd data (see Backing up Etcd on page 45).

About this task


Follow the steps as indicated to shut down a Karbon cluster.

Procedure

1. Collect the cluster name and IP address of one of the control plane nodes.

a. In Karbon, select the target cluster, expand Nodes, and click Control Plane.
b. Note the IP address of one control plane node.



2. Use SSH to log on to one of the control plane nodes (see Accessing Locked Nodes on
page 43) for shutting down the Kubernetes pods.

a. Cordon the worker nodes to prevent the scheduler from placing new pods onto the
nodes.
$ kubectl cordon -l 'kubernetes.io/role=node'

b. Drain the worker nodes to gracefully shut down the pods.

Note: This command ignores any PodDisruptionBudget.

$ kubectl drain -l 'kubernetes.io/role=node' --disable-eviction --ignore-daemonsets \
--delete-emptydir-data --force

c. Wait for the previous command to complete its execution.
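
d. (Optional) Confirm that the worker nodes are cordoned and drained before powering them off; the worker nodes should report a Ready,SchedulingDisabled status.
$ kubectl get nodes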

3. Use Prism Central UI to select the worker nodes.

a. In Prism Central, click the hamburger menu, expand Compute & Storage, and then click
VMs.
b. Click Filters, select the Name checkbox, change the option to Starts with, and type the
following:
karbon-kubernetes_cluster_name-
where kubernetes_cluster_name is the name of the Karbon cluster.

c. Ensure that the filter shows only the desired Kubernetes cluster nodes.
d. Select the virtual machines with the text worker in their name.

4. Soft shutdown the worker nodes.

a. With the worker nodes selected, click Actions and then click Soft Shutdown.
b. Click OK to confirm.

5. Once the worker nodes are powered off, perform step 3 for control plane nodes.

a. Unselect the worker nodes.


b. Select the virtual machines with the text master in their name.
c. Soft shutdown the control plane (see step 4).

6. Once the control plane nodes are powered off, perform step 3 for etcd nodes.

a. Unselect control plane nodes.


b. Select a single etcd node.
c. Soft shutdown the etcd node (see step 4).
Repeat the steps for each etcd node on the cluster once the previous node has been
powered off.

Starting a Karbon Cluster


Starting a Karbon cluster after shutdown.



About this task
Follow the steps as indicated to start a Karbon cluster.

Procedure

1. Use Prism Central UI to select the etcd nodes.

a. In Prism Central, click the hamburger menu, expand Compute & Storage, and then click
VMs.
b. Click Filters, select the Name checkbox, change the option to Starts with, and type the
following:
karbon-kubernetes_cluster_name-
where kubernetes_cluster_name is the name of the Karbon cluster.

c. Ensure that the filter shows only the desired Kubernetes cluster nodes.
d. Select the virtual machines with the text etcd in their name.

2. Start all etcd nodes.

a. With the etcd nodes selected, click Actions followed by Power On.
b. Click OK to confirm.

3. Collect the IP addresses of etcd nodes.

a. In Karbon, select the target cluster, expand Nodes in the sidebar, and click etcd.
b. Note the IP addresses for each etcd node.

4. Securely log on to an etcd node using SSH, see Accessing Locked Nodes on page 43.

5. Use sudo to become the root user.


$ sudo su

6. Verify that services have started on all etcd nodes.


# export ETCD_IP_0=<replace with etcd 0 IP address>
# export ETCD_IP_1=<replace with etcd 1 IP address, only production clusters>
# export ETCD_IP_2=<replace with etcd 2 IP address, only production clusters>
# export ETCDCTL_API=3
# export ETCDCTL_CACERT=/var/nutanix/etc/etcd/ssl/ca.pem;export \
ETCDCTL_CERT=/var/nutanix/etc/etcd/ssl/peer.pem;export \
ETCDCTL_KEY=/var/nutanix/etc/etcd/ssl/peer-key.pem
# export ETCDCTL_ENDPOINTS=\



"https://$ETCD_IP_0:2379,https://$ETCD_IP_1:2379,https://$ETCD_IP_2:2379"
# etcdctl -w table endpoint status

The output generated should be similar to the following image, displaying three etcd nodes
with one of them as the leader.

Figure 20: Starting a Karbon Cluster: verifying services on all etcd nodes

7. If there are any missing etcd nodes, ensure the following:

a. The nodes are up.


b. The etcd service has started.
c. There is network connectivity.

8. Start all control plane nodes.

a. Perform step 1 for control plane nodes.

• Unselect etcd nodes.


• Select the virtual machines with the text master in their name.
b. Power on the control plane (see step 2).

9. Use SSH to log on to one of the control plane nodes (see Accessing Locked Nodes on
page 43) to check the control plane status.

a. To verify that the control plane nodes are up, use kubectl to check for the Ready status.
$ watch kubectl get nodes

b. If the control plane nodes are NotReady, then check whether there are connection issues
between the control plane nodes and etcd nodes.

10. Start all the worker nodes.

a. Perform step 1 for worker nodes.

• Unselect the control plane nodes.


• Select the virtual machines with the text worker in their name.
b. Power on the workers (see step 2).



11. Use SSH to log on to one of the control plane nodes (see Accessing Locked Nodes on
page 43) to check the cluster status and uncordon the worker nodes.

a. To verify that all worker nodes are up, use kubectl to check for the Ready,SchedulingDisabled status.
$ watch kubectl get nodes

b. Uncordon all the worker nodes to start the pods.


$ kubectl uncordon -l 'kubernetes.io/role=node'

c. To verify that all pods are up, use kubectl to check for the Running status.
$ watch kubectl get pods --all-namespaces

12. In the Karbon UI, verify that the cluster status is healthy.

8. Upgrades
There are two different types of Karbon upgrades:

• Karbon version upgrades using the Life Cycle Management feature, see Karbon Upgrades on
page 51.
• Node OS image upgrade, see Upgrading a Node OS Image on page 52.
Perform LCM upgrades through Prism Central (PC). Karbon is part of the PC upgrades module
in LCM. LCM upgrades the following Karbon components:

• Karbon version
• Karbon UI
Perform node OS image upgrades through Karbon. When a node OS image upgrade is
available, Karbon displays an option to download the new image in the OS Images tab, see OS
Images on page 32. Karbon also displays an Upgrade Available icon next to the cluster in the
Clusters view, see Upgrading a Node OS Image on page 52.

Karbon Upgrades
To check the current version of Karbon or to upgrade to later versions, perform the inventory
check in Prism Central using LCM.
For steps on performing inventory and upgrades in LCM, refer to the Life Cycle Manager Guide.
Ensure that you are running a compatible version of Prism Central (PC), Prism Element, and
AOS, see the Karbon Release Notes for compatibility details.

Note:

• After enabling PC scale-out, upgrade all PC nodes to a compatible version of Karbon.
• Before upgrading to Karbon 2.4, ensure that all the deployed clusters are running
Kubernetes v1.18.x or a newer Kubernetes version. This is the minimum supported
Kubernetes version in Karbon 2.4. LCM upgrades will fail if the Kubernetes version in
the clusters is older than this minimum supported version.

Technical Preview Versions of Karbon


Perform the following tasks to upgrade from a technical preview to a general availability (GA).

• Upgrade to a compatible version of Prism Central using LCM in Prism Central, see the most
recent version of the Life Cycle Management Guide.



• Delete any clusters created during technical preview.

Note: You cannot upgrade to a GA version of Karbon without first deleting clusters created
during the technical preview; otherwise, an error message is displayed.

• Perform inventory in LCM and update to a GA version of Karbon.

First Time Karbon Users


Perform the following tasks to install Karbon in your environment.

• Perform an inventory and upgrade to a compatible version of PC using the Life Cycle
Management (LCM) feature in PC; see the Life Cycle Management Guide.
• Enable Karbon, see Enabling Karbon on page 7.
• In LCM, perform inventory and update Karbon to an available GA version.

Upgrading a Node OS Image


Upgrade the node OS images for your cluster.

About this task

Caution:

• Upgrading a node OS image clears the contents of the /dev/sda boot disk. Ensure
that the /dev/sda disk does not contain any persistent content or files.
• Avoid using local storage pods. Upgrading a node OS image deletes the data in the
local storage.

Note: Karbon supports legacy images on existing clusters. Existing Kubernetes clusters do not
require an image upgrade.

Procedure

1. In Karbon, go to the Clusters view.

2. Select a cluster by checking the box next to the cluster name.

Note: Clusters that have an image eligible for an upgrade display the Upgrade Available icon.

3. Click the Actions button.

4. Click Upgrade Node Image.

» Download the target image.
» Upgrade using a previously downloaded image (continue to step 6).

5. In the Upgrade Host Image OS window, click Download to download the target image. Wait
for the image to download.

6. Select the target image, and click Upgrade.


The status of the upgrade displays under Status on the Clusters page. More details about the
upgrade display on the Tasks tile.



Upgrading Kubernetes
Upgrade the Kubernetes version of your cluster.

About this task


Perform the following steps as indicated.

Caution: Avoid using local storage pods. Upgrading Kubernetes deletes the data in the local
storage.

Procedure

1. In Karbon, go to the Clusters view.

2. Select a cluster by checking the box next to the cluster name.

Note: Clusters that have a Kubernetes version eligible for an upgrade display the Upgrade
Available icon in the table.

3. Click the Actions button.

4. Under List of Available Kubernetes Version for Upgrade, select the target Kubernetes
version.

Figure 21: Upgrading Kubernetes: versions available for upgrade

5. Click the dropdown arrow next to Upgrade.

a. To check the health of nodes and underlying components, click Precheck.


b. To upgrade to the selected Kubernetes version, click Upgrade.
The upgrade task initiates. You can monitor the upgrade process in the Tasks view.

Upgrading Kubernetes Using the Karbonctl


Upgrade the Kubernetes version of your cluster.



About this task
You can use karbonctl to perform Kubernetes upgrades manually instead of using the
Karbon UI. See Logging on to the Karbonctl on page 35 for steps to log on to karbonctl.

Procedure

1. List Kubernetes versions and names of existing clusters.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster list

2. Get the latest add-on versions.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl k8s get-from-portal

3. Get the list of compatible upgrade paths.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster k8s get-compatible-versions --cluster-name cluster-name

4. Upgrade the cluster to the specified Kubernetes version.

Note: The package consists of Kubernetes and add-on versions. Currently, Karbon only
supports the Kubernetes version.

nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster k8s upgrade --cluster-name cluster-name \
--package-version package-version

What to do next
Check the upgrade status in the Tasks view, or using karbonctl.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster k8s upgrade status --cluster-name cluster-name

Upgrading the Karbon Airgap


Upgrade the Karbon airgap using the Life Cycle Manager (LCM).

Before you begin


You need a local web server reachable by your Nutanix clusters to host the LCM repository.

Note: Karbon airgap upgrades using Prism Central (PC) 5.16.1.2 cause an error. If PC is running
5.16.1.2, upgrade PC before attempting to upgrade the airgap.

Procedure

1. From a device that has public Internet access, go to Nutanix Portal > Downloads > LCM.

a. Next to LCM Dark Site Bundle (version), click Download to download the
lcm_dark_site_bundle_version.tar.gz file.

b. Transfer lcm_dark_site_bundle_version.tar.gz to your local web server and untar it into
the release directory.



2. From a device that has public Internet access, go to the Nutanix portal and navigate to
Downloads > Karbon.

a. Next to LCM Darksite bundle for Karbon, click Download to download the
lcm-darksite_karbon-builds_version-number.tar.gz file.

b. Transfer lcm-darksite_karbon-builds_version-number.tar.gz to your local web server and
untar it into the release directory.

3. Log on to Prism Central.

4. Click Home > LCM > Settings.

a. In the Fetch updates from field, enter the path to the directory where you extracted the
tar file on your local server. Use the format http://webserver_IP_address/release.

b. Click Save.
You return to the Life Cycle Manager.
c. In the LCM sidebar, click Inventory > Perform Inventory.
d. Update the LCM framework before trying to update any other component.
The LCM sidebar shows the LCM framework with the same version as the file you
downloaded.

Updating OS Images and Kubernetes for Airgap


To upgrade OS images or the Kubernetes version on the airgap, upload Karbon Airgap bundle
and manifest files to a web server.

Before you begin

• Review Karbon Release Notes for image compatibility details.


• Create a directory named ntnx-version-number on your local web server.

Note: As a best practice, use the full version number (for example, 2.0.0 or 2.0.1).

About this task

Procedure

1. From a device that has public Internet access, go to Support Portal > collapse menu icon >
Downloads > Karbon.

2. Under Download Karbon Airgap bundle and manifest files, do the following.

a. Click Karbon Airgap bundle to download airgap-ntnx-version-number.tar.gz.

b. Click Airgap Manifest to download airgap-manifest.json.

Tip: Verify that the Karbon version in the airgap-manifest.json is for a version that
includes new images.

3. Transfer airgap-ntnx-version-number.tar.gz and airgap-manifest.json files to a local web server.

a. Transfer and untar the files in the ntnx-version-number directory.



4. Log on to karbonctl, see Logging on to the Karbonctl on page 35.

5. Upload the new images to the airgap. Replace airgap-UUID with the universally unique
identifier for the airgap. Replace webserver-directory-URL with the URL for the directory you
transferred files to in step 3.

nutanix@pcvm$ /home/nutanix/karbon/karbonctl airgap package-upload --airgap-uuid airgap-UUID \
--webserver-url webserver-directory-URL
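
If you need to look up the airgap UUID, it can typically be listed with karbonctl; this is a hedged sketch, so verify the subcommand against your Karbon version:

nutanix@pcvm$ /home/nutanix/karbon/karbonctl airgap list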

6. Follow the steps in Upgrading a Node OS Image on page 52 to upgrade an OS image for a
cluster. To upgrade the Kubernetes version, follow the steps in Upgrading Kubernetes Using the
Karbonctl on page 53.

9. Options
Karbon provides multiple options to further customize your Kubernetes implementation.

Enabling Alert Forwarding


About this task
This procedure describes the steps for enabling SMTP-based alert forwarding to an e-mail
address.

Procedure

1. List the names of Kubernetes clusters.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster list

2. Replace the variables in the command as indicated; a filled-in example follows this list.

Note: If you have enabled transport-level security (TLS), specify the ca-cert-path, client-
cert-path, and key-path variables.

/home/nutanix/karbon/karbonctl cluster alerts enable-smtp --cluster-name="cluster-name" \
--from-email-address="from-email-address" --to-email-address="to-email-address" \
--host="host-address" --port=port-number --smtp-username="smtp-username" \
--smtp-passwd="smtp-password" --tls

• Replace cluster-name with the name of the Kubernetes cluster.


• Replace from-email-address with the email address to be used as Source.
• Replace to-email-address with the email address to which alerts are sent.
• Replace host-address with the IP address or DNS name of the SMTP server.
• Replace port-number with the Port of the SMTP server.
• (Optional) Replace smtp-username with the username for SMTP authentication.
• (Optional) Replace smtp-password with the password for SMTP authentication.
• (Optional) Use --tls for TLS/STARTTLS.
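
For example, a filled-in sketch of the command with hypothetical values:

/home/nutanix/karbon/karbonctl cluster alerts enable-smtp --cluster-name="prod-cluster" \
--from-email-address="karbon-alerts@example.com" --to-email-address="ops@example.com" \
--host="smtp.example.com" --port=587 --smtp-username="karbon-alerts" \
--smtp-passwd="smtp-password" --tls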

Disabling Alert Forwarding

About this task


This procedure describes the steps for disabling SMTP-based alert forwarding.



Procedure

1. List the names of Kubernetes clusters.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster list

2. Disable alert forwarding. Replace the cluster-name with the name of the target cluster.
/home/nutanix/karbon/karbonctl cluster alerts disable-smtp --cluster-name="cluster-name"

Disabling Infra Logging


About this task
This procedure describes the steps for disabling the infra logging stack (Elasticsearch and
Kibana) on a Kubernetes cluster for the system namespaces.

Procedure

1. List the names of Kubernetes clusters.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster list

2. Disable infra logging. Replace the cluster-name with the name of the target cluster.
/home/nutanix/karbon/karbonctl cluster infra-logging disable --cluster-name="cluster-name"

Enabling Infra Logging

About this task


This procedure describes the steps for enabling the infra logging stack (Elasticsearch and
Kibana) on a Kubernetes cluster for the system namespaces.

Procedure

1. List the names of Kubernetes clusters.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster list

2. Enable infra logging. Replace the cluster-name with the name of the target cluster.
/home/nutanix/karbon/karbonctl cluster infra-logging enable --cluster-name="cluster-name"

Configuring a Private Registry


Configure a private registry service to your Kubernetes cluster.

About this task


By default, Karbon does not add additional container image registries to Kubernetes clusters.
To use your own images for container deployment, add a private registry to Karbon and
configure private registry access for the intended Kubernetes clusters.
Follow the steps as indicated to configure a private registry for your cluster.



Procedure

1. Add the private registry to Karbon.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl registry add --name registry-name \
--url registry-url [--port registry-port] [--username username --password password] \
[--cert-file cert-filepath]

Replace the arguments as indicated; a filled-in example follows these notes.

• Replace registry-name with name of the registry.


• Replace registry-url with the URL of the registry.
• If you are not using port 443, replace registry-port with the port number to your private
registry.
• To apply username and password authentication to the registry, replace username and
password with the desired authentication credentials.

Note: If you want to add user authentication after registry creation, delete the registry and
create a new one with the desired authentication.

• If the registry is certificate-based, replace cert-filepath with the file path to the certificate.

Note: Omit the cert-filepath parameter for HTTP-based registries. The Docker
configuration supports insecure registries.

Note: To configure a private registry that uses token authentication instead of certificates,
contact Nutanix Support.
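
For example, a filled-in sketch of the registry add command with hypothetical values for a certificate-based registry listening on port 5000:

nutanix@pcvm$ /home/nutanix/karbon/karbonctl registry add --name my-registry \
--url https://registry.example.com --port 5000 \
--username registry-user --password registry-password \
--cert-file /home/nutanix/my-registry-ca.pem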

2. Check for custom registries known to Karbon.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl registry list

3. Add the private registry to a Kubernetes cluster. Replace cluster-name with the name of the
Kubernetes cluster. Replace registry-name with the name of the registry (as in step 1).
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster registry add --cluster-name \
cluster-name --registry-name registry-name

4. Confirm that the Karbon and the Kubernetes clusters have access to the custom registry.
Replace cluster-name with the name of the Kubernetes cluster.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster registry list --cluster-name cluster-
name

Deleting a Private Registry


Delete a private registry from your Karbon environment.

About this task


Delete access to a private registry before removing the registry from Karbon. Follow the steps
as indicated:



Procedure

1. Delete access to a private registry from your Kubernetes cluster. Replace cluster-name with
the name of the Kubernetes cluster, and replace registry-name with the name of the
target registry.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster registry delete --cluster-name cluster-
name \
--registry-name registry-name

2. Delete the private registry from Karbon. Replace registry-name with the name of the target
registry.

Note: Before deleting the registry, revoke registry access from all clusters.
nutanix@pcvm$ /home/nutanix/karbon/karbonctl registry delete --registry-name registry-name

Enabling Log Forwarding


Enable log forwarding to an external endpoint (syslog or Elasticsearch).

About this task

Procedure

1. List the names of Kubernetes clusters.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster list

2. Replace the variables in the command as indicated; a filled-in example follows this list.

Note: If you have enabled transport-level security (TLS), specify the ca-cert-path, client-
cert-path, and key-path variables.

/home/nutanix/karbon/karbonctl cluster log-forward enable --cluster-name="cluster-name" \
--endpoint_type endpoint-type --host="host-IP-address" --port=port-number \
--http_username="http-username" --http_passwd="http-password" \
--ca_cert ca-cert-path --cert client-cert-path --key key-path

• Replace cluster-name with the name of the Kubernetes cluster.


• Replace endpoint-type with the type of external logging endpoint; this can be elasticsearch
or syslog.

• Replace host-IP-address with the IP address or host name of the external endpoint.


• Replace port-number with the external endpoint port.
• (Optional) Replace http-username with the HTTP username for Elasticsearch
authentication.
• (Optional) Replace http-password with the HTTP password for Elasticsearch
authentication.
• (Optional) Replace ca-cert-path with the absolute path to the certificate authority (CA)
file. Use the privacy enhanced mail (PEM) format.
• (Optional) Replace client-cert-path with the absolute path to the client certificate signed
by the CA. Use the PEM format.



• (Optional) Replace key-path with the absolute path to the client key.
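
For example, a filled-in sketch of the command with hypothetical values for an Elasticsearch endpoint without TLS:

/home/nutanix/karbon/karbonctl cluster log-forward enable --cluster-name="prod-cluster" \
--endpoint_type elasticsearch --host="10.10.10.50" --port=9200 \
--http_username="elastic-user" --http_passwd="elastic-password"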

Disabling Log Forwarding


Disable log forwarding to an external endpoint.

About this task

Procedure

1. List the names of Kubernetes clusters.


nutanix@pcvm$ /home/nutanix/karbon/karbonctl cluster list

2. Disable log forwarding. Replace the cluster-name with the name of the target cluster.
/home/nutanix/karbon/karbonctl cluster log-forward disable --cluster-name=cluster-name

Network Segmentation
Segregate Nutanix Volumes iSCSI traffic from other traffic.

Important: Before using network segmentation with Karbon, enable network segmentation for
Nutanix Volumes, see "Service-Specific Traffic Isolation" in the AOS Security Guide.

Configure network segmentation on a cluster by specifying the segmented network as an
additional network during cluster creation, see Creating a Cluster on page 14. Optionally,
configure a segmented network as an additional network by adding a worker node pool to an
existing cluster, see Creating a Node Pool on page 40.

Important: Karbon only supports using network segmentation for container workloads
with Nutanix Volumes when specifically configured as part of a new storage class. See the
isSegmentedIscsiNetwork parameter in Creating a Storage Class (Nutanix Volumes) topic in
CSI Volume Driver 2.5. Network segmentation is not enabled for communication to volumes
supporting etcd and other default services. Configuring network segmentation on a cluster that
uses both Nutanix Files and Nutanix Volumes storage provides a dedicated network to Nutanix
Volumes traffic only.



10. Add-Ons
Karbon add-ons are open source software extensions that provide additional features to your
deployment. The add-ons are automatically installed when you enable Karbon.
Karbon includes the following add-ons:

• A logging add-on powered by Kibana, see Logging on page 62
• A monitoring add-on powered by Prometheus, see Monitoring on page 63

Logging
The Kibana data-visualization plugin is the Karbon logging add-on.
The Kibana dashboard has a custom tab for the LogTrail plugin (not available on Kubernetes
version 1.20 and above), which displays data for the selected namespaces. By default, LogTrail
is configured to display logs for the system namespaces of the Kubernetes cluster: kube-system
and ntnx-logging.
Access the add-on through the Karbon UI, as access to pods is restricted.

Note: Do not delete or modify the supporting namespaces.

Figure 22: Kibana Add-On

Settings
The Settings filter displays logs for the selected entity. Kibana displays the hostname in orange
and the pod name in blue. Click the colored text to filter by pod or hostname.
You can select the following logging options from the Settings tab:



Table 15: Logging Settings

Setting          Description
kubernetes-*     (default) Displays logs for pods running in the kube-system and ntnx-logging namespaces.
systemd-*        Displays logs for the kubelet control plane and worker services of every node.
etcd-*           Displays logs from etcd services running on etcd VMs.

All Systems
By default, the All Systems tab displays logs from all nodes. You can also use it to filter the
display to only show logs for specific nodes.

Monitoring
The built-in Prometheus add-on provides monitoring for Kubernetes clusters. Prometheus
scans clusters for health and consumption, provides data for metrics, and triggers alerts and
notifications that appear in the Karbon Console.
Prometheus feeds data to the alerts tab in the Karbon user interface (UI).

Note: Do not delete or modify the supporting namespaces.

