VMware admin doc


/etc/init.d/hostd restart
/etc/init.d/vpxa restart

/opt/likewise/bin/domainjoin-cli query (check whether the vCenter appliance is joined to the AD domain)
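if the query shows the VCSA has dropped out of the domain, it can be rejoined with the same tool; the domain FQDN and join account below are placeholders, not values from this environment:

/opt/likewise/bin/domainjoin-cli join <domain-fqdn> <join-account>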

esxcli software vib install -d /vmfs/volumes/GLDEWE-CP280003-SP1-C1-PZ/VMware-ESXi-7.0.3c-19193900-LNV-20220119.zip
esxcli software vib update -d /vmfs/volumes/GLCHST-CP280001-SP1-C1/VMware-ESXi-7.0U2c-Patch.zip
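before applying a patch it is usually safer to put the host in maintenance mode and do a dry run first; a minimal sketch reusing the update bundle above (--dry-run reports what would change without installing anything):

esxcli system maintenanceMode set --enable true
esxcli software vib update -d /vmfs/volumes/GLCHST-CP280001-SP1-C1/VMware-ESXi-7.0U2c-Patch.zip --dry-run
esxcli system maintenanceMode set --enable false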

type dcui after logging in to the host as root to get the direct console

HA restarted VM report (PowerCLI)
Get-VIEvent -MaxSamples 100000 -Start (Get-Date).AddDays(-1) -Type Warning | Where {$_.FullFormattedMessage -match "restarted"} | Select CreatedTime,FullFormattedMessage | Sort CreatedTime -Descending
--------------------------------------------------------------------------------
command to check for CRC errors on a NIC

esxcli network nic stats get -n vmnic1 | egrep "Total receive errors|Receive CRC errors|Receive missed errors"
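to sweep every uplink at once, the same check can be wrapped in a loop; a sketch that assumes the standard esxcli network nic list output layout (two header lines, NIC name in the first column):

for nic in $(esxcli network nic list | awk 'NR>2 {print $1}'); do
  echo "=== $nic ==="
  esxcli network nic stats get -n $nic | egrep "Total receive errors|Receive CRC errors|Receive missed errors"
done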

commands when VC is not accessible

services.sh restart (restart all management agents on the ESXi host)

curl -v telnet://<target-ip>:<port> (test port connectivity from the VCSA)

service-control --list
service-control --status
service-control --stop vsphere-client
service-control --start vsphere-client
service-control --stop --all
service-control --start --all
service-control --status vsphere-client

-----------------------------------------------------

to connect to VC from PowerShell (PowerCLI)
-------------------------------------------
Connect-VIServer glusca-sp220001.novartis.net
Get-VM

command to create a memory dump (copy vmss2core into the same directory as the .vmsn and .vmem files)
vmss2core-sb-8456865.exe -W8 GLDERU-SP410003-Snapshot1.vmsn GLDERU-SP410003-Snapshot1.vmem
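-W8 targets Windows 8/Server 2012 and later guests; for older Windows guests the plain -W flag is used, and -N for Linux guests. A sketch for a Linux guest with the same file pair:

vmss2core-sb-8456865.exe -N GLDERU-SP410003-Snapshot1.vmsn GLDERU-SP410003-Snapshot1.vmem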

to list powered-on VMs
---------------------------
vim-cmd vmsvc/getallvms (all registered VMs)

esxcli vm process list (running VMs with their world IDs)

esxcli vm process kill -t soft -w 12345 (12345 = world ID from the list above)
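when a soft kill does not work, the kill type can be escalated; a sketch of the usual sequence, with the world ID taken from esxcli vm process list and <vm-name>/<world-id> as placeholders:

esxcli vm process list | grep -A 1 "<vm-name>"
esxcli vm process kill -t soft -w <world-id>
esxcli vm process kill -t hard -w <world-id> (only if soft fails)
esxcli vm process kill -t force -w <world-id> (last resort)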


esxcfg-nics -l (complete host NIC detail)
esxcli network ip interface ipv4 get
esxcli network ip dns server list (host DNS detail)
esxcli network ip interface list
esxcli network ip connection list
vmkping 192.168.0.3 (test connectivity between VMkernel ports)
vmkping -d -s 8972 192.168.0.3 (check jumbo-frame connectivity; -d forbids fragmentation, and 8972 = 9000 minus 28 bytes of IP/ICMP headers)
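when the host has several VMkernel ports, -I pins the source interface so the test exercises the intended vmk; vmk1 below is a placeholder:

vmkping -I vmk1 -d -s 8972 192.168.0.3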

esxtop (live host performance: CPU, memory, network, storage)
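for an unattended capture, esxtop also runs in batch mode; a sketch that samples every 5 seconds, 12 times, into a CSV:

esxtop -b -d 5 -n 12 > /tmp/esxtop-capture.csv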

to list all storage device details for a host
-----------------------------------------------

esxcli storage core device list | egrep "^ *Display Name:|VAAI Status:"
esxcli storage filesystem list
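path state is often the next thing to check after the device list; a sketch that pulls just the device and state fields (field names assume the standard esxcli storage core path list output):

esxcli storage core path list | egrep "Device:|State:"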

command to fix a full SEAT (Stats, Events, Alarms, Tasks) partition on the VCSA

/usr/lib/applmgmt/support/scripts/autogrow.sh
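before and after running the script, the partition fill level can be checked from the appliance shell; /storage/seat is the usual SEAT mount point on the VCSA:

df -h /storage/seat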

Enter the ncli host ls command to check the host ID. The host ID is shown in the Id line.

ncli host edit id=4 enable-maintenance-mode=true (put the CVM into maintenance mode)

ncli host edit id=00058a90-eeba-aeb1-4dcc-xxxxxxxxxxxx enable-maintenance-mode=false

Enter the ncli host ls command to verify the CVM maintenance-mode state; true means it is in maintenance mode.
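a quick way to see only the relevant fields per host; the field names are assumed from the usual ncli host ls output and may differ by AOS version:

ncli host ls | egrep "Id|Name|Under Maintenance Mode"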

VCSA command
------------------

curl -v telnet://10.160.250.12:3289

-----------------------------------------------------------------------------------
$ allssh ncli alert ls
$ allssh df -h
$ allssh lsscsi
$ allssh list_disks
$ allssh "du -h -d 2 -x /home/nutanix/data |sort -h -r"
$ ncc health_checks hardware_checks disk_checks disk_online_check
$ ncc health_checks hardware_checks disk_checks metadata_mounted_check

---------------------------------
to shut down a CVM: cvm_shutdown -P now
to validate the CVM is back in the ring: nodetool -h 0 ring
ncc health_checks run_all
cluster status | grep -v UP
ncc health_checks system_checks cvm_services_status

ssh root@glbdto-sp280002.novartis.net (connect to the host from the CVM)

CVM exit from maintenance mode: ncli host edit id=6 enable-maintenance-mode="false"

svmips && hostips && ipmiips - Display all the IPs in the cluster (CVM, host, IPMI)
cluster status | grep -v UP - Display the status of the services on the Control VMs
nodetool -h 0 ring - Display whether the storage metadata ring is online
ncli host list - Display the hypervisor list
ncli cluster get-redundancy-state - Display the Redundancy Factor value
allssh date - Display the date on ALL the Control VMs
hostssh date - Display the date on ALL the hypervisors
ncli cluster info - Display the cluster details (number of nodes, cluster ID, VIP)
ncli ms list - Display the registered management servers (vCenter)
allssh ntpq -pn - Display the NTP stats
ncc --version - Display the Nutanix Cluster Check (NCC) version
ncc health_checks run_all - Run the Nutanix health checks
ncc log_collector run_all - Run the Nutanix log collection (Logbay is the future tool)

command to clear Prism alerts from the CVM


-----------------------------------------------
alert_tool -op clear-alerts

allssh rm /home/nutanix/data/binary_logs/*
allssh rm /home/nutanix/data/cores/*
allssh rm /home/nutanix/data/ncc/installer/*
allssh rm /home/nutanix/data/log_collector/*
allssh rm /home/nutanix/data/logs/sysstats/*.tgz
allssh rm /home/nutanix/data/logs/*.tgz
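after the cleanup, a quick sanity check that space was actually reclaimed on every CVM:

allssh "df -h /home"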

allssh 'sudo journalctl --vacuum-size=512M'


allssh "sudo sed -i 's/1024M/512M/' /etc/systemd/journald.conf"
allssh 'sudo systemctl restart systemd-journald'

allssh "genesis stop ergon go_ergon"; cluster start
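once ergon is back, pending tasks can be reviewed to confirm it is processing again; a sketch assuming the ecli task CLI is available on the CVM:

ecli task.list include_completed=false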

to reset the Prism admin password from the CVM


ncli user reset-password user-name='admin' password='NibrDocker4all!'

command to check Prism Central connectivity from the CVM


nc -zv 147.167.116.57 9300


-----------------------------------------------

Hi All,

Please find below the workaround from Nutanix in case you see high CPU utilization on a host and it is timing out. The steps below kill the get_one_time_password script and can be executed from the impacted ESXi host.

Workaround (official, from the Nutanix side):

This command identifies the processes spawned by get_one_time_password:
for pid in $(ps -Tcjstv | grep get_one_time_password | grep -v grep | awk '{print $1}'); do echo $pid; done

This command kills those processes on the host to reduce its CPU utilization:
for pid in $(ps -Tcjstv | grep get_one_time_password | grep -v grep | awk '{print $1}'); do kill $pid; done
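a quick follow-up check that nothing is left running (the same pipeline, counted; 0 means the host is clean):

ps -Tcjstv | grep get_one_time_password | grep -v grep | wc -l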

----------------------------------------------------------------------

• Instructions to identify and delete the backup-tool-created snapshots.

1. Connect to vCenter through PowerShell (PowerCLI) from the super jump.
2. Find the backup snapshots with the command below. Three commands are provided because backups are currently configured with three different vendors, so choose the correct one.

NetBackup configured VCs:
Get-VM | Get-Snapshot | Where { $_.Name -like "NBU_SNAPSHOT*" } | Format-Table -Property VM, Name, Created, Description, SizeMB -AutoSize

TSM configured VCs:
Get-VM | Get-Snapshot | Where { $_.Name -like "TSM-VM*" } | Format-Table -Property VM, Name, Created, Description, SizeMB -AutoSize

Commvault configured VCs:
Get-VM | Get-Snapshot | Where { $_.Name -like "__GX_BACKUP__*" } | Format-Table -Property VM, Name, Created, Description, SizeMB -AutoSize
(OR) Get-VM | Get-Snapshot | Where { $_.Description -like "Snapshot created by Commvault*" } | Format-Table -Property VM, Name, Created, Description, SizeMB -AutoSize

3. Delete the snapshots with the command below; again, pick the one matching the backup vendor.

NetBackup configured VCs:
Get-VM | Get-Snapshot | Where {$_.Name -like "NBU_SNAPSHOT*"} | Remove-Snapshot -Confirm:$false -RunAsync

TSM configured VCs:
Get-VM | Get-Snapshot | Where {$_.Name -like "TSM-VM*"} | Remove-Snapshot -Confirm:$false -RunAsync

Commvault configured VCs:
Get-VM | Get-Snapshot | Where {$_.Name -like "__GX_BACKUP__*"} | Remove-Snapshot -Confirm:$false -RunAsync
(OR) Get-VM | Get-Snapshot | Where {$_.Description -like "Snapshot created by Commvault*"} | Remove-Snapshot -Confirm:$false -RunAsync

-RunAsync returns immediately; the running deletion tasks can be watched with Get-Task.
