Cisco Virtual Interface Card nfnic driver Release Notes

Driver Name: nfnic
Driver Version: 5.0.0.43

Compatible ESX Version(s):
ESXi 7.0, including all updates available as of this release

Dependencies:
Cisco UCS Virtual Interface Card 15xx firmware version: 5.2.3x (FC-NVMe and FC-SCSI support)
Cisco UCS Virtual Interface Card 14xx firmware version: 5.2.3x (FC-NVMe and FC-SCSI support)
Cisco UCS Virtual Interface Card 13xx firmware version: 4.5.3x (FC-SCSI support)
Cisco UCS Virtual Interface Card 12xx firmware version: 4.4.1x (FC-SCSI support)

New Features:
None

Fixes and enhancements (since nfnic 5.0.0.40):

CSCwh58137 : Support VVol secondary LUN reset
CSCwh50641 : Change back log level in fnic_discover
CSCwh22665 : ESXi PSOD during VMK_ASSERT(io_req) when io_req is NULL
CSCwh20158 : Update Riga, Turku+, Zurich (Beverly) Adaptor Models for FDMI ESX FNIC Output
CSCwf80532 : NFNIC Logging - ADISC BA_RJT reason code value switched with reason code explanation
CSCwf45325 : ESXi PSOD during SCSIUnclaimPath after fnic_queuecommand
CSCwe63624 : Support VIC15235 and VIC15425 Adaptor for FDMI

Known Issues and Workarounds:

1) FC-NVMe: ESXi 7.0 currently supports only a 512B namespace block size.
It does not support the default 4KB block size that some storage vendors provide.
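
One way to confirm a namespace's in-use block size (a suggested check, not part of the original notes; it assumes a Linux host with nvme-cli installed that can see the namespace, and /dev/nvme0n1 is a placeholder device name):

# nvme id-ns /dev/nvme0n1 --human-readable | grep "in use"

The LBA format flagged "(in use)" reports the data size; 512 bytes is the size ESXi 7.0 supports.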

2) FC-NVMe: The URL below shows how to enable FC-NVMe with ANA on ESXi 7.0:
https://docs.netapp.com/us-en/ontap-sanhost/nvme_esxi_7.html#validating-nvmefc

3) FC-NVMe: Configuration changes are required for the BUS BUSY issue (failed H:0x2 D:0x0 P:0x0).

Follow the steps below to resolve the (H:0x2 D:0x0 P:0x0) error in vmkernel.log:
----------------------------------------------------------------------------
Run the command below to display all controllers discovered from the ESXi host:
# esxcli nvme controller list

Then run the command below to check the number of I/O queues and the queue size of any one controller:
# vsish -e get /vmkModules/vmknvme/controllers/<controller number>/info

All controllers on the same target support the same queue size.


E.g.
Number of Queues: 4
Queue Size: 32
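
For example, if the controller list reported controller number 256 (a placeholder; substitute the number from your own output), the query would be:

# vsish -e get /vmkModules/vmknvme/controllers/256/info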

Suggested tuning inside the VMs:

Change the queue_depth of all NVMe-backed devices on the VMs to match the controller's 'Queue Size'.

E.g., on a RHEL setup:

# echo 32 > /sys/block/sdb/device/queue_depth

Then run cat /sys/block/sdb/device/queue_depth to verify that queue_depth is set to 32.
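
To apply the same value to several devices at once, a minimal sketch (the sd[b-e] range is a placeholder for your NVMe-backed disks; replace 32 with your controller's 'Queue Size'):

# for d in /sys/block/sd[b-e]; do echo 32 > "$d/device/queue_depth"; done
# grep . /sys/block/sd[b-e]/device/queue_depth

The grep verifies each device's queue_depth in a single pass.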

Without the queue_depth tuning, throughput may decrease under I/O stress, and customers may also see BUS_BUSY errors (failed H:0x2 D:0x0 P:0x0) in the vmkernel log file.

Additional configuration options supported by the driver:

Customers need to set the "Adapter Policy" to "FCNVMeInitiator" to create an FC-NVMe adapter.
The "Adapter Policy" can be found under Server "Service Profile -> Storage -> Modify vHBAs".
