83935-How To Deploy A Nexus 1000v Lab With VMware Workstation
*Details of third-party OS & application installation are beyond the scope of this guide.
*This guide will use a single flat private network. Advanced options can include separate (VLAN)
networks.
Workstation Requirements:
-Java installed
a. VMware Workstation should already be installed and working. You’ll want at least 75
GB of free storage on your workstation for VMs.
b. Create a new Virtual Machine, using the Custom (advanced) option – Next.
c. Set the HW compatibility to Workstation 9.0 – Next.
d. Set the path to your ESXi 5.x installation ISO – Next
e. Name your VM and set the location – Next
f. Set 2 processors, 1 core – Next
g. Configure at least 2GB memory. (Depends on total available memory in system) – Next
h. Set networking to use “Host-Only”. We don’t need external access for our VMs.
i. Keep LSI Logic (default) storage controller – Next
j. Create new virtual disk – Next
k. SCSI (default) – Next
l. Set the disk size (min. 10GB), or more if you plan to host VMs within the VMFS of the
Nested ESX host – Next.
m. Keep the default disk file name – Next
n. Review the configuration and check the option to power on the VM after creation – Finish
You may see warnings like these if you haven’t enabled Intel Virtualization Technology
(VT-x, and VT-d where applicable) in your workstation’s BIOS:
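In addition to the BIOS settings, the Nested ESX VM itself must have hardware virtualization exposed to it. In Workstation 9 this is typically the “Virtualize Intel VT-x/EPT or AMD-V/RVI” processor option, which maps to a .vmx flag roughly like the following sketch (flag names vary by hardware version):

```
# Sketch of the relevant .vmx entry for a nested ESX guest
# (hardware version 9; older versions used vhv.allow instead):
vhv.enable = "TRUE"
```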
g. Review the configuration and click Next. Be patient; deployment can take up to 15 minutes.
h. The next step will prompt you to add additional modules.
i. Select the hosts on which you wish to have the VEM agent installed. Click Next.
Note: This method requires VUM to be previously installed. If not, you’ll need to manually install the VEM
agent VIBs.
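If VUM isn’t available, the VEM VIB can be installed by hand on each Nested ESX host. A minimal sketch (the VIB filename below is a placeholder; use the file matching your 1000v release, copied to the host first):

```
# From an ESXi shell or SSH session on each Nested ESX host:
esxcli software vib install -v /tmp/<cisco-vem-vib-for-your-release>.vib
```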
You can monitor the progress in the VI Client Recent Tasks pane.
k. The install should complete successfully for all hosts.
**If the VEM installation fails, it likely points to a problem with VMware Update
Manager (VUM).
l. From the VI Client go to Home -> Inventory -> Networking and you should see your two
new hosts as part of the 1000v DVS. Ensure you click on the 1000v DVS in the left pane.
N1000v# show module
Mod  Sw                  Hw
---  ------------------  ------------------------------------------------
1    4.2(1)SV2(1.1)      0.0
2    4.2(1)SV2(1.1)      0.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP        Server-UUID  Server-Name
---  ---------------  -----------  -----------
1    10.85.49.220     NA           NA
2    10.85.49.220     NA           NA
a. Cold migrate (powered off) your Test VM to one of the Nested ESX hosts. If you get any errors, you likely:
- Didn’t set your Nested ESX VM to Virtual Machine Version 9 prior to install
- Didn’t enable the “Expose NX/XD flag to guest” option on the Nested ESX VM.
b. Before we power it up, we’re going to create a Port Profile for it on the 1000v.
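A minimal vEthernet port profile for the Test VM might look like the following sketch (the rhel-pp name matches the profile used later in this guide; VLAN 711 is assumed from the uplink output shown further down):

```
N1000v# configure terminal
N1000v(config)# port-profile type vethernet rhel-pp
N1000v(config-port-prof)# switchport mode access
N1000v(config-port-prof)# switchport access vlan 711
N1000v(config-port-prof)# no shutdown
N1000v(config-port-prof)# state enabled
N1000v(config-port-prof)# vmware port-group
```

Once state enabled and vmware port-group are set, the profile is pushed to vCenter as a port group you can select in the VM’s network settings.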
c. Now change the virtual network binding of your test VM from the vSwitch to the 1000v port profile.
d. Power up your test VM, and verify the interface on the 1000v.
-------------------------------------------------------------------------------
Port Adapter Owner Mod Host
-------------------------------------------------------------------------------
Veth1 Net Adapter 1 RHEL62-Test-1 3 10.85.49.218
N1000v(config)#
Assuming your networking & port profiles are set up correctly, you should have connectivity to your Test VM.
5. Advanced Configuration (optional)
a. Now that we have basic connectivity, let’s add the remaining uplinks to your Nested ESX VEM hosts.
Select the host – Configuration – Networking – vSphere Distributed Switch tab – Manage Physical Adapters
b. Find the uplink port profile and click “Add NIC”. Add each of the 2 remaining NICs from each host.
c. Verify the uplinks on the 1000v. Your uplink port profile should be configured for MAC pinning, in which case you should
see two new port channels created automatically.
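For reference, an uplink (Ethernet-type) port profile configured for MAC pinning generally resembles this sketch (the profile name is hypothetical; VLAN 711 is taken from the interface output below):

```
N1000v(config)# port-profile type ethernet uplink-pp
N1000v(config-port-prof)# switchport mode trunk
N1000v(config-port-prof)# switchport trunk allowed vlan 711
N1000v(config-port-prof)# channel-group auto mode on mac-pinning
N1000v(config-port-prof)# no shutdown
N1000v(config-port-prof)# state enabled
N1000v(config-port-prof)# vmware port-group
```

The channel-group auto mode on mac-pinning line is what causes a port channel to be built automatically as NICs are added to the profile.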
--------------------------------------------------------------------------------
Port VRF Status IP Address Speed MTU
--------------------------------------------------------------------------------
mgmt0 -- up 10.85.49.220 1000 1500
--------------------------------------------------------------------------------
Ethernet VLAN Type Mode Status Reason Speed Port
Interface Ch #
--------------------------------------------------------------------------------
Eth3/2 711 eth trunk up none 1000 1
Eth3/3 711 eth trunk up none 1000 1
Eth3/4 711 eth trunk up none 1000 1
Eth4/2 711 eth trunk up none 1000 2
Eth4/3 711 eth trunk up none 1000 2
Eth4/4 711 eth trunk up none 1000 2
--------------------------------------------------------------------------------
Port-channel VLAN Type Mode Status Reason Speed Protocol
Interface
--------------------------------------------------------------------------------
Po1 711 eth trunk up none a-1000(D) none
Po2 711 eth trunk up none a-1000(D) none
<snip>
6. Exercise - Determine which uplink your Test VM is utilizing
-------------------------------------------------------------------------------
Port Adapter Owner Mod Host
-------------------------------------------------------------------------------
Veth1 Net Adapter 1 RHEL62-Test-1 3 10.85.49.218
ii. Identify the Sub Group IDs of all uplinks on that host.
You can see from the output, SGID 1 = vmnic1, 2 = vmnic2 and 3 = vmnic3
iii. Find the VM’s pinned Sub Group ID in the same output.
From this we can see that the Test VM is assigned to SGID 2, which will use vmnic2 for external
communication.
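The SGID-to-vmnic mapping referenced above can be queried from the VSM using the vemcmd passthrough (module number 3 is assumed from the earlier output; the exact columns vary by VEM release):

```
N1000v# module vem 3 execute vemcmd show port
```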
7. Pin the Test VM to a preferred uplink
a. First, determine the SGID of vmnic1. From our previous output, this would be SGID “1”.
b. Configure either the port profile or the individual interface to “prefer” this Sub Group.
N1000v(config)# port-profile rhel-pp
N1000v(config-port-prof)# pinning id 1
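To confirm the new pinning took effect, one option (assuming this vemcmd is available on your VEM release) is:

```
N1000v# module vem 3 execute vemcmd show pinning
```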
8. Explore & play with various features - ACLs, QoS, PVLANs, etc!