Disclaimer:
- This is for lunar.lab only and is not intended as a guide for production environments. I’m aware that some of the configurations I use here are not supported.
- Everything is deployed on a vSphere environment.
- I use a nested environment.
- The official documentation is here: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-414C33B3-674F-44E0-94B8-BFC0B9056B33.html
- The documentation is quite clear; I’m just adding some notes specific to my deployment.
Step-by-step:
- Deploy
- NSX-T Manager: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-FA0ABBBD-34D8-4DA9-882D-085E7E0D269E.html
- NSX-T Controller (for lab purposes I deploy only one NSX-T Controller): https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-24428FD4-EC8F-4063-9CF9-D8136740963A.html
- NSX-T Edge (for lab purposes I deploy only one NSX-T Edge): Please note that PKS requires the Large size (8 vCPU, 16 GB RAM) for the NSX-T Edge node. https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-AECC66D0-C968-4EF2-9CAD-7772B0245BF6.html
- During the wizard, you’ll need to select which port group to use for each NSX Edge interface. My configuration goes like this:
- Network 0 connects to the management network, i.e. the same port group that NSX Manager, NSX Controller, vCenter Server, and the ESXi management interfaces connect to.
- Network 1 connects to a dedicated port group which will be used for the overlay network (GENEVE).
- This is where you need to ensure that the port group is connected to a port configured with a minimum MTU of 1600. Once transport nodes are up, you can verify the path from an ESXi host with something like vmkping ++netstack=vxlan -d -s 1572 <remote TEP IP> (don’t fragment, 1572-byte payload ≈ 1600 minus IP/ICMP headers).
- Network 2 connects to the port group which will be the uplink to the external network (VLAN-based).
- Network 3 is not used, but you still need to connect it to a port group.
- For more information, please refer to this document: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-370D06E1-1BB6-4144-A654-7AF2542C3136.html
- By default, the deployed NSX-T components have CPU and memory reservations configured. You may want to remove these in a lab environment.
- Join NSX Controller with NSX Manager: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-05434745-7D74-4DA8-A68E-9FE17093DA7B.html
- Command to enable SSH: start service ssh (use set service ssh start-on-boot to keep it enabled across reboots)
- Initialize the Control Cluster to create a Control Cluster Master: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-273F6344-7212-4105-9FBA-A872CD75803F.html#GUID-273F6344-7212-4105-9FBA-A872CD75803F
- If HA is required, this step should be followed by joining additional NSX Controllers to the Control Cluster Master: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-8A8394EB-9D4A-4F13-AE91-8CFDD10334D4.html#GUID-8A8394EB-9D4A-4F13-AE91-8CFDD10334D4
- Join NSX Edge with Management Plane: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-11BB4CF9-BC1D-4A76-A32A-AD4C98CBF25B.html
- Create Transport Zone: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-F739DC79-4358-49F4-9C58-812475F33A66.html
- Create TZ-Overlay
- NVDS Name: NVDS-Overlay
- NVDS Type: Standard
- Traffic Type: Overlay
- Create TZ-VLAN
- NVDS Name: NVDS-VLAN
- NVDS Type: Standard
- Traffic Type: VLAN
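The same two transport zones can alternatively be created through the NSX-T REST API instead of the UI. A minimal sketch of the request bodies, using the names above (endpoint and field names are from the NSX-T 2.x API; verify against the API reference for your exact build):

```python
import json

# Request bodies for POST https://<nsx-manager>/api/v1/transport-zones
tz_overlay = {
    "display_name": "TZ-Overlay",
    "host_switch_name": "NVDS-Overlay",  # N-VDS created on transport nodes
    "transport_type": "OVERLAY",         # GENEVE-encapsulated traffic
}

tz_vlan = {
    "display_name": "TZ-VLAN",
    "host_switch_name": "NVDS-VLAN",
    "transport_type": "VLAN",            # untunneled, VLAN-backed traffic
}

for body in (tz_overlay, tz_vlan):
    print(json.dumps(body, indent=2))
```

Each body can then be POSTed with something like curl -k -u admin -H 'Content-Type: application/json' -d @tz.json https://<nsx-manager>/api/v1/transport-zones.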
- Create Uplink Profile: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-50FDFDFB-F660-4269-9503-39AE2BBA95B4.html
- Create New Uplink Profile
- Name: Single Uplink
- Active Uplinks (Teaming): Uplink01
- This is only a profile; you’ll need to associate it with a specific interface in a later step.
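The equivalent uplink profile can also be sketched as an API request body (POST /api/v1/host-switch-profiles): a single active uplink with failover-order teaming and no standby. Field names are from the NSX-T 2.x API reference; double-check against your version:

```python
import json

# Request body for POST https://<nsx-manager>/api/v1/host-switch-profiles
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "Single Uplink",
    "mtu": 1600,  # overlay traffic needs at least 1600 for GENEVE headroom
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "Uplink01", "uplink_type": "PNIC"}],
        "standby_list": [],  # single-uplink lab: nothing to fail over to
    },
}

print(json.dumps(uplink_profile, indent=2))
```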
- Create Tunnel Endpoint IP Pool: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-E7F7322D-D09B-481A-BD56-F1270D7C9692.html
- Add New IP Pool
- Name: TEP IP Pool
- Add Subnets
- IP Range: 192.168.99.11-192.168.99.30
- CIDR: 192.168.99.0/24
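The TEP pool can likewise be expressed as an API request body (POST /api/v1/pools/ip-pools), using the range and CIDR above; again, field names should be checked against your version's API reference:

```python
import json

# Request body for POST https://<nsx-manager>/api/v1/pools/ip-pools
tep_pool = {
    "display_name": "TEP IP Pool",
    "subnets": [
        {
            "allocation_ranges": [
                {"start": "192.168.99.11", "end": "192.168.99.30"}
            ],
            "cidr": "192.168.99.0/24",
            # a gateway_ip is only needed if TEPs must reach other subnets
        }
    ],
}

print(json.dumps(tep_pool, indent=2))
```

The range .11–.30 yields 20 tunnel endpoint addresses, plenty for one edge plus a handful of hosts in a lab.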
- Add a Compute Manager: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-D225CAFC-04D4-44A7-9A09-7C365AAFCA0E.html
- Add the vCenter Server that manages the target compute nodes to the NSX fabric.
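Registering the compute manager maps to POST /api/v1/fabric/compute-managers. A sketch of the request body — the hostname, username, password, and thumbprint below are placeholders for your own environment, and the field names should be verified against your version's API reference:

```python
import json

# Request body for POST https://<nsx-manager>/api/v1/fabric/compute-managers
# All values below are illustrative placeholders.
compute_manager = {
    "display_name": "vcenter",
    "server": "<vcenter-fqdn-or-ip>",
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "<vcenter-password>",
        "thumbprint": "<vcenter-certificate-thumbprint>",
    },
}

print(json.dumps(compute_manager, indent=2))
```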
- Install NSX on ESXi Hosts
- In NSX Manager, go to Fabric > Nodes > Hosts [1]
- Change "Managed by" to the vCenter Server added in the previous step [2]
- Expand the discovered vSphere cluster [3]
- Select all ESXi hosts that need to participate as NSX compute nodes [4]
- Click Install NSX [5]
- The ESXi hosts should have a minimum of 8 GB of RAM each. I tried with only 6 GB of RAM per host and the installation failed.
Figure 1 - Install NSX on ESXi Hosts
- Add Transport Nodes: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-53295329-F02F-44D7-A6E0-2E3A9FAE6CF9.html#GUID-53295329-F02F-44D7-A6E0-2E3A9FAE6CF9
- Create NSX Edge Transport Node
- Select NSX Edge as Node
- Select both Transport Zones [1]
Figure 2 - Select both Transport Zones for NSX Edge
- TZ-Overlay
- Edge Switch Name: NVDS-Overlay [2]
- Uplink Profile: Single Uplink [3]
- IP Assignment: Use IP Pool [4]
- IP Pool: TEP IP Pool [5]
- Virtual NIC: fp-eth0 [6] as Uplink01 [7]
- This is the Network 1 you configured in the beginning
Figure 3 - Configure NVDS-Overlay for NSX Edge Transport Node
- TZ-VLAN
- Edge Switch Name: NVDS-VLAN [8]
- Uplink Profile: Single Uplink [9]
- Virtual NIC: fp-eth1 [10] as Uplink01 [11]
- This is the Network 2 you configured in the beginning
Figure 4 - Configure NVDS-VLAN for NSX Edge Transport Node
- Create Host Transport Node: add each ESXi host as a transport node
- Select TZ-Overlay only [1]
Figure 5 - Select Transport Zone TZ-Overlay only for Host Transport Node
- Edge Switch Name: NVDS-Overlay [2]
- Uplink Profile: Single Uplink [3]
- IP Assignment: Use IP Pool [4]
- IP Pool: TEP IP Pool [5]
- Physical NIC: select the vmnic you want to use as the overlay network uplink for Uplink01 [6]
Figure 6 - Configure NVDS-Overlay for Host Transport Node
- NOTE: You can also define the Uplink Profile and IP Pool within this step
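For reference, a host transport node maps to POST /api/v1/transport-nodes. The sketch below ties together the objects created earlier; the UUID placeholders must be looked up first (e.g. via GET /api/v1/fabric/nodes, /api/v1/transport-zones, /api/v1/host-switch-profiles, /api/v1/pools/ip-pools), and this schema changed across NSX-T releases, so treat it as an outline to check against your version's API reference:

```python
import json

# Sketch of a request body for POST https://<nsx-manager>/api/v1/transport-nodes
# "esxi-01", vmnic1, and all <...> UUIDs are placeholders for this lab.
host_transport_node = {
    "display_name": "esxi-01",
    "node_id": "<fabric-node-uuid>",  # the ESXi host's fabric node ID
    "host_switches": [
        {
            "host_switch_name": "NVDS-Overlay",
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "<single-uplink-profile-uuid>"}
            ],
            # map the chosen physical NIC to the profile's uplink name
            "pnics": [{"device_name": "vmnic1", "uplink_name": "Uplink01"}],
            "static_ip_pool_id": "<tep-ip-pool-uuid>",
        }
    ],
    "transport_zone_endpoints": [{"transport_zone_id": "<tz-overlay-uuid>"}],
}

print(json.dumps(host_transport_node, indent=2))
```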
- Create NSX Edge Cluster: https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-898099FC-4ED2-4553-809D-B81B494B67E7.html#GUID-898099FC-4ED2-4553-809D-B81B494B67E7
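The edge cluster step maps to POST /api/v1/edge-clusters. With a single edge in this lab the cluster has one member; more members can be added later for HA. The cluster name and UUID below are placeholders, and as before the field names should be verified against your version's API reference:

```python
import json

# Request body for POST https://<nsx-manager>/api/v1/edge-clusters
edge_cluster = {
    "display_name": "Edge-Cluster-01",  # placeholder name
    "members": [
        # the edge node's transport-node UUID from the previous step
        {"transport_node_id": "<edge-transport-node-uuid>"}
    ],
}

print(json.dumps(edge_cluster, indent=2))
```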
Ok, done for the deployment and initial configuration.
Not that hard!