I can’t be the first one confused about jumbo frames, mtu, and system mtu on the Nexus 1000v. After reading some excellent posts, all signs indicated that “system mtu” was designed to solve the “chicken and egg” problem of running the VSM on IP storage.
Like "system vlan", “system mtu” applies to system uplink profile only. So if VSM is not even running under VEM (it runs on vSwitch), there is no need to set “system mtu”, right?
Well, not quite. It turns out “system mtu” is still needed to preserve the connection to the VEM. Assuming jumbo frames are in use (for IP storage, for example), a reboot of the ESX host reverts the physical NIC to the default MTU of 1500, which results in a mismatched MTU between the physical NIC and the virtual NIC, and a loss of connectivity. “system mtu” preserves the setting on the physical NIC across reboots, and thus prevents the VEM from disappearing.
To further clarify, here is an example of configuring a jumbo MTU of 9000 on the Nexus 1000v:
1. “system jumbomtu 9000” (global)
2. “system mtu 9000” (uplink port profile)
That is all. Note that once set, “system mtu” overrides “mtu”, so there is no need to set the interface MTU explicitly.
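Put together, the relevant piece of the configuration might look something like the sketch below. This is only an illustration: the port profile name “system-uplink” and the VLAN numbers 10 and 20 are made-up placeholders, not anything from an actual setup.

    ! placeholder names: port profile "system-uplink", VLANs 10 and 20
    system jumbomtu 9000
    !
    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 10,20
      system vlan 10,20
      system mtu 9000
      no shutdown
      state enabled

With something along these lines in place, the uplink keeps its 9000-byte MTU even after the ESX host reboots, which is the whole point of “system mtu”.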
A couple of potentially confusing points:
- The show commands on the Nexus 1000v are not entirely accurate for MTU; a fix is supposedly coming.
- There is an error in the Cisco Nexus command reference, which states “The value that is configured for system mtu command must be less than the value configured in the system jumbomtu command”. It should be “less than or equal to”; there is no reason to set system mtu to 8998 unless the hardware dictates it.
I hope that clears up some of the confusion. If you notice any behavior inconsistent with this understanding, please kindly let me know.