ESX Server, NIC Teaming, and VLAN Trunking
Before we get into the details, allow me to give credit where credit is due. First, thanks to Dan Parsons of IT Obsession for an article that jump-started the process with notes on the Cisco IOS configuration. Next, credit goes to the VMTN Forums, especially this thread, in which some extremely useful information was exchanged. I would be remiss if I did not adequately credit the sources for the information that helped make this testing successful.
There are actually two different pieces described in this article. The first is NIC teaming, in which we logically bind together multiple physical NICs for increased throughput and increased fault tolerance. The second is VLAN trunking, in which we configure the physical switch to pass VLAN traffic directly to ESX Server, which will then distribute the traffic according to the port groups and VLAN IDs configured on the server. I wrote about ESX and VLAN trunking a long time ago and ran into some issues then; here I’ll describe how to work around those issues.
So, let’s have a look at the two pieces. We’ll start with NIC teaming.
Configuring NIC Teaming
There’s a bit of confusion regarding NIC teaming in ESX Server and when switch support is required. You can most certainly create NIC teams (or “bonds”) in ESX Server without any switch support whatsoever. Once those NIC teams have been created, you can configure load balancing and failover policies. However, those policies will affect outbound traffic only. In order to control inbound traffic, we have to get the physical switches involved. This article is written from the perspective of using Cisco Catalyst IOS-based physical switches. (In my testing I used a Catalyst 3560.)
To create a NIC team that will work for both inbound and outbound traffic, we’ll create a port channel using the following commands:
s3(config)#int port-channel1
s3(config-if)#description NIC team for ESX server
s3(config-if)#int gi0/23
s3(config-if)#channel-group 1 mode on
s3(config-if)#int gi0/24
s3(config-if)#channel-group 1 mode on
This creates port-channel1 (you’d need to change this name if you already have port-channel1 defined, perhaps for switch-to-switch trunk aggregation) and assigns GigabitEthernet0/23 and GigabitEthernet0/24 to the team. Now, however, you need to ensure that the load balancing mechanism used by both the switch and ESX Server matches. To find out the switch’s current load balancing mechanism, use this command in enable mode:
show etherchannel load-balance
This will report the current load balancing algorithm in use by the switch. On my Catalyst 3560 running IOS 12.2(25), the default load balancing algorithm was set to “Source MAC Address”. On my ESX Server 3.0.1 server, the default load balancing mechanism was set to “Route based on the originating virtual port ID”. The result? The NIC team didn’t work at all: I couldn’t ping any of the VMs on the host, and the VMs couldn’t reach the rest of the physical network. It wasn’t until I matched up the switch/server load balancing algorithms that things started working.
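Before chasing load balancing mismatches, it’s also worth confirming that the physical ports actually bundled into the port channel. A quick check from enable mode (shown here against the port channel created above) is:

s3#show etherchannel 1 summary

Gi0/23 and Gi0/24 should show up as bundled members of Po1; if they don’t, revisit the channel-group configuration before worrying about the load balancing algorithm.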
To set the switch load-balancing algorithm, use one of the following commands in global configuration mode:
port-channel load-balance src-dst-ip (to enable IP-bad load balancing)
port-channel load-balance src-mac (to enable MAC-bad load balancing)
There are other options available, but these are the two that seem to match most closely to the ESX Server options. I was unable to make this work at all without switching the configuration to “src-dst-ip” on the switch side and “Route based on ip hash” on the ESX Server side. From what I’ve been able to gather, the “src-dst-ip” option gives you better utilization across the members of the NIC team than some of the other options. (Anyone care to contribute a URL that provides some definitive information on that statement?)
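Putting the switch side of that change together, the sequence looks something like this (assuming the same Catalyst 3560 syntax used above; double-check the available keywords on your IOS version):

s3(config)#port-channel load-balance src-dst-ip
s3(config)#end
s3#show etherchannel load-balance

The final command should now report src-dst-ip as the algorithm in use.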
Creating the NIC team on the ESX Server side is as simple as adding physical NICs to the vSwitch and setting the load balancing policy appropriately. At this point, the NIC team should be working.
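As a rough sketch, here’s what adding the uplinks looks like from the ESX Server 3.x service console, assuming vSwitch0 is the virtual switch in question and vmnic1/vmnic2 are the physical NICs to be teamed (your vSwitch and vmnic names will almost certainly differ):

esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0
esxcfg-vswitch -l

The last command simply lists the vSwitch configuration so you can confirm both uplinks are attached. The load balancing policy itself (“Route based on ip hash”) is set through the VI Client, on the NIC Teaming tab of the vSwitch’s properties.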
Configuring VLAN Trunking
In my testing, I set up the NIC team and the VLAN trunk at the same time. When I ran into connectivity issues as a result of the mismatched load balancing policies, I thought they were VLAN-related issues, so I spent a fair amount of time troubleshooting the VLAN side of things. It turns out, of course, that it wasn’t the VLAN configuration at all. (In addition, one of the VMs that I was testing had some issues as well, and that contributed to my initial difficulties.)
To configure the VLAN trunking, use the following commands on the physical switch:
s3(config)#int port-channel1
s3(config-if)#switchport trunk encapsulation dot1q
s3(config-if)#switchport trunk allowed vlan all
s3(config-if)#switchport mode trunk
s3(config-if)#switchport trunk native vlan 4094
This configures the NIC team (port-channel1, as created earlier) as an 802.1q VLAN trunk. You then need to repeat this process for the member ports in the NIC team: