The essence of VLANs is this: say we have two VMs. They can talk to each other over the network, and we don't want that - for example, for security reasons. We can use VLANs - then the switch, in addition to MAC/IP addresses, will also check a separate field that carries the so-called VLAN ID, a number from 0 to 4094. Packets with the wrong VLAN ID will not be delivered to ports where they don't belong. In my example, we can create two port groups on a single vSwitch, assign them different VLANs, and forget about the security problem: since these two VMs are in different VLANs, they cannot talk to each other no matter how hard they try. The two machines in the example could just as well be physical, or only one of them could be physical - it doesn't matter.
How VLANs work in ESX: for a port group we can specify a "VLAN ID". After that, packets with a different VLAN ID, or with no VLAN ID at all, are not forwarded to those virtual ports.
If every port group has some VLAN ID specified, untagged traffic (traffic without a VLAN ID) is simply dropped by ESX. If at least one port group is configured not to use a VLAN, untagged traffic is not dropped:
A packet whose VLAN ID matches the VLAN ID of one of the port groups is forwarded to that port group.
A packet whose VLAN ID matches none of the port groups is dropped.
A packet without a VLAN ID is delivered only to those port groups that are configured not to use a VLAN.
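As a minimal sketch, assuming a vSwitch0 and made-up port group names, such a layout can be put together from the service console with esxcfg-vswitch:
esxcfg-vswitch -A "VLAN20" vSwitch0          # add a port group named VLAN20
esxcfg-vswitch -v 20 -p "VLAN20" vSwitch0    # tag it with VLAN ID 20
esxcfg-vswitch -A "NoVLAN" vSwitch0          # a port group with no VLAN ID - untagged traffic lands here
esxcfg-vswitch -l                            # list the resulting configuration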
ESX supports VLANs, but these VLANs can be organized in different ways:
Tagging is done on the virtual switches - Virtual Switch Tagging (VST).
Tagging is done on the physical switches - External Switch Tagging (EST).
Tagging is done inside the VM - the appropriate drivers have to be installed there - Virtual Guest Tagging (VGT).
VST - the physical switches treat the virtual switches as their peers: tagged traffic simply arrives from them, and the physical switch has to handle it properly, which means the ESX host's physical NICs must be plugged into "trunk" ports on the switch. On the ESX side there must be at least one port group for each VLAN in use.
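On the physical side that means something like this on a Cisco IOS switch (the port and VLAN numbers are just examples):
(config)# interface GigabitEthernet0/23
(config-if)# switchport trunk encapsulation dot1q
(config-if)# switchport mode trunk
(config-if)# switchport trunk allowed vlan 20,30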
EST - all tagging is done on the physical switches, i.e. their ports are configured accordingly. In this case every VLAN needs its own physical NIC (on the ESX side) and, consequently, its own vSwitch. This approach is nice in that it is natural - it is exactly what we do without any virtualization; the downside is that the ESX host may need a lot of NICs.
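The physical ports in this case are ordinary access ports, one per VLAN - roughly like this (port and VLAN numbers are again examples):
(config)# interface GigabitEthernet0/23
(config-if)# switchport mode access
(config-if)# switchport access vlan 20
(config-if)# interface GigabitEthernet0/24
(config-if)# switchport mode access
(config-if)# switchport access vlan 30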
VGT - the switches pass tagged frames straight through, all the way to the guest OSes. For the vSwitches to work in this mode, a single port group for all the VMs from the different VLANs is enough; its VLAN ID has to be set to 4095.
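On the ESX side the VGT port group might look like this (the port group name is made up):
esxcfg-vswitch -A "AllVLANs" vSwitch0
esxcfg-vswitch -v 4095 -p "AllVLANs" vSwitch0    # 4095 = pass VLAN tags through to the guest OS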
When to use which: naturally, each of these approaches has its pros and cons, and which one is best depends heavily on the existing infrastructure. For example, if a VM needs to participate in several VLANs, VGT is the simplest way to arrange that. By the way, I don't know whether such drivers exist for Windows. VST and EST can be used at the same time - again, it all comes down to the task at hand and to what the external infrastructure allows.
VMware recommends not using the Native VLAN on the physical side if VLANs are not in use. There is also an opinion that "1" should not be used as a VLAN ID, since that is the VLAN ID of the Cisco Native VLAN.
Regarding NIC teaming, I came across the opinion that "VMWare misleads people to believe that their nic teaming is LACP" and that "However, ether channels work better per our testing." In other words, it is better to do what link "two" below describes: configure an EtherChannel on the Cisco side and enable IP-hash load balancing on the ESX side.
Related links: one, two (describes the experience of setting up teaming plus VLANs, with interesting comments), three, four.
It was December 2006 when I first published this article on using NIC teams and VLANs with ESX Server.
As you can see in the "Top Posts” section in the sidebar, that article
has since claimed the top position in the most popular post here on this
blog. Note that "most popular” does not translate into "most
commented”; that distinction falls to one of the Linux-AD integration
articles, although I'm not sure which one right at the moment.
In that previous article, I demonstrated the use of a "dummy VLAN” which was set as the native VLAN for the VLAN trunk, like so:
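In essence, a trunk port configuration along these lines (the interface number here is just an example, and 4094 stands in for the dummy VLAN):
(config)# interface GigabitEthernet0/23
(config-if)# switchport trunk encapsulation dot1q
(config-if)# switchport mode trunk
(config-if)# switchport trunk native vlan 4094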
The idea behind the dummy VLAN was this: because ESX Server needed—or
so I thought—all the traffic to be tagged as it came across the VLAN
trunk, creating a VLAN that is never used and setting it as the native
VLAN solves our problem. Remember that the native VLAN is the VLAN whose
traffic is not tagged as it travels across the trunk into ESX Server or another physical switch.
It turns out that I was actually mistaken—sort of. It’s true that the
native VLAN won’t get tagged, yes; however, it’s not true that ESX
Server requires all the traffic to be tagged. What was missing in my
configuration was, quite simply, a port group that was intended to
receive untagged traffic.
Configuring ESX Server to support VLANs involves the creation of one
or more port groups configured with matching VLAN IDs. If a port group
has a VLAN ID, it will essentially only accept traffic tagged with that
VLAN ID. Traffic not tagged with that VLAN ID, or untagged traffic, will
be ignored. So, if you create a series of port groups on a vSwitch for
your various VLANs but neglect to create a port group that does not have
a VLAN ID specified, untagged traffic will be ignored because there are
no port groups configured to receive untagged traffic.
If, on the other hand, you create a series of port groups for your various VLANs and
you create a port group that does not have a VLAN ID specified, then
both tagged and untagged traffic will be handled correctly:
Tagged traffic with a VLAN ID matching one of the configured port groups will be sent to that port group
Tagged traffic with a VLAN ID not matching any configured port group will be ignored
Untagged traffic will be directed to the port group that does not have a VLAN ID configured
Now, it is true that VMware’s best practices documents (sorry, I
don’t have a link for them at the moment) recommend that users avoid the
use of the native VLAN, and one of the CCIEs in my office indicated
that it is also considered a networking best practice to avoid the use
of VLAN 1, the default native VLAN on Cisco equipment, for anything
other than switch management traffic. With those things in mind, it may
not be an issue for many deployments.
Except…
…when using automated scripts to build and install ESX Server. You
see, after ESX Server is installed, then specifying a VLAN ID on the
Service Console port group is no big deal and it will work just fine, as
I described earlier. Before ESX Server is installed, though,
there is no VLAN support and no way to specify a VLAN ID. Hence,
installations that need to download and install from an FTP server or an
NFS mount will fail, because the system won’t have any network
connectivity. (Everyone understands why, right? If you don’t, go back
and read the earlier paragraphs again.)
What’s the fix here? We come back, full circle, to the idea of the
default VLAN and untagged traffic. While the system won’t accept any
tagged traffic during the install process, it will happily accept
untagged traffic during the installation. Therefore, if you set the
native VLAN to be the VLAN to which the Service Console should be
connected once the installation is complete, then everything should work
just fine.
Don’t believe me? From the "Show Me” state? Perform this quick test yourself:
On a test ESX Server, configure the Service Console port group with a
VLAN ID of 0. The "esxcfg-vswitch” command is handy for this.
Set the switch port to which the Service Console is physically
connected to use a native VLAN different than the VLAN the Service
Console was previously using. A VLAN with DHCP present is ideal, as
you’ll see with the next step.
Using the "dhclient” command, try to obtain a DHCP lease. You should
get a DHCP lease for whatever subnet matches the default VLAN.
Repeat steps 2 and 3 and you should see the DHCP lease follow the
native VLAN configuration, i.e., whatever VLAN is set to native will be
the VLAN that issues a DHCP address to your Service Console.
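For reference, steps 1 and 3 might look roughly like this from the service console (assuming vSwitch0, a port group named "Service Console", and the vswif0 interface; adjust the names to your environment):
esxcfg-vswitch -v 0 -p "Service Console" vSwitch0   # step 1: clear the VLAN ID on the Service Console port group
esxcfg-vswitch -l                                   # confirm the port group no longer shows a VLAN ID
dhclient vswif0                                     # step 3: request a DHCP lease on the Service Console interface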
Hopefully, this helps clear up some of the misunderstanding and
confusion around the use of VLANs, VLAN trunks, port groups, and the
native—or untagged—VLAN. Feel free to hit me up in the comments if you
have any questions!
I also suggest use of native VLANs. In our initial data center rollout for a financial company, I needed to keep costs down, so I resolved to reduce the amount of high-end switch hardware we bought. Using native VLANs properly allowed me to do this while mitigating the ‘vlan hopping’ exploit.
Thanks for your feedback. I’m wondering if you’d be willing to
disclose a little bit of information on how the judicious use of native
VLANs allowed you to accomplish your goals? It would, I think, be useful
information to a lot of readers.
It's actually very simple, …and classic. We have three zones: Public, DMZ, and Inside. I created VLAN 20 for Public, 30 for DMZ, and VLAN 40 for Inside. Then I created VLANs dummy_vlan2, dummy_vlan3, and dummy_vlan4, then I went into the vlan database and suspended each. Then, I went to each access port (via the interface-range qualifier) and assigned the native VLAN and set the VLAN membership explicitly, nearly the same as you showed in your article (you showed an example of allowing all VLANs). Finally, I explicitly allowed only VLANs 20, 30, and 40 on the trunk.
I hope this is the appropriate place for it, but here is how I set it up, straight from the documentation I wrote.
——
+ Set up the Aggregated Ports
Set up PAGP on Switch01 side
# conf t
(config)# interface range gi 1/43 - 44
(config-if-range)# channel-group 1 mode desirable
Set up PAGP on Switch02 side
(config)# interface range gi 1/43 - 44
(config-if-range)# channel-group 1 mode desirable
port-channel 1 is the resulting virtual interface
The following command will verify what has been set up.
# show etherchannel summary
(config)# vlan 2
(config-vlan)# name dummy_vlan2
(config)# vlan 3
(config-vlan)# name dummy_vlan3
(config)# vlan 4
(config-vlan)# name dummy_vlan4
(config)# end
Suspend the dummy_vlans
#vlan database
#vlan 2 state suspend
#vlan 3 state suspend
#vlan 4 state suspend
Before we get into the details, allow me to give credit where credit is due. First, thanks to Dan Parsons of IT Obsession for an article that jump-started the process with notes on the Cisco IOS configuration. Next, credit goes to the VMTN Forums, especially this thread,
in which some extremely useful information was exchanged. I would be
remiss if I did not adequately credit these sources for the information
that helped make this testing successful.
There are actually two different pieces described in this article.
The first is NIC teaming, in which we logically bind together multiple
physical NICs for increased throughput and increased fault tolerance.
The second is VLAN trunking, in which we configure the physical switch
to pass VLAN traffic directly to ESX Server, which will then distribute the traffic according to the port groups and VLAN IDs configured on the server. I wrote about ESX and VLAN trunking a long time ago and ran into some issues then; here I’ll describe how to work around the issues I ran into at that time.
So, let’s have a look at these two pieces. We’ll start with NIC teaming.
Configuring NIC Teaming
There’s a bit of confusion regarding NIC teaming in ESX Server and
when switch support is required. You can most certainly create NIC teams
(or "bonds”) in ESX Server without any switch support whatsoever. Once
those NIC teams have been created, you can configure load balancing and
failover policies. However, those policies will affect outbound traffic only.
In order to control inbound traffic, we have to get the physical
switches involved. This article is written from the perspective of using
Cisco Catalyst IOS-based physical switches. (In my testing I used a
Catalyst 3560.)
To create a NIC team that will work for both inbound and outbound
traffic, we’ll create a port channel using the following commands:
s3(config)#int port-channel1
s3(config-if)#description NIC team for ESX server
s3(config-if)#int gi0/23
s3(config-if)#channel-group 1 mode on
s3(config-if)#int gi0/24
s3(config-if)#channel-group 1 mode on
This creates port-channel1 (you’d need to change this name if you
already have port-channel1 defined, perhaps for switch-to-switch trunk
aggregation) and assigns GigabitEthernet0/23 and GigabitEthernet0/24
into the team. Now, however, you need to ensure that the load balancing
mechanism that is used by both the switch and ESX Server matches. To
find out the switch’s current load balancing mechanism, use this command
in enable mode:
show etherchannel load-balance
This will report the current load balancing algorithm in use by the
switch. On my Catalyst 3560 running IOS 12.2(25), the default load
balancing algorithm was set to "Source MAC Address”. On my ESX Server
3.0.1 server, the default load balancing mechanism was set to "Route
based on the originating virtual port ID”. The result? The NIC team
didn’t work at all—I couldn’t ping any of the VMs on the host, and the
VMs couldn’t reach the rest of the physical network. It wasn’t until I
matched up the switch/server load balancing algorithms that things
started working.
To set the switch load-balancing algorithm, use one of the following commands in global configuration mode:
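On a Catalyst these are typically the src-mac and src-dst-ip settings:
s3(config)#port-channel load-balance src-mac
s3(config)#port-channel load-balance src-dst-ip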
There are other options available, but these are the two that seem to
match most closely to the ESX Server options. I was unable to make this
work at all without switching the configuration to "src-dst-ip” on the
switch side and "Route based on ip hash” on the ESX Server side. From
what I’ve been able to gather, the "src-dst-ip” option gives you better
utilization across the members of the NIC team than some of the other
options. (Anyone care to contribute a URL that provides some definitive
information on that statement?)
Creating the NIC team on the ESX Server side is as simple as adding
physical NICs to the vSwitch and setting the load balancing policy
appropriately. At this point, the NIC team should be working.
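From the service console, attaching the uplinks can be done with esxcfg-vswitch (vSwitch0 and the vmnic names here are assumptions; the load balancing policy itself is most easily set in the VI Client under the vSwitch's NIC teaming properties):
esxcfg-vswitch -L vmnic2 vSwitch0   # link a second physical NIC to the vSwitch
esxcfg-vswitch -L vmnic3 vSwitch0   # link a third, if available
esxcfg-vswitch -l                   # verify the uplinks are attached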
Configuring VLAN Trunking
In my testing, I set up the NIC team and the VLAN trunk at the same
time. When I ran into connectivity issues as a result of the mismatched
load balancing policies, I thought they were VLAN-related issues, so I
spent a fair amount of time troubleshooting the VLAN side of things. It
turns out, of course, that it wasn’t the VLAN configuration at all. (In
addition, one of the VMs that I was testing had some issues as well, and
that contributed to my initial difficulties.)
To configure the VLAN trunking, use the following commands on the physical switch:
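In essence, the standard dot1q trunk settings applied to the port channel (the native VLAN choice is explained below):
s3(config)#int port-channel1
s3(config-if)#switchport trunk encapsulation dot1q
s3(config-if)#switchport mode trunk
s3(config-if)#switchport trunk native vlan 4094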
This configures the NIC team (port-channel1, as created earlier) as an
802.1q VLAN trunk. You then need to repeat this process for the member
ports in the NIC team:
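That is, the same trunk settings go onto gi0/23 and gi0/24:
s3(config)#int gi0/23
s3(config-if)#switchport trunk encapsulation dot1q
s3(config-if)#switchport mode trunk
s3(config-if)#switchport trunk native vlan 4094
s3(config-if)#int gi0/24
s3(config-if)#switchport trunk encapsulation dot1q
s3(config-if)#switchport mode trunk
s3(config-if)#switchport trunk native vlan 4094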
If you haven’t already created VLAN 4094, you’ll need to do that as well:
s3(config)#int vlan 4094
s3(config-if)#no ip address
The "switchport trunk native vlan 4094″ command is what fixes the problem I had last time I worked with ESX Server and VLAN trunks;
namely, that most switches don’t tag traffic from the native VLAN
across a VLAN trunk. By setting the native VLAN for the trunk to
something other than VLAN 1 (the default native VLAN), we essentially
force the switch to tag all traffic across the trunk. This allows ESX
Server to handle VMs that are assigned to the native VLAN as well as
other VLANs.
On the ESX Server side, we just need to edit the vSwitch and create a
new port group. In the port group, specify the VLAN ID that matches the
VLAN ID from the physical switch. After the new port group has been
assigned, you can place your VMs on that new port group (VLAN)
and—assuming you have a router somewhere to route between the VLANs—you
should have full connectivity to your newly segregated virtual machines.
Final Notes
I did encounter a couple of weird things during the setup of this
configuration (I plan to leave the configuration in place for a while to
uncover any other problems).
First, during troubleshooting, I deleted a port group on one vSwitch
and then re-created it on another vSwitch. However, the virtual machine
didn’t recognize the connection. There was no indication inside the VM
that the connection wasn’t live; it just didn’t work. It wasn’t until I
edited the VM, set the virtual NIC to a different port group, and then
set it back again that it started working as expected. Lesson learned:
don’t delete port groups.
Second, after creating a port group on a vSwitch with no VLAN ID,
one of the other port groups on the same vSwitch appeared to "lose” its
VLAN ID, at least as far as VirtualCenter was concerned. In other words,
the VLAN ID was listed as "*” in VirtualCenter, even though a VLAN ID
was indeed configured for that port group. The "esxcfg-vswitch -l”
command (that’s a lowercase L) on the host still showed the assigned
VLAN ID for that port group, however.
It was also the "esxcfg-vswitch” command that helped me troubleshoot
the problem with the deleted/recreated port group described above. Even
after recreating the port group, esxcfg-vswitch still showed 0 used
ports for that port group on that vswitch, which told me that the
virtual machine’s network connection was still somehow askew.
Hopefully this information will prove useful to those of you out
there trying to set up NIC teaming and/or VLAN trunking in your
environment. I would recommend taking this one step at a time, not all
at once like I did; this will make it easier to troubleshoot problems as
you progress through the configuration.