Cisco
jidckii, 2015-11-20 14:38:37

EtherChannel between switches does not increase speed?

Hello.
I just can't figure out how to aggregate links properly in order to increase throughput.
Here is the scheme:
FreeNAS iSCSI target <=(Po1) 5×1G LACP => sw02 (C2960G) <=(Po2) 5×1G LACP (Po5)=> sw01 (C2960S) <= 4×1G LACP (Po1) => Linux iSCSI initiator
sw02 config:

SRV-GN02#sh etherchannel summary 
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port


Number of channel-groups in use: 2
Number of aggregators:           2

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)         LACP      Gi0/35(P)   Gi0/36(P)   Gi0/37(P)   
                                 Gi0/38(P)   Gi0/39(P)   
2      Po2(SU)         LACP      Gi0/40(P)   Gi0/41(P)   Gi0/42(D)   
                                 Gi0/43(P)   Gi0/44(P)   

SRV-GN02#sh etherchannel load-balance 
EtherChannel Load-Balancing Configuration:
        src-dst-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source XOR Destination IP address

sw01 config:
SRV-GN01#sh etherchannel summary 
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port


Number of channel-groups in use: 5
Number of aggregators:           5

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)         LACP      Gi1/0/9(P)  Gi1/0/10(P) Gi1/0/11(P) 
                                 Gi1/0/12(P) 
5      Po5(SU)         LACP      Gi1/0/44(P) Gi1/0/45(P) Gi1/0/46(D) 
                                 Gi1/0/47(P) Gi1/0/48(P) 

SRV-GN01#sh etherchannel load-balance 
EtherChannel Load-Balancing Configuration:
        src-dst-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source XOR Destination IP address

The actual problem: from FreeNAS I start pushing 8 UDP iperf streams at 500 Mbit/s each:
$ iperf -c 172.20.0.36 -u -b 500m -i 1 -t 100
Meanwhile, on the switches I see the following picture.
On sw02:
Gi0/35    freenas       connected    3          a-full a-1000 10/100/1000BaseTX
Gi0/36    freenas       connected    3          a-full a-1000 10/100/1000BaseTX
Gi0/37    freenas       connected    3          a-full a-1000 10/100/1000BaseTX
Gi0/38    freenas       connected    3          a-full a-1000 10/100/1000BaseTX
Gi0/39    freenas       connected    3          a-full a-1000 10/100/1000BaseTX
Gi0/40    SRV-GN01           connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi0/41    SRV-GN01           connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi0/42    SRV-GN01           notconnect   1            auto   auto 10/100/1000BaseTX
Gi0/43    SRV-GN01           connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi0/44    SRV-GN01           connected    trunk      a-full a-1000 10/100/1000BaseTX

Port        RX, %   TX, %
Gi0/35          0       0
Gi0/36         54       0
Gi0/37        100       0
Gi0/38        100       0
Gi0/39         54       0

Gi0/40          0       0
Gi0/41          0       0
Gi0/42          0       0
Gi0/43          0     100
Gi0/44          0       0

On sw01:
Gi1/0/9   IBM-server_bond    connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi1/0/10  IBM-server_bond    connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi1/0/11  IBM-server_bond    connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi1/0/12  IBM-server_bond    connected    trunk      a-full a-1000 10/100/1000BaseTX

Gi1/0/44  SRV-GN02           connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi1/0/45  SRV-GN02           connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi1/0/46  SRV-GN02           notconnect   1            auto   auto 10/100/1000BaseTX
Gi1/0/47  SRV-GN02           connected    trunk      a-full a-1000 10/100/1000BaseTX
Gi1/0/48  SRV-GN02           connected    trunk      a-full a-1000 10/100/1000BaseTX


Port        RX, %   TX, %
Gi1/0/9         0       0
Gi1/0/10        0       0
Gi1/0/11        0     100
Gi1/0/12        0       0

Gi1/0/44        0       0
Gi1/0/45        0       0
Gi1/0/46        0       0
Gi1/0/47      100       0
Gi1/0/48        0       0

That is, the load is not being spread across the links between the switches either.
I have tried different load-balance modes: with src-ip or src-mac the switches start behaving very strangely, pushing traffic out of all active ports, and throughput flatlines.
I assume the problem is that all the traffic goes from one IP to one IP, so the hash has nothing to distribute: with a single address pair the src-dst-ip hash is a constant, and the same member link is chosen every time. And the 2960 offers no L4 hash methods at all, as shown below.
How should this be configured so that I can push 4 Gbit/s from one server to the other in both directions, given that they are in the same subnet and on different switches?
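For reference, the full list of hash methods on the 2960 contains nothing above L3; the configuration help shows roughly this (reproduced from a 2960, double-check on your IOS version):

SRV-GN02(config)#port-channel load-balance ?
  dst-ip       Dst IP Addr
  dst-mac      Dst Mac Addr
  src-dst-ip   Src XOR Dst IP Addr
  src-dst-mac  Src XOR Dst Mac Addr
  src-ip       Src IP Addr
  src-mac      Src Mac Addr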

5 answers
Mystray, 2015-11-20
@Mystray

Between two hosts: almost unrealistic on inexpensive hardware. If the switch can hash on L3+L4, then with several flows (on different ports) you may manage to use more than one link, but even that is only a chance that the flows will hash onto different links.
If the switches cannot balance on L4, then between two hosts you will never get more than the speed of one link, simply because link selection is determined solely by the IP addresses.
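For example, each parallel iperf stream gets its own UDP source port, so on an L4-capable switch a variation of your test (note the -P flag) would at least have a chance of occupying several links:

$ iperf -c 172.20.0.36 -u -b 500m -P 8 -i 1 -t 100

On the 2960, which hashes on addresses only, all eight streams produce the same hash and land on the same link.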

Cool Admin, 2015-11-20
@ifaustrue

The point is that EtherChannel can only balance on IP, MAC, or an IP+MAC combination; in your case those never change, so the technology cannot work for you.
You can try binding additional addresses to the iSCSI hosts and initiating several sessions from different IPs, as sketched below.
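A rough sketch with open-iscsi on the initiator side (the interface names, portal address, and IQN are placeholders, substitute your own):

# bind two iSCSI interfaces to two different NICs (each with its own IP)
iscsiadm -m iface -I iface0 -o new
iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 -o new
iscsiadm -m iface -I iface1 -o update -n iface.net_ifacename -v eth1

# discover the target through both interfaces, then log in (one session per interface)
iscsiadm -m discovery -t sendtargets -p <freenas-ip>:3260 -I iface0 -I iface1
iscsiadm -m node -T iqn.2015-11.example:target0 -l

Two sessions from two source IPs give the switch two different hash inputs, so they can (though are not guaranteed to) land on different links; dm-multipath in round-robin mode will then spread the I/O across both sessions.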

Valentine, 2015-11-20
@vvpoloskin

A starting point for googling: "Cisco LACP hashing algorithm". As far as I know, L4 hashing on Cisco is supported only on the Nexus line. Even on CRS chassis you will be unpleasantly surprised when one link of the aggregate overflows while you are building a fat cross-connect over MPLS. Such is life :(
This is where Juniper has the advantage: their EX switches support it out of the box.
And yes, check the balancing against MRTG graphs instead: the instantaneous statistics in the console are not informative and can be misleading.

throughtheether, 2015-11-20
@throughtheether

I assume the problem is that all the traffic goes from one IP to one IP, so the hash has nothing to distribute.
I agree with that supposition.
EtherChannel Load-Balancing Configuration:
src-dst-ip

And if I place them on the same switch, will aggregation work?
In that case, if I were you, I would try disabling aggregation on the switch and experimenting with the bonding settings on the server (balance-alb mode on Linux), as sketched below. But be prepared for iSCSI performance to drop because of out-of-order IP packets.
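A minimal balance-alb sketch with iproute2 (interface and bond names are examples; the switch ports stay plain ports with no EtherChannel, since balance-alb needs no switch-side aggregation):

# create the bond in adaptive-load-balancing mode
ip link add bond0 type bond mode balance-alb miimon 100
# slaves must be down before they can be enslaved
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
# then re-assign the server's IP address to bond0

Note that balance-alb balances incoming traffic by answering ARP requests with different slave MACs per peer, so the gain shows up mainly when there are several peers.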

mikes, 2015-11-20
@mikes

Bonding does not make the "pipe" fatter, it makes it wider: if only two hosts exchange traffic, you will not squeeze out more than the physical speed of one port unless you play with traffic-distribution algorithms. Move to 10G NICs, or to Fibre Channel :)
