openvswitch and cisco 2960 bonding with LACP in openstack Havana wrong behavior

asked 2014-08-13 01:51:35 -0600 by kejunyang

updated 2014-08-13 09:36:54 -0600 by SamYaple


We set up several nodes with OpenStack Havana, with ports eth1/eth2 bonded by Open vSwitch. However, we observed the following wrong behavior:

1. Some VMs are not able to receive an IP address from DHCP when booting.

2. On some hosts, one port shows as active while the other shows as disabled, even though both are in the same bond.

Has anyone seen the same problem, and how was it solved? Many thanks.


Open vSwitch bonding configuration:

[root@Compute03 ~]# ovs-vsctl add-bond br-phy bond0 eth1 eth2

[root@Compute03 ~]# ovs-vsctl set port bond0 bond_mode=balance-tcp lacp=active

Cisco 2960 configuration:

interface range GigabitEthernet0/29 - 30

switchport mode trunk

channel-group 4 mode active

interface Port-channel4

switchport mode trunk

[root@Compute03 ~]# ovs-appctl bond/show bond0

---- bond0 ----

bond_mode: balance-tcp

bond-hash-basis: 0

updelay: 0 ms

downdelay: 0 ms

next rebalance: 2717 ms

lacp_status: negotiated

slave eth1: disabled

        may_enable: false

slave eth2: enabled

        active slave

        may_enable: true

        hash 1: 0 kB load

        hash 4: 0 kB load

[root@Compute03 ~]# ovs-appctl lacp/show

---- bond0 ----

        status: active negotiated

        sys_id: 6c:92:bf:07:37:cc

        sys_priority: 65534

        aggregation key: 3

        lacp_time: slow

slave: eth1: defaulted detached

        port_id: 2

        port_priority: 65535

        may_enable: false

        actor sys_id: 6c:92:bf:07:37:cc

        actor sys_priority: 65534

        actor port_id: 2

        actor port_priority: 65535

        actor key: 3

        actor state: activity aggregation defaulted

        partner sys_id: 00:00:00:00:00:00

        partner sys_priority: 0

        partner port_id: 0

        partner port_priority: 0

        partner key: 0

        partner state:

slave: eth2: current attached
        port_id: 3
        port_priority: 65535

        may_enable: true

        actor sys_id: 6c:92:bf:07:37:cc

        actor sys_priority: 65534

        actor port_id: 3

        actor port_priority: 65535
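In the lacp/show output above, eth1 is "defaulted detached" and all of its partner fields are zero, which means OVS has not received any LACP PDUs from the switch on that port. It can help to verify the switch side as well; these are standard IOS commands (port and channel numbers taken from the configuration above):

```
Switch# show etherchannel 4 summary
Switch# show lacp neighbor
Switch# show interfaces GigabitEthernet0/29 status
```

If the switch reports the port as suspended or standalone, LACP negotiation is failing end-to-end on that link rather than only inside OVS.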

[root@Compute03 ~]# cat /proc/net/bonding/bond0 

Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation

Transmit Hash Policy: layer2 (0)

MII Status: up

MII Polling Interval (ms): 100

Up Delay (ms): 0

Down Delay (ms): 0

802.3ad info

LACP rate: slow

Aggregator selection policy (ad_select): stable

Active Aggregator Info:

        Aggregator ID: 1

        Number of ports: 1

        Actor Key: 17

        Partner Key: 4

        Partner Mac Address: 0c:85:25:e0:18:00

Slave Interface: eth1

MII Status: up

Speed: 1000 Mbps

Duplex: full

Link Failure Count: 0

Permanent HW addr: 6c:92:bf:07:37:cc

Aggregator ID: 1

Slave queue ID: 0
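Note that /proc/net/bonding/bond0 only exists when the kernel bonding driver has its own bond0 device. If the kernel driver and Open vSwitch are both trying to run LACP on eth1/eth2 at the same time, negotiation will fail on one of them, which matches the symptoms above. A quick check (diagnostic sketch; paths are the standard sysfs ones):

```
# Is the kernel bonding module loaded, and does it claim the interfaces?
lsmod | grep bonding
cat /sys/class/net/bond0/bonding/slaves
```

If eth1/eth2 appear as kernel bond slaves, remove the kernel bond (or the OVS bond) so that only one of the two owns the physical ports.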


Moved from the Chinese to the English section. If you would like to ask this in Chinese, please repost the question there. You should get a better response here.

SamYaple ( 2014-08-13 09:30:21 -0600 )

1 answer


answered 2014-08-13 09:41:22 -0600 by SamYaple

I would strongly recommend against OVS bonding; we have never had a good experience with it. I would recommend using the Linux bonding module instead, as it is known to be stable.
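Switching to the Linux bonding module typically means creating bond0 with the kernel driver and then adding that single bond0 port to the OVS bridge. A minimal sketch for a RHEL/CentOS-style system, assuming the bridge is still named br-phy (file paths and option values here are illustrative, not taken from the question):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad lacp_rate=slow miimon=100 xmit_hash_policy=layer3+4"

# /etc/sysconfig/network-scripts/ifcfg-eth1  (and the same for eth2)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```

Then attach the kernel bond to the bridge as a plain port:

```
ovs-vsctl add-port br-phy bond0
```

With this layout the kernel driver handles LACP with the switch, and OVS sees only a single logical interface.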




Asked: 2014-08-13 01:43:42 -0600

Seen: 3,415 times

Last updated: Aug 13 '14