neutron - not able to trunk VLAN tags into virtual machines [closed]

asked 2014-01-08 03:19:03 -0500

thota

updated 2014-01-08 03:29:47 -0500

I'm trying to trunk VLAN traffic into a virtual machine, but have failed to do so. Here is my requirement: an OpenStack setup with a controller and a compute node. The compute node launches eight virtual machines using nova, and each virtual machine runs a telecom application. This application handles different types of traffic (for example, management and data traffic), which are differentiated by VLAN IDs. The compute node has a 10Gbps NIC (named eth11 on my machine), and this interface receives the tagged traffic from the external switch.

The OS on the compute node is RHEL 6.4, so I followed the instructions in the installation document. I installed neutron on the controller and compute nodes. I did not set up a dedicated network node, as the virtual machines do not need to talk to the external world (connect to the internet).

Here is my configuration:

Controller Node:

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

tenant_network_type = vlan
network_vlan_ranges = physnet1:100:300

Compute Node:

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

tenant_network_type = vlan
network_vlan_ranges = physnet1:100:300
bridge_mappings = physnet1:br-eth11
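For bridge_mappings to take effect, br-eth11 must already exist on the compute node with eth11 attached to it. The question doesn't show this step; a typical setup would be (assumed commands, not taken from the post):

# Create the physical bridge and attach the 10G NIC (assumed setup)
ovs-vsctl add-br br-eth11
ovs-vsctl add-port br-eth11 eth11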

Created a network

[root@controller ~]# neutron net-list
+--------------------------------------+--------+------------------------------------------------------+
| id                                   | name   | subnets                                              |
+--------------------------------------+--------+------------------------------------------------------+
| 46d60b62-202e-460c-949d-b710db032ce0 | fabnet | de5a6015-2a10-4b3a-ae78-290ff621ffaa 192.168.12.0/24 |
+--------------------------------------+--------+------------------------------------------------------+

[root@controller ~]# neutron net-show 46d60b62-202e-460c-949d-b710db032ce0
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 46d60b62-202e-460c-949d-b710db032ce0 |
| name                      | fabnet                               |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 100                                  |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | de5a6015-2a10-4b3a-ae78-290ff621ffaa |
| tenant_id                 | 4b41b2c1ee3e42fcb928fadcf967bb1a     |
+---------------------------+--------------------------------------+
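The command used to create this network isn't shown; given the provider attributes in the net-show output above, it would have looked something like this (a reconstruction, not the original commands):

# Provider network pinned to VLAN 100 on physnet1 (reconstructed from net-show)
neutron net-create fabnet --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 100
neutron subnet-create fabnet 192.168.12.0/24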

Launched a VM:

[root@controller ~]# nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks            |
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| 8368dd7d-27b1-4721-af90-fb7d3c1eb13f | rhel1 | ACTIVE | None       | Running     | fabnet=192.168.12.3 |
+--------------------------------------+-------+--------+------------+-------------+---------------------+

"ovs-vsctl show" ouput on compute node

[root@compute1 ~]# ovs-vsctl show
0bfed3e4-5f13-4726-a6a5-073fd47f1500
    Bridge br-int
        Port "int-br-eth11"
            Interface "int-br-eth11"
        Port "tap56765470-5a"
            Interface "tap56765470-5a"
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth11"
        Port "phy-br-eth11"
            Interface "phy-br-eth11"
        Port "br-eth11"
            Interface "br-eth11"
                type: internal
        Port "eth11"
            Interface "eth11"
    ovs_version: "1.11.0"

I assigned an IP address to the br-eth11 interface and could ping the virtual machine. Up to this point everything worked well.
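The exact command isn't shown; assigning an address from the tenant subnet to the bridge's internal port would look like this (the address 192.168.12.1 is an assumption):

# Give the br-eth11 internal port an address on the tenant subnet (assumed address)
ifconfig br-eth11 192.168.12.1 up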

Then, I added a VLAN subinterface to eth11 using vconfig:

vconfig add eth11 100
ifconfig eth11.100 192.168.100.24 up

Similarly, I added a VLAN subinterface to eth0 on the virtual machine:

vconfig add eth0 100
ifconfig eth0.100 192.168.100.2 up
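On systems where vconfig is unavailable or deprecated, the equivalent iproute2 commands would be (an alternative sketch, not what was used in the post):

# iproute2 equivalent of the vconfig/ifconfig pair above
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.168.100.2/24 dev eth0.100
ip link set eth0.100 up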

Now, when I try to ping from my compute node to 192.168.100.2 (eth0.100 on the VM), I see "Destination Host Unreachable".

In the process I cleared the tag on port tap56765470-5a (OpenStack had set it to tag=2) and also deleted the VLAN-related flows created by OpenStack (see the sketch after the flow dumps below). These were the flows created by OpenStack:

[root@compute1 ~]# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6496.181s, table=0, n_packets=88, n_bytes=3936, idle_age=1, priority=3,in_port=21,dl_vlan=100 actions=mod_vlan_vid:2,NORMAL
 cookie=0x0, duration=8298.765s, table=0, n_packets=177, n_bytes=19646, idle_age=176, priority=0 actions=NORMAL

[root@compute1 ~]# ovs-ofctl dump-flows br-eth11
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6628.074s, table=0, n_packets=41, n_bytes=7910, idle_age=5751, priority=4,in_port=2 ... (more)
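The commands used for the clearing described above aren't shown; with OVS 1.11 they would look something like this (assumed commands based on the description):

# Clear the VLAN tag OpenStack set on the VM's tap port (assumed command)
ovs-vsctl clear port tap56765470-5a tag
# Delete the VLAN translation flow on br-int (assumed command)
ovs-ofctl del-flows br-int "in_port=21,dl_vlan=100"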


Closed for the following reason: the question is answered, right answer was accepted by thota
close date 2014-02-07 06:28:49.385341

1 answer


answered 2014-01-10 00:13:36 -0500

thota

I was able to fix the problem; it looks like the issue is with the kernel. When I enabled VLAN splinters, everything worked well:

ovs-vsctl set interface eth11 other-config:enable-vlan-splinters=true
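VLAN splinters are an OVS workaround for kernels whose drivers mishandle VLAN-tagged traffic, which fits the older RHEL 6.4 kernel here. To confirm the setting took effect, you can read it back (a verification step, not part of the original answer):

# Read back the splinters setting; should print "true"
ovs-vsctl get interface eth11 other-config:enable-vlan-splinters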

Comments

Hi Thota,

I am trying to achieve the same on my 3-node (controller, neutron and compute) Icehouse setup without much success. Can you please share your neutron configurations so that I can have a look at them?

Thanks RV

rvarghese ( 2014-10-15 02:07:52 -0500 )
