Build devstack and ODL - VLAN setup problem

asked 2015-04-26 12:50:50 -0500 by DelightD, updated 2015-04-26 13:07:48 -0500

Hi, I have a question about VLANs between OpenStack and OpenDaylight.

I use DevStack to deploy my OpenStack environment (three physical machines: one OpenStack controller node, one compute node, one OpenDaylight node).

One question I want to ask: if I use the OpenDaylight OVSDB plugin to control my OVS, do I still need to enable "q-agt" and add "Q_AGENT=openvswitch" in local.conf? This has confused me for a while.
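For example, on my compute node I am not sure which of these two variants is right when ODL manages OVS (these are just the alternatives I am guessing at, not something I know to be correct):

# alternative 1: no Neutron OVS agent, ODL programs the switch
disable_service q-agt

# alternative 2: also run the Neutron OVS agent
enable_service q-agt
Q_AGENT=openvswitch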

This is my ovs-vsctl show output after my DevStack stack.sh completed:

openstack@openstack:~/devstack$ sudo ovs-vsctl show
e60acaaf-63e8-4eeb-abf6-f5c4b4eb19d9
    Manager "tcp:192.168.5.23:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:192.168.5.23:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
        Port "tap9a4f626f-60"
            Interface "tap9a4f626f-60"
                type: internal
    Bridge br-ex
        Controller "tcp:192.168.5.23:6633"
            is_connected: true
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.0.2"

I notice that my OVS didn't create the bridge "br-eth2" (my eth2 is used for VLAN traffic and has no IP assigned to it).

Is this correct?
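For reference, I think I could create the bridge by hand with something like the commands below (assuming eth2 is really the interface that should be attached), but I expected stack.sh to do this for me:

sudo ovs-vsctl --may-exist add-br br-eth2
sudo ovs-vsctl --may-exist add-port br-eth2 eth2
sudo ovs-vsctl show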

Here is my controller local.conf:

[[local|localrc]]
LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=/opt/stack/logs
LOG_COLOR=False
RECLONE=no
OFFLINE=Yes
VERBOSE=True
SYSLOG=True

disable_service n-cpu
disable_service n-net

enable_service n-novnc
enable_service horizon
enable_service q-svc
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service rabbit
enable_service odl-compute odl-neutron

HOST_NAME=openstack
HOST_IP=192.168.5.203
HOST_IP_IFACE=eth0
VLAN_INTERFACE=eth2

SERVICE_HOST_NAME=${HOST_NAME}
SERVICE_HOST=192.168.5.203


PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-eth2
MULTI_HOST=True
OVS_BRIDGE_MAPPINGS=physnet1:br-eth2
ODL_PROVIDER_MAPPINGS=physnet1:eth2


Q_HOST=$SERVICE_HOST
Q_PLUGIN=ml2
ODL_MGR_IP=192.168.5.23
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True


MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=teststack
RABBIT_PASSWORD=teststack
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://192.168.5.23:8080/controller/nb/v2/neutron
username=admin
password=admin

[[post-config|/etc/nova/nova.conf]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
novncproxy_base_url=http://192.168.5.203:6080/vnc_auto.html

And this is my compute node local.conf:

[[local|localrc]]
LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=/opt/stack/logs
LOG_COLOR=False
VERBOSE=True
RECLONE=no
OFFLINE=Yes
SYSLOG=True

disable_all_services
enable_service neutron
enable_service n-cpu
enable_service odl-compute
enable_service n-novnc

HOST_NAME=compute
HOST_IP=192.168.5.83
HOST_IP_IFACE=eth0
SERVICE_HOST_NAME=openstack
SERVICE_HOST=192.168.5.203


#Q_AGENT=openvswitch
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
Q_HOST=$SERVICE_HOST
ODL_MGR_IP=192.168.5.23

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-eth1
OVS_BRIDGE_MAPPING=physnet1:eth1
MULTI_HOST=True

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=teststack
RABBIT_PASSWORD=teststack
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://192.168.5.23:8080/controller/nb/v2/neutron
username=admin
password=admin

[[post-config|/etc/nova/nova.conf]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
novncproxy_base_url=http://192.168.5.203:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address ...

1 answer

answered 2015-04-27 04:13:43 -0500 by shankar

I would recommend that you follow this link and configure both the compute and controller nodes accordingly:

 http://networkstatic.net/updated-devstack-opendaylight-vm-image-for-openstack-icehouse/
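If you do stay on VLAN mode, it may also help to compare what actually got written into OVSDB on each node. I am not sure which exact provider-mapping key your ODL version reads, so these commands are only a way to inspect the current state, not a fix:

sudo ovs-vsctl show
sudo ovs-vsctl list Open_vSwitch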

Comments

I followed this before, but my boss told me to use VLAN mode instead of tunnel mode, so I switched to VLAN mode. When I take q-agt out, something goes wrong. In tunnel mode, if I enable q-agt on the controller and compute nodes, they keep creating patch-tun ports without limit... I think it conflicts with OVSDB :(
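For example, the patch ports on br-int can be listed with something like this (just a way to see the duplicates piling up, assuming br-int is the integration bridge as in my output above):

sudo ovs-vsctl list-ports br-int | grep patch-tun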

DelightD (2015-04-27 04:38:05 -0500)
