
What is the Neutron network architecture on Havana?

asked 2015-01-07 04:42:35 -0500 by sagar.shedge92

I have installed Havana on Ubuntu 12.04 LTS. The details are as follows: three nodes in total -> controller, compute, and network.

Total NICs per node:

Controller, 2 NICs: 1) external network (172.16.15.95), which the dashboard is reached through; 2) management network (192.168.1.100), to connect to the other nodes.

Compute, 3 NICs: 1) external network (172.16.15.96); 2) management network (192.168.1.101); 3) data network (192.161.1.101).

Network, 4 NICs: 1) external network (172.16.15.97); 2) management network (192.168.1.102); 3) data network (192.161.1.103); 4) external network (172.16.15.97), a separate NIC used for traffic between the VMs and external machines.
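To make the NIC mapping concrete, this is roughly how I check which NIC carries which network; the eth0/eth1/eth2 names below are only an assumed naming, not necessarily the interfaces the nodes actually use:

    # Hypothetical interface names; substitute the real ones on each node.
    ip addr show eth0   # external network   (e.g. 172.16.15.96 on the compute node)
    ip addr show eth1   # management network (e.g. 192.168.1.101 on the compute node)
    ip addr show eth2   # data network       (e.g. 192.161.1.101 on the compute node)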

I have tried both the OVS and ML2 plugins.

The following link shows the network architecture on the compute node: http://docs.openstack.org/admin-guide-cloud/content/under_the_hood_openvswitch.html#under_the_hood_openvswitch_scenario1_compute

It shows that the virtual machine's TAP device, such as vnet0, is connected to a Linux bridge qbrXXX. One end of a veth pair, qvbXXX, is connected to that same Linux bridge, and the other end, qvoXXX, is connected to the OVS integration bridge br-int.
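For reference, below is roughly how I check which of the two layouts a compute node actually has; the commands are generic (bridge-utils, iproute2, ovs-vsctl), and tapXXX stands for whatever the actual tap device is called:

    # Linux bridges and their members: in the documented scenario the qbrXXX
    # bridge should contain both the tapXXX device and the qvbXXX veth end.
    brctl show

    # The qvb/qvo veth pair shows up as two separate interfaces.
    ip link show | grep -E 'qvb|qvo'

    # Ports attached to the OVS integration bridge: qvoXXX in the documented
    # scenario, or the tap device itself when it is plugged in directly.
    ovs-vsctl list-ports br-int

    # Ask OVS which bridge a given interface (placeholder name) belongs to.
    ovs-vsctl iface-to-br tapXXX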

But in my setup, OpenStack creates the virtual machine's TAP device, say tap126454, and it is connected directly to the OVS bridge br-int.

I have another setup running Icehouse, and it does have the Linux bridge, i.e. the same architecture shown in the link.

My networking works fine, but I want to know: is this the expected behavior on Havana, or do I have to make some changes to get that architecture?

I ask because I want to develop a tool whose implementation will change according to this architecture, and I am not sure what the end-user architecture will be.


Comments

"But in my setup, OpenStack creates a tap device, say tap126454, of the virtual machine and it is directly connected to OVS bridge br-int."

From my understanding it should not be like that. Create one VM and check the output of the command: brctl show

Ranjit ( 2015-01-07 07:44:00 -0500 )

On the host there will be four devices for each VM, like below:

    qbr3a5bb15a-d5
    qvb3a5bb15a-d5
    qvo3a5bb15a-d5
    tap3a5bb15a-d5
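As a rough illustration (not actual output from this setup), with those four devices present, brctl show would list the per-VM bridge with the tap and qvb ends as members, along the lines of:

    # brctl show (illustrative output only; the bridge id is a placeholder)
    bridge name       bridge id            STP enabled   interfaces
    qbr3a5bb15a-d5    8000.xxxxxxxxxxxx    no            qvb3a5bb15a-d5
                                                         tap3a5bb15a-d5

and ovs-vsctl list-ports br-int would show the matching qvo3a5bb15a-d5 end attached to the integration bridge.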

Ranjit ( 2015-01-07 07:45:11 -0500 )

Hi Ranjit, but on my machine there is only tapf4f24b94-b9, attached to br-int. On the compute node, brctl show gives an entry for virbr0 only. On the network node, brctl is not installed.

sagar.shedge92 ( 2015-01-07 08:28:51 -0500 )

Sagar, that must be a different issue. Paste your Neutron-related conf files, along with the output of the ovs-vsctl show command from the network and compute nodes.

Ranjit ( 2015-01-07 09:39:17 -0500 )

@Ranjit, please go through the details provided in the answer section.

sagar.shedge92 ( 2015-01-08 06:14:12 -0500 )

1 answer


answered 2015-01-08 01:22:49 -0500 by sagar.shedge92

@Ranjit,

I have not copied the commented lines. If you want anything else, please let me know. Network node:

root@network-havana:~# ovs-vsctl show

461dd836-4a99-4dad-bdff-d74ee44e9827

Bridge br-ex
    Port "qg-b85efbb4-42"
        Interface "qg-b85efbb4-42"
            type: internal
    Port br-ex
        Interface br-ex
            type: internal
    Port "eth3"
        Interface "eth3"
Bridge br-tun
    Port br-tun
        Interface br-tun
            type: internal
    Port "gre-192.161.1.101"
        Interface "gre-192.161.1.101"
            type: gre
            options: {in_key=flow, local_ip="192.161.1.102", out_key=flow, remote_ip="192.161.1.101"}
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
Bridge br-int
    Port "tapbf007169-d6"
        tag: 1
        Interface "tapbf007169-d6"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port br-int
        Interface br-int
            type: internal
    Port "qr-ff786370-00"
        Interface "qr-ff786370-00"
            type: internal
ovs_version: "1.10.2"
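As a side note on the output above: the qr-ff786370-00 and qg-b85efbb4-42 ports belong to the L3 agent's router and the tapbf007169-d6 port to the DHCP agent, and with the default namespace setup they live inside network namespaces on the network node. A rough way to look at them (the router ID below is a placeholder; the real IDs come from neutron router-list):

    # List the qrouter-/qdhcp- namespaces created by the agents.
    ip netns list

    # Inspect the interfaces inside a router namespace (placeholder ID).
    ip netns exec qrouter-<router-id> ip addr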

Neutron.conf

[DEFAULT]
# Address to bind the API server
# bind_host = 0.0.0.0

# Port to bind the API server to
# bind_port = 9696

# core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# Advanced service modules
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

# Paste configuration file
api_paste_config = /etc/neutron/api-paste.ini

auth_strategy = keystone
allow_overlapping_ips = True

rabbit_host = controller
rabbit_userid = guest
rabbit_password = XXXX

[quotas]

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = XXXX
signing_dir = $state_path/keystone-signing

[database]
connection = mysql://neutron:XXXX@controller/neutron

[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

ML2_conf.ini

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_flat]

[ml2_type_vlan]

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]

[database]

[ovs]
enable_tunneling = True
local_ip = 192.161.1.102
tunnel_type = gre

[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
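For what it's worth, the OVSHybridIptablesFirewallDriver configured above is the piece that depends on the extra Linux bridge from the documentation: security-group rules are applied as iptables rules on the per-VM qbrXXX bridge, because iptables cannot filter traffic on ports attached directly to the OVS bridge. A rough check on the compute node, using the port prefix from my setup as an example:

    # If security groups are applied through the hybrid bridge, iptables rules
    # referencing the port (tapf4f24b94-b9) should exist on the compute node.
    sudo iptables -S | grep f4f24b94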

Compute node:

$> ovs-vsctl show

f10bb1a9-60c7-43c1-b81c-da5d9f3a35ce

Bridge br-int
    Port "tapf4f24b94-b9"
        tag: 1
        Interface "tapf4f24b94-b9"
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port br-int
        Interface br-int
            type: internal
Bridge br-tun
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port br-tun
        Interface br-tun
            type: internal
    Port "gre-192.161.1.102"
        Interface "gre-192.161.1.102"
            type: gre
            options: {in_key=flow, local_ip="192.161.1.101", out_key=flow, remote_ip="192.161.1.102"}
ovs_version: "1.10.2"

$> brctl show

bridge name bridge id STP enabled interfaces

virbr0 8000.000000000000 yes
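A generic sanity check of the GRE tunnel between the data-network addresses shown above (192.161.1.101 on the compute node, 192.161.1.102 on the network node):

    # From the compute node: is the peer reachable over the data network?
    ping -c 3 192.161.1.102

    # Tunnel port and the OpenFlow rules programmed on the tunnel bridge.
    ovs-vsctl list-ports br-tun
    ovs-ofctl dump-flows br-tun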

Neutron.conf

[DEFAULT]
state_path = /var/lib/neutron
lock_path = $state_path/lock

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
api_paste_config = /etc/neutron/api-paste.ini

rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_password = XXXX

[quotas]

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = XXXX
auth_url = http://controller:35357/v2.0
auth_strategy = keystone
auth_port = 35357
auth_protocol = http

[database]
connection = mysql://neutron:XXXX@controller/neutron

[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

