sagar.shedge92's profile - activity

2015-03-16 02:52:22 -0500 received badge  Famous Question
2015-01-15 01:14:28 -0500 received badge  Enthusiast
2015-01-12 01:38:03 -0500 commented answer Problems with neutron-rootwrap

It works for me too!

2015-01-08 08:55:57 -0500 commented question What is neutron network architecture on Havana

I tried changing mechanism_drivers = openvswitch, but it didn't work.
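To be clear, the line I changed is the mechanism_drivers entry in the [ml2] section of ml2_conf.ini (the full file is in my answer below); roughly:

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch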

2015-01-08 08:02:41 -0500 received badge  Notable Question
2015-01-08 06:14:12 -0500 commented question What is neutron network architecture on Havana

@Ranjit, please go through the details provided in the answer section.

2015-01-08 01:22:49 -0500 answered a question What is neutron network architecture on Havana

@Ranjit,

I have not copied the commented lines. If you need anything else, please let me know.

Network Node :-

root@network-havana:~# ovs-vsctl show

461dd836-4a99-4dad-bdff-d74ee44e9827

Bridge br-ex
    Port "qg-b85efbb4-42"
        Interface "qg-b85efbb4-42"
            type: internal
    Port br-ex
        Interface br-ex
            type: internal
    Port "eth3"
        Interface "eth3"
Bridge br-tun
    Port br-tun
        Interface br-tun
            type: internal
    Port "gre-192.161.1.101"
        Interface "gre-192.161.1.101"
            type: gre
            options: {in_key=flow, local_ip="192.161.1.102", out_key=flow, remote_ip="192.161.1.101"}
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
Bridge br-int
    Port "tapbf007169-d6"
        tag: 1
        Interface "tapbf007169-d6"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port br-int
        Interface br-int
            type: internal
    Port "qr-ff786370-00"
        Interface "qr-ff786370-00"
            type: internal
ovs_version: "1.10.2"

Neutron.conf

[DEFAULT]
# Address to bind the API server
# bind_host = 0.0.0.0
# Port to bind the API server to
# bind_port = 9696
# core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
# Advanced service modules
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
# Paste configuration file
api_paste_config = /etc/neutron/api-paste.ini
auth_strategy = keystone
allow_overlapping_ips = True
rabbit_host = controller
rabbit_userid = guest
rabbit_password = XXXX

[quotas]

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = XXXX
signing_dir = $state_path/keystone-signing

[database]
connection = mysql://neutron:netscout@controller/neutron

[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

ML2_conf.ini

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_flat]

[ml2_type_vlan]

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]

[database]

[ovs]
enable_tunneling = True
local_ip = 192.161.1.102
tunnel_type = gre

[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Compute Node :-

$> ovs-vsctl show

f10bb1a9-60c7-43c1-b81c-da5d9f3a35ce

Bridge br-int
    Port "tapf4f24b94-b9"
        tag: 1
        Interface "tapf4f24b94-b9"
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port br-int
        Interface br-int
            type: internal
Bridge br-tun
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port br-tun
        Interface br-tun
            type: internal
    Port "gre-192.161.1.102"
        Interface "gre-192.161.1.102"
            type: gre
            options: {in_key=flow, local_ip="192.161.1.101", out_key=flow, remote_ip="192.161.1.102"}
ovs_version: "1.10.2"

$> brctl show

bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes
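Since brctl shows only virbr0 here, a quick way to cross-check where the instance tap is actually plugged (the port name is taken from the ovs-vsctl output above):

$> ovs-vsctl port-to-br tapf4f24b94-b9      # prints br-int, matching the ovs-vsctl show output above
$> brctl show | grep qbr                    # prints nothing - there are no qbrXXX hybrid bridges on this node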

Neutron.conf

[DEFAULT]
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
api_paste_config = /etc/neutron/api-paste.ini
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_password = XXXX

[quotas]

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = XXXX
auth_url = http://controller:35357/v2.0
auth_strategy = keystone
auth_port = 35357
auth_protocol = http

[database]
connection = mysql://neutron:XXXX@controller/neutron

[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

2015-01-07 12:00:26 -0500 received badge  Popular Question
2015-01-07 08:28:51 -0500 commented question What is neutron network architecture on Havana

Hi Ranjit, but on my machine only tapf4f24b94-b9 is attached to br-int. On the compute node, brctl show lists an entry for virbr0 only. On the network node, brctl is not installed.

2015-01-07 05:25:36 -0500 asked a question What is neutron network architecture on Havana

I have installed Havana on Ubuntu 12.04 LTS. Details are as follows :-

Total nodes: 3 -> controller, compute, network.

Total NICs :-

Controller: 2 NICs.
1) External network (172.16.15.95); the dashboard is reached over this.
2) Management network (192.168.1.100), to connect to the other nodes.

Compute: 3 NICs.
1) External network (172.16.15.96).
2) Management network (192.168.1.101).
3) Data network (192.161.1.101).

Network: 4 NICs.
1) External network (172.16.15.97).
2) Management network (192.168.1.102).
3) Data network (192.161.1.103).
4) External network (172.16.15.97), a separate NIC used for traffic between VMs and external machines.

I tried with both the OVS and ML2 plugins.

The following link shows the network architecture on the compute node :- http://docs.openstack.org/admin-guide-cloud/content/under_the_hood_openvswitch.html#under_the_hood_openvswitch_scenario1_compute

It shows that the virtual machine's TAP device, such as vnet0, is connected to a Linux bridge qbrXXX. One end of a veth pair, qvbXXX, is connected to that same Linux bridge, and the other end, qvoXXX, is connected to the OVS bridge br-int.

But in my setup, OpenStack creates a tap device for the virtual machine, say tap126454, and it is connected directly to the OVS bridge br-int.
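One way to confirm which wiring libvirt actually set up on the compute node (the instance name below is just an example):

$> virsh list --all
$> virsh dumpxml instance-00000001 | grep -A 4 "<interface"

As far as I understand, with the direct plugging I see, the <interface> element points at br-int (with a virtualport type='openvswitch' entry), whereas with the hybrid scheme from the documentation it would point at a qbrXXX Linux bridge instead.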

I have another setup running Icehouse, and it does have the Linux bridge, i.e. the same architecture as shown in the link.

My networking works fine, but I want to know: is that the expected behavior on Havana, or do I have to make some changes to get that architecture?

I ask because I want to develop a tool whose implementation will change according to this architecture, and I am not sure what the end user's architecture will be.
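For what it's worth, my current guess (not verified) is that the compute node's nova.conf firewall settings decide which wiring is used on Havana; a rough sketch of the two variants I mean, not my actual file:

# nova.conf on the compute node (sketch only)
[DEFAULT]
security_group_api = neutron
# direct plugging: the tap goes straight onto br-int
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# hybrid plugging: the qbrXXX / qvbXXX / qvoXXX chain as in the documentation
# firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver

If this is not actually what controls it, corrections are welcome.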