
havana neutron multi node OVS flat network w/ Noop FW help!

asked 2014-05-21 18:15:40 -0500 by cyry (updated 2014-05-21 18:18:14 -0500)

All nodes are Ubuntu 12.04, running Havana with the Open vSwitch plugin.
The controller and network node are on one machine; compute nodes 1 and 2 are on two other machines.

eth0 on all machines is for management.
br-ex and eth1 are used for VMs to bridge out to the physical network.

- I can deploy VMs, and VMs can talk to each other (only if they are on the same network and the same compute node).
- VMs cannot talk to each other if they are on different compute nodes.
- VMs cannot talk out to the external network.
- VMs cannot ping the compute nodes or the controller/network node.
- I don't care about DHCP right now, and I am not using a virtual router in Neutron. I just have a network with VMs attached (which I IP manually).
- I am using the Noop firewall driver.

Questions:
1) One thing the docs are not clear on: can I use flat networking with OVS in Neutron, without a firewall? I am using:

[ovs]
tenant_network_type = none   # also tried "local"; no luck with either
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
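Or does the network itself also need to be created as a flat provider network? Something like this, maybe (I'm guessing at the provider-extension syntax here, so treat it as a sketch):

    neutron net-create flatnet --shared \
        --provider:network_type flat \
        --provider:physical_network physnet1
    neutron subnet-create flatnet 192.168.1.0/24 --name flatsub \
        --gateway 192.168.1.1 --disable-dhcp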

2) If I try to ping out from my instance, I can tcpdump and see the ARP request on the compute node: it shows up on the tap interface, the qbr interface, and the br-ex and eth1 interfaces. I never see an ARP reply. I think the firewall is out of the way, since I am using the Noop driver and have also turned off iptables etc.
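For reference, the sort of commands involved (nothing exotic; interface and bridge names match the setup below):

    # watch for the ARP request/reply on the wire
    tcpdump -n -e -i eth1 arp
    # dump the flows the OVS agent has programmed
    ovs-ofctl dump-flows br-int
    ovs-ofctl dump-flows br-ex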

Compute node setup: /etc/network/interfaces

auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
    address 192.168.0.72
    netmask 255.255.255.0
    network 192.168.0.0
    broadcast 192.168.0.255
    gateway 192.168.0.1

auto eth1
iface eth1 inet manual
    up ifconfig eth1 promisc up
    down ifconfig eth1 promisc down

iface br-ex inet static
    address 192.168.1.72
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
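A sanity check worth running on the compute node (assuming the host-side IP really should live on br-ex, since eth1 is enslaved to it as an OVS port):

    ip addr show br-ex          # should carry 192.168.1.72
    ovs-vsctl port-to-br eth1   # should print br-ex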

ovs-vsctl show

125c8a85-c824-450f-8b09-83a3f6ac7942
    Bridge br-ex
        Port "eth1"
            Interface "eth1"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "tap53eab449-39"
            Interface "tap53eab449-39"
        Port "qvob54350a6-c1"
            Interface "qvob54350a6-c1"
        Port "tap804de977-15"
            Interface "tap804de977-15"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo804de977-15"
            Interface "qvo804de977-15"
        Port "tapf62ee70c-1c"
            Interface "tapf62ee70c-1c"
        Port "tap52b963f1-c6"
            Interface "tap52b963f1-c6"
        Port "tap96abf3fd-ae"
            Interface "tap96abf3fd-ae"
        Port "tape0d13053-38"
            Interface "tape0d13053-38"
        Port "tap383cc618-de"
            Interface "tap383cc618-de"
        Port "tap755ebeb8-0c"
            Interface "tap755ebeb8-0c"
        Port "tap624034a0-53"
            Interface "tap624034a0-53"
    ovs_version: "1.10.2"
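If I understand the OVS agent correctly (this is an assumption on my part), with bridge_mappings = physnet1:br-ex it should create a phy-br-ex port on br-ex and a matching int-br-ex port on br-int, and neither shows up above. Something like this should confirm whether the agent has wired the two bridges together:

    ovs-vsctl list-ports br-int | grep int-br-ex
    ovs-vsctl list-ports br-ex | grep phy-br-ex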

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[agent]
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
[ovs]
tenant_network_type = none
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
integration_bridge = br-int
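To rule out a mismatch between the plugin config and the network I created, I can also dump the network's provider attributes (the network name below is a placeholder):

    neutron net-show <my-flat-net> \
        -F provider:network_type \
        -F provider:physical_network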

/etc/neutron/neutron.conf

[DEFAULT]
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
notification_driver = neutron.openstack.common.notifier.rpc_notifier
[quotas]
[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = password
auth_url = http://controller:35357/v2.0
auth_strategy = keystone
rpc_backend = neutron.openstack.common ...

1 answer

answered 2014-05-22 00:12:05 -0500 by prathamesh_deshpande

Please try troubleshooting with this: http://openstack.redhat.com/Networkin...


Comments

Thanks, but I am not using GRE or VLANs; I am using flat networking with Neutron (see ovs_neutron_plugin.ini above). I know it is possible and have done it with Nova flat networking, but I want to use Neutron.

cyry ( 2014-05-22 02:05:24 -0500 )

I went ahead and installed devstack (Havana) single-node, which generated a Neutron config from localrc. It is set up for bridging and does work; it uses the ML2 plugin, however.

The question then is: can I do the same with the openvswitch plugin, i.e.

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
instead of
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin ?
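For comparison, the ML2 side of the devstack setup boils down to roughly this (approximate, reconstructed from memory rather than copied from the generated files):

    [ml2]
    type_drivers = flat
    tenant_network_types = local
    mechanism_drivers = openvswitch
    [ml2_type_flat]
    flat_networks = physnet1
    [ovs]
    bridge_mappings = physnet1:br-ex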

cyry ( 2014-05-22 14:02:04 -0500 )
