Havana Neutron flat DHCP network

asked 2014-03-23 15:22:49 -0500 by radclif, updated 2014-03-23 15:26:59 -0500

Hi, I'm trying to set up a POC for a private cloud. I currently have OpenStack running on one node (a further two nodes are set aside as compute nodes once I get OpenStack working on one). I can bring up VMs and they have connectivity between themselves, but they are not accessible from the outside world. To elaborate, the combined controller/compute/network node has the following network setup (the diagram includes two compute nodes, but they are not in use at the moment; sorry for the basic diagram):

|----------------|        eth0 192.168.100.169
|  office net    |                 |-------------|
|----------------| ----------------| gateway     | eth1 192.168.200.1
        |                          | server      |------
        |                          |-------------|     |
        | eth0 192.168.100.229                         |
|----------------|                              |-------------------|
|  controller    |------------------------------| unmanaged         |
|----------------| eth1 192.168.200./24         | switch            | 
                                                |-------------------| ------------|
                                                        |                         |
                                                        | eth1                    | eth1
vm1 192.168.200.2                     |-----------------|                     |-----------------|
dhcp 192.168.200.3                    | compute1        |                     | compute2        |
vm2 192.168.200.4                     |-----------------|                     |-----------------| 
                                        eth0 192.168.100.30                      eth0 192.168.100.31

The VMs are all on the controller node and I can ping between them; however, I cannot ping from the controller node to the VMs, and I also cannot ping the VMs from the gateway server (a basic CentOS install with IP forwarding enabled). I think I am missing something pretty basic.

My understanding is that the networking should behave in the following manner: br-int is the bridge the VM interfaces are created on, br-tun is used to connect between hypervisors, and br-ex is used to provide connectivity from the outside world into the VM network. I assigned eth1 to br-ex because I assumed all VMs will need to communicate over the unmanaged switch once I have additional compute nodes, and also to reach the gateway server.
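
In case it helps to see the intended wiring, this is roughly how I understand the bridges above would be set up with ovs-vsctl (just a sketch of the layout I described, not a verified recipe; br-int and br-tun are normally created by the OVS agent itself):

ovs-vsctl add-br br-int        # integration bridge the VM interfaces plug into
ovs-vsctl add-br br-tun        # tunnel bridge used between hypervisors
ovs-vsctl add-br br-ex         # external bridge for traffic to/from the outside world
ovs-vsctl add-port br-ex eth1  # attach the physical NIC facing the unmanaged switch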

I really only need one subnet for this testing, and I went with GRE networking because someone suggested it as an easy way to set up a flat provider network. I really don't mind how this is achieved, so if VLAN is a better fit, please say so.
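
For reference, I understand the tenant network and the single test subnet would be created roughly along these lines with the Havana neutron client (the names are just placeholders, and the gateway is the gateway server's eth1 address from the diagram):

neutron net-create testnet
neutron subnet-create testnet 192.168.200.0/24 --name testsubnet --gateway 192.168.200.1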

So, configuration-wise, I went with the following. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.200.254
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
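
Once the agent is running, I assume the tunnel settings can be sanity-checked with standard tooling along these lines (nothing Neutron-specific):

ip addr show eth1   # local_ip should be an address actually configured on this node
ovs-vsctl show      # br-tun should show GRE ports to peer nodes once other agents register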

To be honest I didn't know what I should put for the local_ip option, so this could be one of the problems. I guessed it just needs to be the IP endpoint this node uses to terminate the GRE tunnels, i.e. an address on the data network that is actually configured on this node.

/etc/neutron/dhcp_agent.ini:

[DEFAULT]
debug = False
resync_interval = 5
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = True
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = False
enable_metadata_network = False
dhcp_domain = openstacklocal
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
dnsmasq_lease_max = 16777216
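
If I understand namespaces correctly, with use_namespaces = True the dnsmasq port lives in a qdhcp- network namespace on the controller, which may be part of why pinging the VMs straight from the controller's root namespace fails. Something like this should work from inside the namespace (the network ID below is a placeholder):

ip netns list                                     # look for a qdhcp-<network-id> namespace
ip netns exec qdhcp-<network-id> ping 192.168.200.2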

/etc/neutron/l3_agent.ini:

[DEFAULT]
debug = False
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
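
With external_network_bridge = br-ex, my understanding is that external access would normally go through a Neutron router plugged into an external network, roughly like this (network and router names are placeholders, not anything I have configured yet):

neutron net-create ext-net --router:external=True
# an external subnet on ext-net would also be needed here; how that should be carved
# out of 192.168.200.0/24 alongside the tenant subnet is part of what I'm unsure about
neutron router-create router1
neutron router-gateway-set router1 ext-net
neutron router-interface-add router1 testsubnet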

/etc/neutron/neutron.conf:

[DEFAULT]
debug = False
verbose = False
state_path = /var/lib/neutron
lock_path = $state_path/lock
log_dir = /var/log/neutron
bind_host = 192.168.100.229
bind_port = 9696
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
api_paste_config ...
(rest of neutron.conf omitted)

1 answer

answered 2014-09-11 12:37:20 -0500 by naggappan

You need to set up NAT using iptables, as below:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

This will allow your VMs to communicate with your local network, so that from a VM you can ping your controller node or any other machine on your network.
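
On your gateway server that would look something like the following (assuming eth0 faces the office network and eth1 faces 192.168.200.0/24 as in your diagram; adjust interface names to match):

sysctl -w net.ipv4.ip_forward=1                  # make sure the gateway actually forwards packets
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT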

To access your VMs from the controller node or any other machine on the same network, open Horizon, go to Access & Security in your project, click Manage Rules, and add rules allowing TCP port 22 (for SSH) and ICMP for your network.
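
If you prefer the command line over Horizon, the equivalent rules can be added with the nova client, for example (this assumes your VMs use the default security group):

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0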
