Bridge configured on network node, but IPs are not allocated to VMs

asked 2014-02-20 09:59:49 -0500 by Server Support

updated 2014-02-20 23:03:10 -0500

Hi,

I'm trying to set up an OpenStack multi-node installation for a private cloud on the CentOS platform. I'm struggling with setting up Neutron. I've completed the installation of the services on the network, compute, and controller nodes by following the OpenStack Havana documentation.

There are 5 servers, with each service installed on a separate server:
1st server: Controller, glance, keystone, dashboard
2nd server: Nova compute
3rd server: Block storage (Cinder)
4th server: Object storage (Swift)
5th server: Network (neutron)

The network server has 3 NICs, and we have configured the networking on this server. We have configured a bridge br-eth2 on the interface eth2, using the eth2 IP as the external address. We can now see the eth2 IP on the bridge br-eth2. Here is the current ifconfig output on the network node:

br-eth2     Link encap:Ethernet  HWaddr 00:XX:XX:XX:XX:XX
            inet addr:31.x.y.z  Bcast:31.x.y.31  Mask:255.255.255.224
            inet6 addr: fe80::xxxx:xxx:xxx:xxxx/64 Scope:Link
            ...

br-int      Link encap:Ethernet  HWaddr 86:xx:xx:xx:xx:xx
            inet6 addr: fe80::xxxx:xxxx:xxxx:xxxx/64 Scope:Link
            ...

br-tun      Link encap:Ethernet  HWaddr 76:xx:xx:xx:xx:xx
            inet6 addr: fe80::xxxx:xxxx:xxxx:xxxx/64 Scope:Link
            ...

eth0        Link encap:Ethernet  HWaddr 00:xx:xx:xx:xx:xx
            inet addr:31.x.y.z1  Bcast:31.x.y.31  Mask:255.255.255.224
            inet6 addr: fe80::xxx:xxx:xxx:xxx/64 Scope:Link
            ...

eth1        Link encap:Ethernet  HWaddr 00:xx:xx:xx:xx:xx
            inet addr:10.x.y.z  Bcast:10.x.y.255  Mask:255.255.255.0
            ...

eth2        Link encap:Ethernet  HWaddr 00:xx:xx:xx:xx:xx
            inet6 addr: fe80::xxx:xxx:xxx:xxxx/64 Scope:Link

int-br-eth2 Link encap:Ethernet  HWaddr 16:xx:xx:xx:xx:xx
            inet6 addr: fe80::xxxx:xxx:xxxx:xxxx/64 Scope:Link

My doubt is this: according to the docs, when you port an interface into a bridge, the interface's IP will no longer respond to pings. Is that true? The first time I did this, I faced that issue too.
Then I somehow managed to move the IP to the bridge so that it pings and is displayed on the bridge device. Is this the correct way to do it?
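For context, the bridging described above can be sketched roughly like this. This is a minimal sketch only: the interface name eth2 comes from the post, and the address 31.x.y.z/27 is the masked placeholder from the ifconfig output above, not a real value:

```shell
# Create the external OVS bridge and enslave eth2 to it.
ovs-vsctl add-br br-eth2
ovs-vsctl add-port br-eth2 eth2

# Once eth2 belongs to the bridge, its IP must live on the bridge,
# not on eth2 -- otherwise the address will stop answering pings.
ip addr flush dev eth2
ip addr add 31.x.y.z/27 dev br-eth2
ip link set br-eth2 up
```

This matches the behavior described: the physical interface keeps only its link-local address, while the IPv4 address appears on br-eth2.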

Please advise me if I'm going wrong somewhere! :(

After this I installed the Open vSwitch plugin and enabled GRE tunneling as the network type. Then I followed the steps for creating the base Neutron networks: I configured an external network and an internal network, and connected the two networks with a router.
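The network-creation steps just described would, on Havana's neutron CLI, look roughly like the following. The names (ext-net, demo-net, demo-router) and all CIDRs/address ranges are hypothetical placeholders, not values from the post:

```shell
# External network plus a subnet that provides the floating IP pool.
neutron net-create ext-net --router:external=True
neutron subnet-create ext-net 31.x.y.0/27 --disable-dhcp \
    --allocation-pool start=31.x.y.10,end=31.x.y.20

# Internal (tenant) network and subnet for the instances.
neutron net-create demo-net
neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet

# Router connecting the internal subnet to the external network.
neutron router-create demo-router
neutron router-gateway-set demo-router ext-net
neutron router-interface-add demo-router demo-subnet
```

With this in place, instances on demo-net should get addresses from demo-subnet via DHCP, and floating IPs are drawn from the ext-net allocation pool.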

Is that all that needs to be done? Is it now ready to launch an instance?

I tried to launch an instance, but unfortunately it does not get an IP and the launch fails. :( In the subnet we created, we can see floating IPs created, but their status is DOWN.

I hope you have got an idea of the issue I'm facing. Please kindly go through it and advise me whether I'm missing ...