
Cannot SSH into instance on second compute node

asked 2014-05-17 07:20:53 -0600 by hoott

I started by creating a one-node cloud using packstack. After the run completed and I adjusted the setup, I was able to launch a Fedora instance and log into it from anywhere on my public network after assigning x.x.x.252 as the floating IP. I then modified the packstack answer file per the RDO instructions and re-ran it to add a second compute node. I was then able to launch an instance on the second compute node, but I have been unsuccessful in SSHing into that instance after moving the x.x.x.252 address over to it using the Dashboard. The Dashboard reported that the floating IP had been successfully associated with the instance on the second compute node. I have since brought up an external bridge (br-ex) on the second compute node to the public network, but that didn't change anything. So far I have not found any relevant error messages.
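For a symptom like this, a few standard checks are worth running first. This is a sketch, not taken from the setup above: the security-group rules, the `qrouter-<UUID>` namespace name, and the fixed IP are placeholders.

```shell
# 1. The security group applied to the instance must allow SSH and ICMP
#    (Havana/Icehouse nova CLI; run with admin/tenant credentials):
nova secgroup-list-rules default
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

# 2. On the network node, try reaching the instance's fixed IP from
#    inside the router's namespace (qrouter-<UUID> is a placeholder):
ip netns list
ip netns exec qrouter-<UUID> ping -c 3 <fixed-ip>
```

If the namespace ping works but the floating IP does not, the problem is usually in the external bridge or NAT rules rather than in the instance itself.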


Comments

After modifying the packstack answer file for the compute-node setup, what is the value of
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE ?

dbaxps ( 2014-05-17 08:17:03 -0600 )

What were the values of:
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

dbaxps ( 2014-05-17 08:20:44 -0600 )

2 answers


answered 2014-05-17 14:03:10 -0600 by dbaxps

updated 2014-05-20 05:14:54 -0600

Your values cannot support "vlan":

NEUTRON_OVS_TENANT_NETWORK_TYPE=local  <= must not be "local"; it should be "vlan" on Havana
CONFIG_NEUTRON_OVS_VLAN_RANGES= 
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS= 
CONFIG_NEUTRON_OVS_BRIDGE_IFACES= 

I have not yet succeeded with a Two Node Controller+Compute IceHouse Neutron OVS&VLAN setup,
but a similar setup on Havana required:

# Type of network to allocate for tenant networks
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan

# A comma separated list of VLAN ranges for the Neutron openvswitch
# plugin
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:10:20

# A comma separated list of bridge mappings for the Neutron
# openvswitch plugin
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1

# A comma separated list of colon-separated OVS bridge:interface
# pairs. The interface will be added to the associated bridge.
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
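For reference, packstack builds the provider bridge from CONFIG_NEUTRON_OVS_BRIDGE_IFACES automatically; the equivalent manual OVS commands are sketched below, assuming the br-eth1/eth1 names from the answer file above.

```shell
# What CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1 amounts to (run as root):
ovs-vsctl add-br br-eth1          # create the provider bridge
ovs-vsctl add-port br-eth1 eth1   # attach the physical NIC to it
ovs-vsctl show                    # verify the bridge/port pairing
```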

I finally succeeded with the Two Node Controller+Compute IceHouse Neutron OVS&VLAN setup.
Correcting neutron.conf fixed the problem. A Cirros instance reachable via a floating IP is running on the Compute
Node; I can SSH into the instance and reach the Internet.

View these notes as a proof of concept.

You should prepare two clean CentOS 6.5 boxes, each with at least two Ethernet NICs; if you need the management and public networks separated, the Controller+Neutron Server host should have three NICs. Switch both CentOS 6.5 hosts from NetworkManager to the "network" service. See Link 1 for the general design. Then, on the host intended to be the controller, run packstack --answer-file=./TwoNode-answer.txt (replacing my IPs with yours) and build both the Controller and the Compute node in a single packstack run (as root). Then apply the fix for the Neutron Server.
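The NetworkManager-to-network switch mentioned above can be done as follows on CentOS 6.x (run as root on both hosts):

```shell
# Disable NetworkManager and enable the legacy "network" service:
chkconfig NetworkManager off
chkconfig network on
service NetworkManager stop
service network start
```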

Already fixed in openstack-neutron-2014.1-16

Comments

I'm running Icehouse, does that change any of your answers?

hoott ( 2014-05-17 14:39:37 -0600 )

No, because NEUTRON_OVS_TENANT_NETWORK_TYPE=local cannot work for vlan.

dbaxps ( 2014-05-17 14:47:41 -0600 )

Below is a fragment from /var/log/neutron/server.log:

2014-05-17 22:07:52.083 5891 INFO neutron.common.config [-] Logging enabled!
2014-05-17 22:07:52.092 5891 INFO neutron.common.config [-] Config paste file: /usr/share/neutron/api-paste.ini
2014-05-17 22:07:52.180 5891 INFO neutron.manager [-] Loading core plugin: neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
2014-05-17 22:07:52.338 5891 WARNING neutron.openstack.common.db.sqlalchemy.session [-] This application has not enabled MySQL traditional mode, which means silent data corruption may occur. Please encourage the application developers to enable this mode.
2014-05-17 22:07:52.431 5891 INFO neutron.plugins.openvswitch.ovs_neutron_plugin [-] Network VLAN ranges: {'physnet1': [(10, 20)]}
2014-05-17 22:07:52.554 5891 INFO neutron.openstack.common.rpc.common [-] Connected to AMQP server on 192.168.122.127:5672
2014-05-17 22:07:52.562 5891 INFO neutron.manager [-] Service L3_ROUTER_NAT is supported by the core plugin ...
(more)
dbaxps ( 2014-05-17 14:55:13 -0600 )

2014-05-17 22:07:52.793 5891 ERROR neutron.services.provider_configuration [-] Driver neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver is not unique across providers
This means either my answer file is missing certain options or it is a bug.

dbaxps ( 2014-05-17 14:59:09 -0600 )

The error above was fixed by updating neutron.conf. The line
service_plugins =neutron.services.loadbalancer.plugin.LoadBalancerPlugin
was removed.
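One way to remove such a line non-interactively is with sed. The sketch below works on a scratch copy (/tmp/neutron.conf.demo is a made-up path); on a real node you would edit /etc/neutron/neutron.conf and restart neutron-server afterwards.

```shell
# Demonstrate the edit on a scratch copy of the relevant config:
cat > /tmp/neutron.conf.demo <<'EOF'
service_plugins =neutron.services.loadbalancer.plugin.LoadBalancerPlugin
verbose = True
EOF

# Delete any line that starts with "service_plugins":
sed -i '/^service_plugins/d' /tmp/neutron.conf.demo

# Confirm the line is gone; prints "line removed" when grep finds nothing:
grep service_plugins /tmp/neutron.conf.demo || echo "line removed"
```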

dbaxps ( 2014-05-17 18:05:56 -0600 )

answered 2014-05-17 13:40:57 -0600 by hoott

If you mean in the packstack answer file that I used: they are all blank, and perhaps that is the issue…

[root@ostack01 ~(keystone_admin)]# cat packstack-answers-master-051414.txt | grep CONFIG_NEUTRON_OVS
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

