
Adding new compute nodes

asked 2016-06-19 20:19:36 -0500

Lee

Hi All,

I have already deployed OpenStack on two servers (say controller0 and compute0).

I want to add another compute node (say compute1) to run more instances.

The new compute node (compute1) uses the same configuration as the existing one (compute0); I only changed the IP addresses of the compute nodes (compute0 and compute1).

I want to verify my new compute node, but I am not sure it works that way.

How do I verify the compute nodes?

Thanks!


1 answer


answered 2016-06-19 21:16:10 -0500

updated 2016-06-20 06:05:21 -0500

The nova and neutron IP configuration has to change too.

Check the my_ip option in nova.conf, and also check the ml2_conf.ini or openvswitch_agent.ini file.

Then restart the nova and neutron services on compute1.

You can check all agents on the controller node (nova service-list / neutron agent-list).

There you can find compute1's state.
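The check above can be sketched against a saved copy of the `nova service-list` output (the table rows below are made-up sample data; run the real command on the controller):

```shell
# Save a copy of the service list (sample data stands in for real output).
cat > /tmp/service-list.txt <<'EOF'
| nova-compute | compute0 | nova | enabled | up |
| nova-compute | compute1 | nova | enabled | up |
EOF
# Pull out compute1's state column to confirm the new node reports "up".
awk -F'|' '$3 ~ /compute1/ {gsub(/ /, "", $6); print $6}' /tmp/service-list.txt   # -> up
```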

compute1 configuration:

  • nova.conf

First, copy nova.conf from compute0.

Second, change the my_ip option (my_ip = 192.168.0.70).

  • ml2_conf.ini

First, copy ml2_conf.ini from compute0.

Second, change the local_ip option (local_ip = {TUNNEL IP, 192.168.15.70}).
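The copy-and-edit steps above can be sketched as follows (the /tmp paths and seed file contents are stand-ins for this illustration; on a real node the files live under /etc/nova and /etc/neutron):

```shell
# Stand-ins for compute0's files (contents abbreviated for the sketch):
printf '[DEFAULT]\nmy_ip = 192.168.0.30\nrpc_backend = rabbit\n' > /tmp/nova.conf.compute0
printf '[ovs]\nlocal_ip = 192.168.15.30\n' > /tmp/ml2_conf.ini.compute0

# Step 1: copy compute0's configs; step 2: change only the IP options.
cp /tmp/nova.conf.compute0 /tmp/nova.conf
sed -i 's/^my_ip *=.*/my_ip = 192.168.0.70/' /tmp/nova.conf
cp /tmp/ml2_conf.ini.compute0 /tmp/ml2_conf.ini
sed -i 's/^local_ip *=.*/local_ip = 192.168.15.70/' /tmp/ml2_conf.ini

# Show the two edited options:
grep -h -e '^my_ip' -e '^local_ip' /tmp/nova.conf /tmp/ml2_conf.ini
# -> my_ip = 192.168.0.70
# -> local_ip = 192.168.15.70
```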


Comments

Thanks. Your suggestion worked for me. I have already applied it, and the compute nodes' state is normal. However, I am not sure how to create an instance on a specific compute node. Do you have any idea? Thanks!

Lee ( 2016-06-20 00:22:20 -0500 )

Oh... you want to create an instance on a chosen compute node?

To create an instance on a specific compute node, use the nova CLI...

for example $ nova boot --image <uuid> --flavor m1.tiny --key-name test --availability-zone nova:hostname

If you use Horizon, you can only set the instance count (2 or more)...

uni2u ( 2016-06-20 02:24:15 -0500 )

Thanks. The instances are created on the original compute node (compute0), but no instance gets scheduled to the additional compute node (compute1). When I check the new instance's state in the dashboard (Horizon), it is still "Scheduling". I think it is a network problem.

Lee ( 2016-06-20 03:04:23 -0500 )

I deleted the rabbitMQ configuration in neutron.conf on the additional compute node (compute1).

I changed it to the following:

#rpc_backend = rabbit

rabbit_host = controller

rabbit_password = RABBIT_PASS

If I do not comment out rpc_backend, neutron-plugin-openvswitch-agent just keeps waiting.
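For reference, a sketch of how this neutron.conf fragment commonly looks in install guides of that era (section names vary by release, and RABBIT_PASS is a placeholder):

```ini
[DEFAULT]
# RPC over RabbitMQ; the install guides keep this line uncommented.
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_password = RABBIT_PASS
```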

Lee ( 2016-06-20 03:10:29 -0500 )

The IPs of the nodes are as follows:

*one compute node (compute0):

my_ip: 192.168.0.30

local_ip (INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS) in ml2_conf.ini: 192.168.15.30


*another compute node (compute1):

my_ip: 192.168.0.70

local_ip in ml2_conf.ini:

Lee ( 2016-06-20 03:22:51 -0500 )

*controller node (controller):

my_ip: 192.168.0.10

local_ip in ml2_conf.ini: 192.168.15.10

Lee ( 2016-06-20 03:24:44 -0500 )

compute1 configuration:

  • nova.conf

First, copy nova.conf from compute0.

Second, change the my_ip option (my_ip = 192.168.0.70).

Don't comment out the rpc_backend = rabbit option.

  • ml2_conf.ini

First, copy ml2_conf.ini from compute0.

Second, change the local_ip option (local_ip = {TUNNEL IP, 192.168.15.70}).
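A quick sanity check for the rpc_backend point above can be sketched like this (the sample file and /tmp path are made up for illustration; check the real /etc/nova/nova.conf and /etc/neutron/neutron.conf):

```shell
# Sample copied config standing in for compute1's real file:
printf '[DEFAULT]\nmy_ip = 192.168.0.70\nrpc_backend = rabbit\n' > /tmp/nova.conf.compute1
# Fail loudly if rpc_backend is missing or commented out:
grep -q '^rpc_backend *= *rabbit' /tmp/nova.conf.compute1 && echo 'rpc_backend OK'
# -> rpc_backend OK
```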

uni2u ( 2016-06-20 06:07:20 -0500 )

I applied your answer, but the compute node (compute1) is still not working; neutron-plugin-openvswitch-agent is still waiting. I will look into more approaches and refer to your blog. I am Korean ^^.

Thanks!

Lee ( 2016-06-20 23:58:20 -0500 )

^^

Well, did you restart the openvswitch agent on compute1?

If compute1's configuration checks out, try:

$ sudo ovs-vsctl del-br br-int

$ sudo ovs-vsctl del-br br-tun

Then reboot compute1.

The configuration of compute0 and compute1 is the same at every point.

The only difference is the IP addresses (management IP / TUNNEL IP).

uni2u ( 2016-06-21 00:58:32 -0500 )

Thanks! This is the log from my compute node's openvswitch-agent.log: WARNING neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Unable to create tunnel port. Invalid remote IP: 192.168.15.20, 192.168.15.70. So I updated local_ip in the ml2_conf.ini file and restarted the agent service.

Lee ( 2016-06-27 00:28:34 -0500 )

However, the error still exists. Have you ever experienced this problem?

Lee ( 2016-06-27 00:28:48 -0500 )

First, check the network interfaces: $ sudo vi /etc/network/interfaces (check eth0/eth1/eth2)

The controller/compute0/compute1 nodes are connected through the tunnel (192.168.15.xx subnet range).

Second, check the OVS interfaces: $ sudo ovs-vsctl show

Do you see br-int and br-tun?
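The bridge check above can be sketched against a saved `ovs-vsctl show` dump (the dump below is abbreviated sample data; run the real command on each node):

```shell
# Save a copy of the ovs-vsctl show output (abbreviated sample):
cat > /tmp/ovs-show.txt <<'EOF'
    Bridge br-int
        Port br-int
    Bridge br-tun
        Port br-tun
EOF
# Count the bridges; both br-int and br-tun should be present.
grep -c 'Bridge br-' /tmp/ovs-show.txt   # -> 2
```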

uni2u ( 2016-06-27 19:00:54 -0500 )

Yes. The IPs of the nodes (controller, compute0, compute1) are on the same subnet (192.168.0.xx), and the tunnel IPs of the nodes are on the same subnet (192.168.15.xx). The network is operating properly (ping 192.168.15.xx or 192.168.0.xx works).

Lee ( 2016-06-27 19:31:11 -0500 )

I checked all the nodes (controller, compute0, compute1) using ovs-vsctl show. I saw both br-int and br-tun on each.

Lee ( 2016-06-27 19:34:49 -0500 )

I think you have checked all the nodes' configuration.

There is one last thing you can check: the database.

Check the neutron DB's ml2_gre or ml2_vxlan tables.

You can find the tunnel IPs there.

Then delete the stale tunnel data from the table.

But this way is high risk.

https://ask.openstack.org/en/question...
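The high-risk cleanup above, illustrated against a throwaway SQLite copy (real deployments use MySQL, and the exact table and column names, e.g. ml2_gre_endpoints / ip_address, vary by release, so verify the schema and back up the database first):

```shell
# Throwaway local database standing in for the real neutron DB:
rm -f /tmp/neutron-demo.db
sqlite3 /tmp/neutron-demo.db <<'EOF'
CREATE TABLE ml2_gre_endpoints (ip_address TEXT);
INSERT INTO ml2_gre_endpoints VALUES ('192.168.15.20');  -- stale tunnel IP
INSERT INTO ml2_gre_endpoints VALUES ('192.168.15.70');  -- current tunnel IP
-- Delete only the stale endpoint, then show what remains:
DELETE FROM ml2_gre_endpoints WHERE ip_address = '192.168.15.20';
SELECT ip_address FROM ml2_gre_endpoints;
EOF
# -> 192.168.15.70
```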

uni2u ( 2016-06-27 23:44:43 -0500 )


Stats

Seen: 557 times

Last updated: Jun 20 '16