
Multi-node deploy: instance spawning failure

asked 2011-04-05 08:56:53 -0500

hyunsun-moon

Hello,

I have deployed the latest nova on two host machines: one as the controller (nova-api, nova-scheduler, nova-network, nova-objectstore) and the other as the compute node (nova-compute, nova-scheduler).

The problem is that when I run an instance from the controller node, the following error occurs on the compute node. The nova-compute and nova-scheduler services were successfully added to the service table, but euca-describe-instances shows the instance state as "failed to spawn".

root@cloud02:~/openstack# nova-manage service list
cloud02 nova-scheduler enabled :-) 2011-04-05 09:01:01
cloud02 nova-network   enabled :-) 2011-04-05 09:01:01
cloud01 nova-scheduler enabled :-) 2011-04-05 08:59:23
cloud01 nova-compute   enabled :-) 2011-04-05 08:59:17

root@cloud02:~/openstack# euca-describe-instances
RESERVATION r-etss4atr NOVAADMPRJ default
INSTANCE i-00000002 ami-0640184b 10.0.0.3 10.0.0.3 failed to spawn novaadm (NOVAADMPRJ, cloud01) 0 m1.small 2011-04-05T08:27:10Z nova

=============================================================================

2011-04-05 17:48:41,923 AUDIT nova.compute.manager [M39CSSE2RGAXIFEYZ6HY novaadm NOVAADMPRJ] instance 2: starting...
2011-04-05 17:48:42,018 DEBUG nova.rpc [-] Making asynchronous call on network.cloud02 ... from (pid=4290) call /root/openstack/nova/nova/rpc.py:345
2011-04-05 17:48:42,019 DEBUG nova.rpc [-] MSG_ID is 1d0b70e00fc94c14a7ed0bc91a17dd60 from (pid=4290) call /root/openstack/nova/nova/rpc.py:348
2011-04-05 17:48:42,458 DEBUG nova.virt.libvirt_conn [-] instance instance-00000002: starting toXML method from (pid=4290) to_xml /root/openstack/nova/nova/virt/libvirt_conn.py:899
2011-04-05 17:48:42,531 DEBUG nova.virt.libvirt_conn [-] instance instance-00000002: finished toXML method from (pid=4290) to_xml /root/openstack/nova/nova/virt/libvirt_conn.py:941
2011-04-05 17:48:42,582 INFO nova [-] called setup_basic_filtering in nwfilter
2011-04-05 17:48:42,582 INFO nova [-] ensuring static filters
2011-04-05 17:48:42,648 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "apply"... from (pid=4290) inner /root/openstack/nova/nova/utils.py:594
2011-04-05 17:48:42,648 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "apply"... from (pid=4290) inner /root/openstack/nova/nova/utils.py:599
2011-04-05 17:48:42,653 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=4290) execute /root/openstack/nova/nova/utils.py:150
2011-04-05 17:48:42,665 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=4290) execute /root/openstack/nova/nova/utils.py:150
2011-04-05 17:48:42,677 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t nat from (pid=4290) execute /root/openstack/nova/nova/utils.py:150
2011-04-05 17:48:42,689 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=4290) execute /root/openstack/nova/nova/utils.py:150
2011-04-05 17:48:42,718 DEBUG nova.utils [-] Running cmd (subprocess): mkdir -p /root/openstack/state/instances/instance-00000002/ from (pid=4290) execute /root/openstack/nova/nova/utils.py:150
2011-04-05 17:48:42,726 INFO nova.virt.libvirt_conn [-] instance instance-00000002: Creating image
2011-04-05 17:48:42,776 DEBUG nova.utils [-] Attempting to grab semaphore "115ec19b" for method "call_if_not_exists"... from (pid=4290) inner /root/openstack/nova/nova/utils.py:594
2011-04-05 17:48:42,777 DEBUG nova.utils [-] Running cmd (subprocess): cp /root ... (more)


5 answers


answered 2011-04-07 00:33:00 -0500

hyunsun-moon

Great! I will try Glance, too.


answered 2011-04-06 15:55:11 -0500

vishvananda

FYI, you can use glance without swift. It has a local storage backend that will store the images on the filesystem of the glance host. Glad it is working though.

Vish
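For reference, the local storage backend mentioned above is glance's "file" store. A minimal glance-api.conf sketch, assuming the option names from the glance documentation of that era and an example data directory (adjust the path to your own host):

  [DEFAULT]
  # store image files on the local filesystem instead of Swift
  default_store = file
  # directory on the glance host where image files are written (example path)
  filesystem_store_datadir = /var/lib/glance/images/

Nova would then point at that glance host with the --glance_host flag shown in the earlier answer.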



answered 2011-04-06 07:19:42 -0500

hyunsun-moon

Thanks Vish, Everett. I solved this problem by mounting the controller's images directory on the compute host using NFS. Unfortunately, I have no Swift running on my system, so I cannot use Glance.
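For anyone following along, a rough sketch of that NFS setup, assuming the controller is cloud02, the compute node is cloud01, and the images live under the state path seen in the log (the exact directory depends on your --state_path / images settings):

On the controller (cloud02), export the directory and reload the export table:

  echo "/root/openstack/state/images cloud01(ro,sync,no_subtree_check)" >> /etc/exports
  exportfs -ra

On the compute node (cloud01), mount it at the same path so nova finds the images where it expects them:

  mkdir -p /root/openstack/state/images
  mount -t nfs cloud02:/root/openstack/state/images /root/openstack/state/images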


answered 2011-04-05 15:36:10 -0500

everett-toews

You can find more info on using Glance with Nova at Configuring Image Service and Storage for Compute (http://docs.openstack.org/trunk/openstack-compute-admin/content/configuring-compute-to-use-the-image-service.html) and Running Compute with the OpenStack Imaging Service (http://docs.openstack.org/trunk/openstack-compute-admin/content/ch_introduction-to-openstack-imaging-service.html).


answered 2011-04-05 15:21:39 -0500

vishvananda

You only need one scheduler. The error you are seeing occurs because the compute node on the second machine needs a way to retrieve images, either by sharing the folder that the local image service is using, or (recommended) by installing glance and using --image_service=nova.image.glance.GlanceService --glance_host=xxxx
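For illustration, in the compute node's flag file those two flags would look roughly like this, with cloud02 assumed here as the host running glance-api (substitute your own host for the xxxx above):

  --image_service=nova.image.glance.GlanceService
  --glance_host=cloud02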

Vish