unable to launch second instance from volume on a compute node

Hi,

As the title states, I am able to create a volume from an image and then launch an instance from that volume. But when I create another volume and try to launch an instance from it (on the SAME compute node where the first instance is running), I get the following error:

2018-09-07 11:12:58.482 1784942 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpZyBsLx/privsep.sock']
2018-09-07 11:12:58.979 1784942 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] Spawned new privsep daemon via rootwrap
2018-09-07 11:12:58.937 1785220 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep daemon starting
2018-09-07 11:12:58.941 1785220 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep process running with uid/gid: 0/0
2018-09-07 11:12:58.945 1785220 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2018-09-07 11:12:58.946 1785220 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep daemon running as pid 1785220
2018-09-07 11:13:00.682 1784942 INFO nova.virt.libvirt.driver [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Creating image
2018-09-07 11:13:00.692 1784942 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpSuoEAr/privsep.sock']
2018-09-07 11:13:01.176 1784942 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] Spawned new privsep daemon via rootwrap
2018-09-07 11:13:01.135 1785241 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep daemon starting
2018-09-07 11:13:01.139 1785241 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep process running with uid/gid: 0/0
2018-09-07 11:13:01.144 1785241 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
2018-09-07 11:13:01.144 1785241 INFO oslo.privsep.daemon [req-55dd7a10-c75c-4835-8ea8-64fc08d5b20f dc1fe14f20fb4c8a8f0353410c73c65a 4638da733c7f44488cf21f77e844d3cc - default default] privsep daemon running as pid 1785241
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [req-577b44bc-01d7-4aeb-8c84-f5e3cb7c0a9d fe87f17ed9a24e83a35c871726108067 5adead39cd364593ac89e8010e553ecb - default default] [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Instance failed to spawn: ProcessExecutionError: Unexpected error while running command.
Command: mount -t nfs 10.38.250.51:/data/volumes /var/lib/nova/mnt/07ab66f11b778324939eddc9c1015ac6
Exit code: 32
Stdout: u''
Stderr: u'mount.nfs: /var/lib/nova/mnt/07ab66f11b778324939eddc9c1015ac6 is busy or already mounted\n'
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Traceback (most recent call last):
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2251, in _build_resources
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     yield resources
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2031, in _build_and_run_instance
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     block_device_info=block_device_info)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3084, in spawn
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     mdevs=mdevs)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5382, in _get_guest_xml
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     context, mdevs)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5172, in _get_guest_config
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     flavor, guest.os_type)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4058, in _get_guest_storage_config
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     self._connect_volume(context, connection_info, instance)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1241, in _connect_volume
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     vol_driver.connect_volume(connection_info, instance)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume/fs.py", line 117, in connect_volume
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     self._mount_options(connection_info))
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume/mount.py", line 411, in mount
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     options)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume/mount.py", line 307, in mount
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     nova.privsep.fs.mount(fstype, export, mountpoint, options)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/oslo_privsep/priv_context.py", line 207, in _wrap
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     return self.channel.remote_call(name, args, kwargs)
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]   File "/usr/lib/python2.7/dist-packages/oslo_privsep/daemon.py", line 202, in remote_call
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158]     raise exc_type(*result[2])
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] ProcessExecutionError: Unexpected error while running command.
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Command: mount -t nfs 10.38.250.51:/data/volumes /var/lib/nova/mnt/07ab66f11b778324939eddc9c1015ac6
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Exit code: 32
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Stdout: u''
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Stderr: u'mount.nfs: /var/lib/nova/mnt/07ab66f11b778324939eddc9c1015ac6 is busy or already mounted\n'
2018-09-07 11:13:01.306 1784942 ERROR nova.compute.manager [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] 
2018-09-07 11:13:01.309 1784942 INFO nova.compute.manager [req-577b44bc-01d7-4aeb-8c84-f5e3cb7c0a9d fe87f17ed9a24e83a35c871726108067 5adead39cd364593ac89e8010e553ecb - default default] [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Terminating instance
2018-09-07 11:13:01.321 1784942 INFO nova.virt.libvirt.driver [req-577b44bc-01d7-4aeb-8c84-f5e3cb7c0a9d fe87f17ed9a24e83a35c871726108067 5adead39cd364593ac89e8010e553ecb - default default] [instance: 855de4fe-ba10-4aa5-8d91-74239b373158] Instance destroyed successfully.
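The "busy or already mounted" state can be checked directly on the compute node; a minimal sketch using the mount path copied from the log above (Linux-specific, reads `/proc/mounts`):

```shell
# Path taken from the error log above; check whether it is an active mount.
# /proc/mounts lists "device mountpoint fstype options 0 0" per line, so the
# mount point appears surrounded by spaces.
MNT=/var/lib/nova/mnt/07ab66f11b778324939eddc9c1015ac6
if grep -qs " $MNT " /proc/mounts; then
    echo "mounted"
else
    echo "not mounted"
fi
```

On a host where the NFS share is not mounted at that path this prints "not mounted"; on the affected compute node it shows whether the leftover mount from the first instance is still active.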

These are the commands that I am issuing:

cinder create --image-id <debian_image_id>  --display_name=test01 --availability-zone zone1 40
nova boot --flavor m1.medium --block-device source=volume,id=<id_from_test01>,dest=volume,size=40,shutdown=preserve,bootindex=0 --availability-zone zone1:compute1 --security-groups default --key-name <debian_key> test_inst01
cinder create --image-id <debian_image_id>  --display_name=test02 --availability-zone zone1 40
nova boot --flavor m1.medium --block-device source=volume,id=<id_from_test02>,dest=volume,size=40,shutdown=preserve,bootindex=0 --availability-zone zone1:compute1 --security-groups default --key-name <debian_key> test_inst02
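As a side note, both volumes live on the same NFS export, and Nova derives the per-export mount directory name from the export string, so both instances share one mount point. A sketch of that derivation (assuming the directory name is the md5 hex digest of the export, which is my reading of the Nova NFS volume driver; the result should match the directory name in the log):

```python
import hashlib

# The NFS export string from the error log above.
export = "10.38.250.51:/data/volumes"

# Assumption: Nova names the mount directory after the md5 hex digest of the
# export string, so every volume on this export maps to the same mount point.
mount_dir = hashlib.md5(export.encode("utf-8")).hexdigest()
print("/var/lib/nova/mnt/" + mount_dir)
```

That shared mount point is why the second boot trips over a mount that the first instance already holds.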

The last command fails with the log posted above. The OpenStack version is Queens.

The previous version was Pike, and I didn't have any problems creating several instances from volumes on the same compute node. This started happening after I updated the OpenStack version (cinder-volume) on the storage node.

Any idea why this is happening?