
Instance reboot failure using Ceph and Swift

asked 2015-02-20 11:03:27 -0500 by bobyakov, updated 2015-02-20 11:04:52 -0500

Hi All,

Pretty stumped and need help. I'm using Ceph for Cinder storage and Swift for Glance, and have a cluster set up for the controllers, running Icehouse and Ceph Firefly.

If I create a new instance after pointing the compute nodes at Ceph, everything works. I am also able to boot from a volume using a raw image.

ISSUE: I am unable to reboot or start an instance that was created before the compute node was pointed at Ceph/RBD. I receive the error below.

Why is the existing instance even looking for RBD/Ceph disk information when none exists yet? There are no volumes attached to this instance.

Error:

Failed to start domain instance-0000002a error: internal error: process exited while connecting to monitor: qemu-system-x86_64: -drive file=rbd:tst_datastore/883fe236-d35a-4c9a-b816-6edbd0f5d30d_disk:id=tst_ceph:key=AQD9BtVUiLPFNRAA7gcdq6bzDQwQ1uyVtCR3kw==:auth_supported=cephx\;none:mon_host=192.168.8.11\:6789\;192.168.8.12\:6789\;192.168.8.13\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none: error reading header from 883fe236-d35a-4c9a-b816-6edbd0f5d30d_disk
qemu-system-x86_64: -drive file=rbd:tst_datastore/883fe236-d35a-4c9a-b816-6edbd0f5d30d_disk:id=tst_ceph:key=AQD9BtVUiLPFNRAA7gcdq6bzDQwQ1uyVtCR3kw==:auth_supported=cephx\;none:mon_host=192.168.8.11\:6789\;192.168.8.12\:6789\;192.168.8.13\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none: could not open disk image rbd:tst_datastore/883fe236-d35a-4c9a-b816-6edbd0f5d30d_disk:id=tst_ceph:key=AQD9BtVUiLPFNRAA7gcdq6bzDQwQ1uyVtCR3kw==:auth_supported=cephx\;none:mon_host=192.168.8.11\:6789\;192.168.8.12\:6789\;192.168.8.13\:6789: Could not open 'rbd:tst_datastore/883fe236-d35a-4c9a-b816-6edbd0f5d30d_disk:i
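
To confirm what the error implies, it helps to check whether the RBD image named in the -drive line actually exists and whether the instance's disk is still a plain file on the compute node. A minimal diagnostic sketch, assuming the default instances directory under state_path=/var/lib/nova from the nova.conf below (pool, Ceph user, and UUID are taken from the error itself):

# Does the RBD image the -drive line refers to exist in the pool?
rbd --id tst_ceph -p tst_datastore ls | grep 883fe236-d35a-4c9a-b816-6edbd0f5d30d

# Is the instance's disk still a local file on the compute node?
ls -l /var/lib/nova/instances/883fe236-d35a-4c9a-b816-6edbd0f5d30d/

# Which disk source did libvirt regenerate into the domain XML?
virsh dumpxml instance-0000002a | grep -A 4 '<disk'

If the rbd ls comes back empty while the local disk file is still present, the domain XML was regenerated against a backend the disk was never migrated to.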

Compute node nova.conf:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
memcached_servers=192.168.8.2:11211
auth_strategy = keystone

glance_host = 192.168.8.2
glance_api_servers=192.168.8.2:9292

rpc_backend = rabbit
rabbit_hosts = 192.168.8.6,192.168.8.7
rabbit_password = password

#libvirt_images_type=rbd
#libvirt_images_rbd_pool=tst_datastore
#libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=tst_ceph
rbd_secret_uuid=Actual UUID #
#rbd_pool=tst_datastore
#libvirt_inject_password=false
#libvirt_inject_key=false
#libvirt_inject_partition=-2

my_ip = 192.168.8.17
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.8.17
novncproxy_base_url = http://24.246.X.X:6080/vnc_auto.html

network_api_class = nova.network.neutronv2.api.API
neutron_url = http://192.168.8.2:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = quantum
neutron_admin_password = password
neutron_admin_auth_url = http://192.168.8.2:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = password

[database]
connection = mysql://nova:password@192.168.8.2/nova

[keystone_authtoken]
auth_uri = http://192.168.8.2:5000
auth_host = 192.168.8.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password

Controller cinder.conf:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

volume_driver=cinder.volume.drivers.rbd ...

1 answer


answered 2015-02-20 12:57:49 -0500 by bobyakov

I believe I found the answer:

On the compute node, nova.conf needed the change below.

Instead of:

libvirt_images_type=rbd
libvirt_images_rbd_pool=tst_datastore

I used (leaving the libvirt_images_* options commented out):

rbd_user=username
rbd_secret_uuid=Secret UUID
rbd_pool=tst_datastore

This also answers the original question: with libvirt_images_type=rbd set, Nova treats every instance's ephemeral disk as RBD-backed, so a hard reboot regenerates the libvirt XML to point at rbd:tst_datastore/<uuid>_disk even for instances whose disks were created as local files before the switch. No such RBD image exists, hence the "error reading header" failure. With the option commented out, Nova falls back to file-backed disks, while rbd_user and rbd_secret_uuid still let Cinder RBD volumes attach.
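
An alternative, not tried here: if the goal is instead to keep libvirt_images_type=rbd and move the pre-existing instance into Ceph, its local disk can be imported into the image the regenerated XML expects. A hedged sketch only, assuming the default /var/lib/nova/instances layout, a qcow2 local disk (verify with qemu-img info first), and a qemu-img built with RBD support; names are taken from the error above:

# Inspect the existing local disk (format and backing file)
qemu-img info /var/lib/nova/instances/883fe236-d35a-4c9a-b816-6edbd0f5d30d/disk

# Flatten and convert it into the RBD image the -drive line points at
qemu-img convert -f qcow2 -O raw \
  /var/lib/nova/instances/883fe236-d35a-4c9a-b816-6edbd0f5d30d/disk \
  rbd:tst_datastore/883fe236-d35a-4c9a-b816-6edbd0f5d30d_disk:id=tst_ceph

Do this with the instance stopped, and keep it stopped until the convert finishes.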

