
Ceph OpenStack

asked 2017-02-22 14:25:24 -0500

mesutaygun

updated 2017-02-22 17:27:41 -0500

Call Trace:

[  842.758434]  [<ffffffff81834bf0>] ? bit_wait+0x60/0x60
[  842.758640]  [<ffffffff818343f5>] schedule+0x35/0x80
[  842.758837]  [<ffffffff81837515>] schedule_timeout+0x1b5/0x270
[  842.759066]  [<ffffffff813c7bd6>] ? submit_bio+0x76/0x170
[  842.759279]  [<ffffffff81834bf0>] ? bit_wait+0x60/0x60
[  842.759479]  [<ffffffff81833924>] io_schedule_timeout+0xa4/0x110
[  842.759709]  [<ffffffff81834c0b>] bit_wait_io+0x1b/0x70
[  842.759927]  [<ffffffff8183479d>] __wait_on_bit+0x5d/0x90
[  842.760255]  [<ffffffff8124a3a0>] ? blkdev_readpages+0x20/0x20
[  842.760498]  [<ffffffff8118e2db>] wait_on_page_bit+0xcb/0xf0
[  842.760728]  [<ffffffff810c4280>] ? autoremove_wake_function+0x40/0x40
[  842.760979]  [<ffffffff8118e509>] wait_on_page_read+0x49/0x50
[  842.761201]  [<ffffffff8118fb7d>] do_read_cache_page+0x8d/0x1b0
[  842.761466]  [<ffffffff8118fcb9>] read_cache_page+0x19/0x20
[  842.761691]  [<ffffffff813db16d>] read_dev_sector+0x2d/0x90
[  842.761910]  [<ffffffff813e1c1d>] read_lba+0x14d/0x210
[  842.762116]  [<ffffffff813e24d2>] efi_partition+0xf2/0x7d0
[  842.762334]  [<ffffffff81401a7b>] ? string.isra.4+0x3b/0xd0
[  842.762557]  [<ffffffff814039b9>] ? snprintf+0x49/0x60
[  842.762763]  [<ffffffff813e23e0>] ? compare_gpts+0x280/0x280
[  842.762984]  [<ffffffff813dc54e>] check_partition+0x13e/0x220
[  842.763208]  [<ffffffff813dba50>] rescan_partitions+0xc0/0x2b0
[  842.763436]  [<ffffffff8124b29d>] __blkdev_get+0x30d/0x460
[  842.763650]  [<ffffffff8124b85d>] blkdev_get+0x12d/0x340
[  842.763855]  [<ffffffff812295f9>] ? unlock_new_inode+0x49/0x80
[  842.764171]  [<ffffffff8124a238>] ? bdget+0x118/0x130
[  842.764370]  [<ffffffff813d94fe>] add_disk+0x3fe/0x490
[  842.764561]  [<ffffffff814d1b39>] ? vp_get+0x59/0x80
[  842.764747]  [<ffffffff81581bd2>] virtblk_probe+0x432/0x720
[  842.764951]  [<ffffffff814cda47>] virtio_dev_probe+0x127/0x1e0
[  842.765161]  [<ffffffff8155adf2>] driver_probe_device+0x222/0x4a0
[  842.765375]  [<ffffffff8155b0f4>] __driver_attach+0x84/0x90
[  842.765574]  [<ffffffff8155b070>] ? driver_probe_device+0x4a0/0x4a0
[  842.765793]  [<ffffffff81558a1c>] bus_for_each_dev+0x6c/0xc0
[  842.765995]  [<ffffffff8155a5ae>] driver_attach+0x1e/0x20
[  842.766187]  [<ffffffff8155a0eb>] bus_add_driver+0x1eb/0x280
[  842.766401]  [<ffffffff81fb14f1>] ? loop_init+0x170/0x170
[  842.766595]  [<ffffffff8155ba00>] driver_register+0x60/0xe0
[  842.766795]  [<ffffffff814cd7b0>] register_virtio_driver+0x20/0x30
[  842.767011]  [<ffffffff81fb1542>] init+0x51/0x7e
[  842.767185]  [<ffffffff81002123>] do_one_initcall+0xb3/0x200
[  842.767388]  [<ffffffff810a0385>] ? parse_args+0x295/0x4b0
[  842.767591]  [<ffffffff81f5d1a5>] kernel_init_freeable+0x173/0x212
[  842.767812]  [<ffffffff8182be70>] ? rest_init+0x80/0x80
[  842.768028]  [<ffffffff8182be7e>] kernel_init+0xe/0xe0
[  842.768296]  [<ffffffff8183888f>] ret_from_fork+0x3f/0x70
[  842.768494]  [<ffffffff8182be70>] ? rest_init+0x80/0x80

I am using a Ceph cluster with OpenStack. I configured cinder.conf and also connected Nova to Ceph, but I get the above error while the instance boots under Nova. All services are fine, but the VM does not get its disk from Ceph. Can you help me, please?
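
(For reference, a cinder.conf RBD backend section typically looks like the sketch below; the pool name, user, and secret UUID are placeholders, not values taken from this thread:)

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <your-libvirt-secret-uuid>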


Comments

Where does this stack trace occur? It looks like a kernel stack trace in a VM. I don't see anything Ceph- or Nova-related in this trace.

What do you mean by "nova opening"? What do you mean by "VM not take disk area from ceph" - are you trying to attach a volume to an instance, and it fails?

Bernd Bausch ( 2017-02-22 17:29:48 -0500 )

Yes, this is a kernel stack trace from the VM's init. But I am not sure where it comes from. I am using a Ceph-based disk volume. When I disconnect from Ceph and use local (Nova-based) disk volumes, all is fine. Do you have any idea what to check?

mesutaygun ( 2017-02-23 03:09:22 -0500 )

I understand that you booted this instance from a Cinder volume based on a Ceph backend. Correct? Or do you use Ceph as ephemeral storage?

Is there a panic message or other text message in the VM's message buffer?
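
(One way to get at the guest's message buffer without logging in is the instance console log; the server name below is a placeholder:)

openstack console log show test-vm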

Can you boot an instance with ephemeral storage, then attach a Ceph volume?
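
(Roughly like this, for example; the image, flavor, network, and names are placeholders:)

openstack server create --image cirros --flavor m1.tiny --network private test-vm
openstack volume create --size 1 test-vol
openstack server add volume test-vm test-vol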

Bernd Bausch ( 2017-02-23 06:04:15 -0500 )

Did you configure Nova for Ceph attachment?

[libvirt]
...
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
Bernd Bausch ( 2017-02-23 06:07:35 -0500 )
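
(For booting ephemeral disks from Ceph, the same [libvirt] section also needs the images_* options. A sketch using the conventional pool names from the Ceph documentation — the secret UUID here is the example value from the comment above; adapt both to your deployment:)

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337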

1 answer


answered 2017-03-01 17:22:18 -0500

mesutaygun

This problem turned out to be about MTU size. We changed the MTU setting on the physical switch (9214), and after that it was resolved. Thanks for the contribution.
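
(If anyone hits something similar: a quick way to verify that jumbo frames survive the whole path is a non-fragmenting ping. The sizes below assume a 9000-byte interface MTU — 8972 = 9000 minus 28 bytes of IP and ICMP headers — and the interface name and monitor address are placeholders:)

# check the interface MTU on the storage network
ip link show eth0
# probe the path with the don't-fragment bit set; any hop with a smaller MTU makes this fail
ping -M do -s 8972 -c 3 <ceph-mon-ip>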


