
Ceph does not respond after changing the CRUSH map

asked 2016-05-08 22:41:37 -0500

senyapsudah

Hi guys,

I'm looking for some guidance. I have deployed Ceph with 3 hosts and 20 OSDs, and I currently use it with OpenStack Mitaka. Each server has a mix of normal disks and SSDs. Our goal is to allow Cinder to create different disk types, so we changed the CRUSH map and imported it.
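Roughly, we followed the usual export/edit/import cycle (the file names here are just placeholders):

 # export and decompile the current CRUSH map
 ceph osd getcrushmap -o crushmap.bin
 crushtool -d crushmap.bin -o crushmap.txt
 # edit crushmap.txt to add the SSD root and rules, then recompile and load it
 crushtool -c crushmap.txt -o crushmap.new
 ceph osd setcrushmap -i crushmap.new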

Generating and importing the new map did not give us any issue. However, when we try to set crush_ruleset on our RBD pools, as per the command below:

ceph osd pool set ssd crush_ruleset 3

it seems like the Ceph cluster stops responding. Below is some of the output I get when I try to run any command:

 ceph osd pool set vms crush_ruleset 4
2016-05-09 11:05:22.032232 7f8fe5213700  0 monclient: hunting for new mon
2016-05-09 11:06:31.844513 7f8fe5213700  0 monclient: hunting for new mon
2016-05-09 11:06:31.844790 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fd400e630 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fd400fac0).fault
2016-05-09 11:06:34.468016 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc001480 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc002760).fault
2016-05-09 11:06:46.468355 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc0058d0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc006bb0).fault
2016-05-09 11:07:40.468688 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc007e60 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc004bb0).fault
2016-05-09 11:08:40.469123 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc006090 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc007e60).fault
2016-05-09 11:09:10.469410 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc009280 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc002d90).fault
2016-05-09 11:10:10.470039 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc006090 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc008870).fault
2016-05-09 11:10:40.469951 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc00a490 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc00b770).fault
2016-05-09 11:11:10.470270 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc006090 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc009580).fault
2016-05-09 11:12:10.470638 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc00a490 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc00d000).fault
2016-05-09 11:12:40.470853 ...

1 answer


answered 2017-05-26 06:33:16 -0500

bene

The error indicates that your monitors went offline: the client keeps "hunting for new mon" and failing to connect to 172.18.53.1 and 172.18.53.2.
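When every client command hangs like this, check the monitors directly on each monitor host rather than through the ceph CLI. A rough sketch, assuming a systemd install where the mon id is the short hostname (adjust to your setup):

 # is the monitor daemon actually running on this host?
 sudo systemctl status ceph-mon@$(hostname -s)
 # query the monitor over its admin socket; this works even without quorum
 sudo ceph daemon mon.$(hostname -s) mon_status
 # once a quorum of monitors is back, these should respond again
 ceph -s
 ceph quorum_status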

You should make sure your cluster is in a healthy state before you do this, and be careful if there is already data in the storage pools affected by this change - you may cause inadvertent data migrations. Are you sure that you have enough space in your pool?
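A quick way to check both points (output format varies a bit by release):

 ceph health detail   # overall cluster state and any warnings
 ceph df              # per-pool usage and available space
 ceph osd df tree     # per-OSD utilization, to spot nearly-full OSDs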

If your CRUSH map change is working, you should be able to see it with "ceph osd tree". Just because the CRUSH map was accepted doesn't mean it is behaving the way you want.
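To verify the rule itself, something along these lines (the rule number and replica count are only examples, and crushmap.bin is the compiled map you exported):

 ceph osd tree              # are the hosts/OSDs under the buckets you expect?
 ceph osd crush rule dump   # does the new rule exist and point at the right root?
 # dry-run the rule against the compiled map to see which OSDs it would select
 crushtool -i crushmap.bin --test --rule 3 --num-rep 3 --show-mappings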

I would first create a separate storage pool not used by OpenStack, assign it the new CRUSH ruleset, then test it with rados bench and make sure that data flows to the OSDs exactly as you expect. Only then should you try to move the OpenStack pools.
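A rough sketch of that test (the pool name, PG count and ruleset number are just examples):

 ceph osd pool create ssdtest 128 128
 ceph osd pool set ssdtest crush_ruleset 3
 # write some test data and check where it lands
 rados bench -p ssdtest 30 write --no-cleanup
 ceph pg ls-by-pool ssdtest   # PGs should map only to the intended OSDs
 # clean up afterwards
 ceph osd pool delete ssdtest ssdtest --yes-i-really-really-mean-it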


