Ceph does not respond after changing the CRUSH map

Hi guys,

Looking for some guidance. I have deployed Ceph with 3 hosts and 20 OSDs, and it is currently used with OpenStack Mitaka. Each server has a combination of normal disks and SSDs. Our goal is to let Cinder create volumes on different disk types, so we changed the CRUSH map (roughly as sketched below).
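
For reference, this is roughly the workflow we followed to edit the CRUSH map; the "ssd" root/rule name and ruleset id 3 are just what we used in our setup, not an exact copy of our map:

    # get the current map, decompile it, edit it, recompile and inject it back
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in crushmap.txt we added an ssd root containing the SSD OSDs and a rule like:
    #   rule ssd {
    #           ruleset 3
    #           type replicated
    #           min_size 1
    #           max_size 10
    #           step take ssd
    #           step chooseleaf firstn 0 type host
    #           step emit
    #   }
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new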

Compiling and importing the new CRUSH map did not raise any issue. However, when we try to set an RBD pool to use the new crush_ruleset with the command below:

ceph osd pool set ssd crush_ruleset 3

it seems like the Ceph cluster stops responding. Below is some of the output when I try to run any command:

 ceph osd pool set vms crush_ruleset 4
2016-05-09 11:05:22.032232 7f8fe5213700  0 monclient: hunting for new mon
2016-05-09 11:06:31.844513 7f8fe5213700  0 monclient: hunting for new mon
2016-05-09 11:06:31.844790 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fd400e630 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fd400fac0).fault
2016-05-09 11:06:34.468016 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc001480 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc002760).fault
2016-05-09 11:06:46.468355 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc0058d0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc006bb0).fault
2016-05-09 11:07:40.468688 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc007e60 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc004bb0).fault
2016-05-09 11:08:40.469123 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc006090 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc007e60).fault
2016-05-09 11:09:10.469410 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc009280 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc002d90).fault
2016-05-09 11:10:10.470039 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc006090 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc008870).fault
2016-05-09 11:10:40.469951 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc00a490 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc00b770).fault
2016-05-09 11:11:10.470270 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc006090 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc009580).fault
2016-05-09 11:12:10.470638 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc00a490 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc00d000).fault
2016-05-09 11:12:40.470853 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc006090 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc00db00).fault
2016-05-09 11:13:40.471328 7f8fe4110700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc00a490 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc002090).fault
2016-05-09 11:14:10.471688 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.2:6789/0 pipe(0x7f8fcc006090 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc00bcd0).fault
2016-05-09 11:15:10.471954 7f8fdd6ba700  0 -- 172.18.53.1:0/2812772969 >> 172.18.53.1:6789/0 pipe(0x7f8fcc00a490 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8fcc00f3e0).fault

Normally when we press Ctrl+C the command just stops, but this time it gives the following error:

2016-05-09 11:40:12.547127 7f4c2026f700  0 -- :/492500468 >> 172.18.53.2:6789/0 pipe(0x7f4c1c05a9b0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4c1c05bc90).fault
^CTraceback (most recent call last):
  File "/usr/bin/ceph", line 948, in <module>
    retval = main()
  File "/usr/bin/ceph", line 852, in main
    prefix='get_command_descriptions')
  File "/usr/lib/python2.7/dist-packages/ceph_argparse.py", line 1291, in json_command
    raise RuntimeError('"{0}": exception {1}'.format(argdict, e))
RuntimeError: "None": exception "['{"prefix": "get_command_descriptions"}']": exception You cannot perform that operation on a Rados object in state configuring.

Is there any way to solve this?

Thanks.