
Hi,

A common use case is using an SSD as a journal. These journal disks are often shared by several OSDs, with a recommended ratio of 3-4 OSDs per journal disk. With 12 OSDs journaling to a single SSD, one SSD failure takes down all 12 OSDs at once and risks a considerable amount of data loss. I would not recommend doing that; at the very least, add another SSD to your cluster design to get a 1:6 ratio.

Going back to how to add more than one OSD per journal.

When you create an OSD you need to specify a journal disk; that disk can be, and usually is, a partition on the SSD.
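That usually means partitioning the SSD beforehand. A minimal sketch, assuming the journal SSD is /dev/sdf (a hypothetical device name) and you want three 10 GB journal partitions; the typecode is the GPT partition type GUID Ceph uses for journals:

# Hypothetical: carve three 10 GB journal partitions out of /dev/sdf
sgdisk --new=1:0:+10G --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdf
sgdisk --new=2:0:+10G --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdf
sgdisk --new=3:0:+10G --typecode=3:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdf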

Prepare the disk, selecting a journal disk:

ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]
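For instance, with a hypothetical node named node1, a data disk sdb, and the first journal partition on the SSD:

ceph-deploy osd prepare node1:sdb:/dev/sdf1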

Then activate it on the SSD disk partition:

ceph-deploy osd activate {node-name}:{data-disk-partition}[:{journal-disk-partition}]
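Continuing the same hypothetical example, where /dev/sdb1 is the data partition that prepare created on sdb:

ceph-deploy osd activate node1:/dev/sdb1:/dev/sdf1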

Alternatively, you can do it with a single command that prepares and activates at once, where the path to the journal can be an SSD partition such as /dev/ssd1:

ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
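So to share one journal SSD between several OSDs, point each OSD at a different partition of the same SSD. Again a sketch with the hypothetical node1, data disks sdb/sdc/sdd, and journal SSD /dev/sdf:

ceph-deploy osd create node1:sdb:/dev/sdf1
ceph-deploy osd create node1:sdc:/dev/sdf2
ceph-deploy osd create node1:sdd:/dev/sdf3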

Adding a recommendation from the Ceph docs:

Note

When running multiple Ceph OSD daemons on a single node, and sharing a partitioned journal with each OSD daemon, you should consider the entire node the minimum failure domain for CRUSH purposes, because if the SSD drive fails, all of the Ceph OSD daemons that journal to it will fail too.
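If you do share a journal SSD like this, make sure CRUSH never places all replicas of an object on that node. A minimal ceph.conf sketch (this is the default in recent releases, shown only for clarity) that makes the host the replica failure domain:

[global]
# type 1 = host: replicas go to different hosts, so one failed
# journal SSD (and its OSDs) cannot take out every copy
osd crush chooseleaf type = 1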

Regards
