
Juan Carlos Franzoy's profile - activity

2018-05-31 08:24:13 -0500 received badge  Popular Question
2018-05-31 08:24:13 -0500 received badge  Notable Question
2017-11-23 16:08:37 -0500 asked a question Why does 'openstack overcloud deploy' try to modify ceph disks the second time it is run?

Hi.

Earlier this year I installed an OpenStack lab using tripleo newton. The lab had 1 controller node, 2 ceph nodes and 1 compute node.

After a while, I managed to make it work. Most of the trouble came from the fact that the servers assigned to ceph had little disk space, so I had to use an external iSCSI storage that was available. To make this work, I ran a script via OS::TripleO::Tasks::CephStoragePreConfig that brings up two iSCSI devices as /dev/sdc (data) and /dev/sdb (journal).
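For reference, this is roughly how I wired the hook up. I am reconstructing it from memory, and the template path, portal address and IQN below are placeholders rather than my real values:

    # environment file passed with '-e' to 'openstack overcloud deploy'
    resource_registry:
      OS::TripleO::Tasks::CephStoragePreConfig: /home/stack/templates/ceph-iscsi-preconfig.yaml

    # /home/stack/templates/ceph-iscsi-preconfig.yaml (sketch)
    heat_template_version: 2014-10-16
    parameters:
      servers:
        type: json
      input_values:
        type: json
        default: {}
    resources:
      CephIscsiConfig:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config: |
            #!/bin/bash
            set -eux
            # log in to the external iSCSI storage so the LUNs show up
            # as /dev/sdb (journal) and /dev/sdc (data)
            iscsiadm -m discovery -t sendtargets -p 192.0.2.50:3260
            iscsiadm -m node -T iqn.2017-01.com.example:ceph-lab -p 192.0.2.50:3260 --login
      CephIscsiDeployment:
        type: OS::Heat::SoftwareDeploymentGroup
        properties:
          config: {get_resource: CephIscsiConfig}
          servers: {get_param: servers}
          actions: ['CREATE', 'UPDATE']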

A few days ago, I tried to add a compute node to the lab. The baremetal provisioning and introspection in the undercloud didn't present any problem, but when I ran 'openstack overcloud deploy ...' it raised an error in CephStorage_Step3.

It seems that, on the storage nodes, it tried to run 'ceph-disk prepare', which failed because 'Device is mounted'.

I suppose it is wrong to run 'ceph-disk prepare' when the ceph nodes don't have any configuration change. So I modified /usr/share/openstack-puppet/modules/ceph/manifests/osd.pp to skip this command, and it worked!
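To illustrate what I expected instead: as far as I understand, the prepare step is meant to be idempotent, i.e. guarded by a check roughly like the following shell logic (my own reconstruction of the idea, not the module's literal code):

    disk=$(readlink -f /dev/sdc)
    # skip preparation when the device already holds a prepared or
    # active ceph data partition
    if ceph-disk list | grep -E "${disk}p?1 .*ceph data, (prepared|active)"; then
        echo "OSD already prepared on ${disk}, nothing to do"
    else
        ceph-disk prepare --cluster ceph \
            --cluster-uuid 70e88dc2-7c7d-11e7-a816-52540031dc55 /dev/sdc /dev/sdb
    fi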

Of course this is not a clean solution at all. I wonder what is wrong in my configuration, or whether there is a bug in puppet-ceph or tripleo.

I'll be glad to get some help to understand the problem.

Thanks.


DETAILS

In this lab I use 'ConfigDebug: true', so on the ceph nodes I get detailed puppet logs.
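I set it in an environment file; just for completeness, the snippet looks like this:

    parameter_defaults:
      ConfigDebug: true

This is a summary of those logs: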

...
Nov 23 00:05:08 overcloud-cephstorage-0 os-collect-config[1806]: dib-run-parts Thu Nov 23 00:05:08 UTC 2017 Running /usr/libexec/os-refresh-config/configure.d/55-heat-config
...
Nov 23 00:05:08 overcloud-cephstorage-0 os-collect-config[1806]: [2017-11-23 00:05:08,624] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/puppet < /var/lib/heat-config/deployed/4ef49b14-af59-4359-9b23-2b9380aa4955.json
...
Notice: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[ceph-osd-prepare-/dev/sdc]/returns: ++ readlink -f /dev/sdc
Notice: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[ceph-osd-prepare-/dev/sdc]/returns: + disk=/dev/sdc
Notice: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[ceph-osd-prepare-/dev/sdc]/returns: + test -b /dev/sdc
Notice: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[ceph-osd-prepare-/dev/sdc]/returns: ++ readlink -f /dev/sdc
Notice: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[ceph-osd-prepare-/dev/sdc]/returns: ++ readlink -f /dev/sdb
Notice: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[ceph-osd-prepare-/dev/sdc]/returns: + ceph-disk prepare --cluster ceph --cluster-uuid 70e88dc2-7c7d-11e7-a816-52540031dc55 /dev/sdc /dev/sdb
Notice: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[ceph-osd-prepare-/dev/sdc]/returns: ceph-disk: Error: Device is mounted: /dev/sdc1
...
Warning: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[fcontext_/dev/sdc]: Skipping because of failed dependencies
Warning: /Stage[main]/Ceph::Osds/Ceph::Osd[/dev/sdc]/Exec[ceph-osd-activate-/dev/sdc]: Skipping because of failed dependencies
Warning: /Firewall[998 log all]: Skipping because of failed dependencies
Warning: /Firewall[999 drop all]: Skipping because of failed dependencies

"deploy_status_code": 4

I modified /usr/share/openstack-puppet/modules/ceph/manifests ... (more)

2017-03-30 07:08:44 -0500 received badge  Enthusiast
2017-03-21 11:13:38 -0500 asked a question How to configure different disk assignments for different CephStorage nodes using tripleo newton

Hello, thanks for reading my question.

I am trying to install a small OpenStack laboratory at my company: 3 controller nodes, 2 ceph nodes and 1 compute node.

I've used 'ceph::profile::params::osds' to configure the devices to be used by CephStorage nodes. It worked!
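For context, the hieradata went in through an environment file along these lines (device names are just examples from my lab):

    parameter_defaults:
      ExtraConfig:
        ceph::profile::params::osds:
          '/dev/sdb': {}
          '/dev/sdc': {}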

Now I want to set up different devices on different CephStorage nodes. For example, use /dev/sdb in overcloud-cephstorage-0 and /dev/sdc in overcloud-cephstorage-1 (sketched below).
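In hieradata terms, the result I am after would be something like this, scoped per node (a sketch of the intent only; how to express that scoping is exactly my question):

    # wanted on overcloud-cephstorage-0 only:
    ceph::profile::params::osds:
      '/dev/sdb': {}

    # wanted on overcloud-cephstorage-1 only:
    ceph::profile::params::osds:
      '/dev/sdc': {}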

I read in the ceph documentation that this can be achieved using 'crush'. What I haven't figured out is how to do it using tripleo.

Note: I can redeploy the whole cloud if necessary.