
sunzen.wang's profile - activity

2018-12-24 00:08:50 -0500 received badge  Enlightened (source)
2018-12-24 00:08:50 -0500 received badge  Good Answer (source)
2018-09-12 12:40:19 -0500 received badge  Student (source)
2018-09-12 12:40:06 -0500 received badge  Nice Answer (source)
2016-08-09 02:57:17 -0500 received badge  Self-Learner (source)
2016-08-09 02:57:17 -0500 received badge  Teacher (source)
2016-07-12 05:22:50 -0500 received badge  Popular Question (source)
2016-07-12 05:22:50 -0500 received badge  Notable Question (source)
2016-04-15 03:41:15 -0500 commented answer Running the command glance image-list I get the error: finding address for http://controller:9292/v1/images/detail

How do you change permissions on a glance library file?

2016-02-27 21:29:25 -0500 received badge  Famous Question (source)
2016-02-27 21:29:25 -0500 received badge  Notable Question (source)
2016-01-28 10:33:51 -0500 received badge  Popular Question (source)
2016-01-20 02:08:53 -0500 answered a question failed to delete a volume as the backend lv is shown open

I got it. As a side note, it was helpful to post the question onto this site, even though there has been no response yet.

It is related to targetcli.

# targetcli
> ls
> cd /backstores/block
> ls
> delete iqn.2010-10.org.openstack:volume-a7c7ef18-e52f-4149-b610-8a3f1c3751b1

Now lvs shows that the logical volume is no longer open:

  volume-a7c7ef18-e52f-4149-b610-8a3f1c3751b1    cinder-volumes -wi-a----- 500.00g

I believe it can be deleted now.
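
For reference, a quick re-check before retrying the delete, as a sketch using the ids from my question:

    # the open count of the dm device should now be 0
    dmsetup info cinder--volumes-volume--a7c7ef18--e52f--4149--b610--8a3f1c3751b1 | grep "Open count"
    # the 6th lv_attr character should no longer be 'o'
    lvs -o lv_name,lv_attr cinder-volumes | grep a7c7e
    # then retry the delete with the cinder volume id (not the name_id)
    cinder delete 90b457db-fc41-412a-a9d2-aef64ac65133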

Still, it is puzzling that consistency is not well maintained; I suspect there is some issue inside Cinder. That's another thing to investigate.

2016-01-20 01:13:44 -0500 asked a question failed to delete a volume as the backend lv is shown open

Deleting a volume that was migrated from another backend to LVM fails, and cinder-volume.log shows that the backend logical volume is in use, as follows:

TRACE oslo.messaging.rpc.dispatcher Stderr: u'File descriptor 7 (/dev/urandom) leaked on lvremove invocation. Parent PID 27071: /usr/bin/python2\n Logical volume cinder-volumes/volume-a7c7ef18-e52f-4149-b610-8a3f1c3751b1 in use.\n'

Using dmsetup and lvs, it is clear that the logical volume is open.

# dmsetup info cinder--volumes-volume--a7c7ef18--e52f--4149--b610--8a3f1c3751b1
Name:              cinder--volumes-volume--a7c7ef18--e52f--4149--b610--8a3f1c3751b1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 8
Number of targets: 1

However, lsof shows no trace of the open, so I don't know which process opened the logical volume. I have not yet found any other way to clear the open state of the logical volume.

lvchange -an <given lv> didn't work. Neither did deleting the symlinks to the /dev/dm-* devices. Any information is appreciated.
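
For completeness, these are other places one could check for the opener (a sketch; dm-8 is derived from "Major, minor: 253, 8" in the dmsetup output above). They only cover userspace openers and device-mapper stacking, so a kernel-space user of the device would not show up here either:

    ls /sys/block/dm-8/holders        # devices stacked on top of this LV, if any
    dmsetup ls --tree                 # device-mapper stacking view
    fuser -v /dev/cinder-volumes/volume-a7c7ef18-e52f-4149-b610-8a3f1c3751b1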

The following is some additional information for reference. After migration from another backend to the LVM backend, the volume shows as available:

90b457db-fc41-412a-a9d2-aef64ac65133 |   available    | cyl-fs

os-vol-mig-status-attr:name_id shows a different id after migration:

|     os-vol-mig-status-attr:name_id    | a7c7ef18-e52f-4149-b610-8a3f1c3751b1 |

Information about the backend logical volume from lvs:

# lvs | grep a7c7e
  volume-a7c7ef18-e52f-4149-b610-8a3f1c3751b1    cinder-volumes -wi-ao---- 500.00g

Last but not least, not all volumes migrated from another backend to LVM end up in this state: among 5 migrated volumes, one could be deleted, as its backend logical volume's attributes are -wi-a----- (no open count).

2016-01-18 04:38:17 -0500 received badge  Famous Question (source)
2016-01-17 02:42:51 -0500 received badge  Famous Question (source)
2016-01-17 02:40:35 -0500 received badge  Notable Question (source)
2016-01-17 02:40:35 -0500 received badge  Popular Question (source)
2016-01-15 03:57:00 -0500 answered a question what's the good way to handle lock for Cinder LVM backend storage?

Let me post an update.

The 'pending' lock shows that some LV operation is still in progress. It turned out that deleting a volume takes a long time, because the backend LV is always zeroed according to the default configuration entry volume_clear=zero.

So it helps to change the configuration to volume_clear=none. Of course, that is only acceptable in the current environment.
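
For reference, a minimal sketch of the change, assuming the LVM backend section in /etc/cinder/cinder.conf is named [lvm] (adjust to whatever your enabled_backends section is called); keep in mind that skipping the wipe leaves the old data on the freed extents:

    # /etc/cinder/cinder.conf -- LVM backend section (section name is an assumption)
    [lvm]
    # default is volume_clear = zero, which wipes the whole LV on delete (slow for large volumes)
    volume_clear = none
    # alternatively, keep the wipe but only clear the first N MB of the LV:
    # volume_clear_size = 100

After changing it, restart openstack-cinder-volume for the new setting to take effect.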

2016-01-15 03:47:19 -0500 received badge  Editor (source)
2016-01-15 03:45:49 -0500 answered a question Error to rebuild an instance with volume attached (in Juno)

Found the same issue in the bug tracker:

https://bugs.launchpad.net/nova/+bug/1440762

After applying the fix to Juno, there is no problem now.

2016-01-15 03:42:14 -0500 commented question Error to rebuild an instance with volume attached (in Juno)

Thanks for your interest and information. I will post information about the fix in OpenStack.

2016-01-15 03:39:48 -0500 received badge  Notable Question (source)
2015-12-28 23:18:15 -0500 received badge  Popular Question (source)
2015-12-28 04:25:16 -0500 commented question Error to rebuild an instance with volume attached (in Juno)

Thanks for your attention. It's LVM.

2015-12-24 21:29:23 -0500 asked a question Error to rebuild an instance with volume attached (in Juno)

In the current environment, rebuilding an instance without any attached Cinder volume works fine. However, when a volume is attached, either a boot volume or a data volume, an error always occurs when the instance is rebuilt. The related log shows that it failed to destroy the instance:

2015-12-24 18:25:42.633 26486 ERROR oslo.messaging.rpc.dispatcher [req-67fec45f-c7ba-4c4f-9f4c-0a9875d94069 ] Exception during message handling: Failed to terminate process 36082 with SIGKILL: Device or resource busy
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 414, in decorated_function
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139, in inner
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     return func(*args, **kwargs)
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     payload)
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 298, in decorated_function
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     pass
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-12-24 18:25:42.633 26486 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_ ...
(more)
2015-12-22 21:28:02 -0500 asked a question what's the good way to handle lock for Cinder LVM backend storage?

Currently locking_type = 1 is set for LVM, but sometimes there are leftover lock files, such as:

# ls /run/lock/lvm/V_cinder-volumes
V_cinder-volumes      V_cinder-volumes:aux

This causes other LVM management operations like pvs/lvs to hang, and also leaves behind pending LVM operations submitted by the cinder-volume service, shown as follows:

# systemctl status openstack-cinder-volume
openstack-cinder-volume.service - OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled)
   Active: active (running) since Tue 2015-12-22 16:13:06 CST; 18h ago
 Main PID: 37633 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─  960 /sbin/lvremove --config activation { retry_deactivation = 1}  -f cinder-volumes/clone-snap-c2917d7a-8146-4bff-8e7b-6c68f581d565
           ├─  974 /sbin/vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix cinder-volumes
           ├─ 5455 /sbin/lvremove --config activation { retry_deactivation = 1}  -f cinder-volumes/volume-68309258-78a4-48bc-af7c-3ecce42a9800
           ├─ 5773 /sbin/lvremove --config activation { retry_deactivation = 1}  -f cinder-volumes/volume-d40612df-fefe-48ec-852a-aaca3b8d1a73
           ├─ 7328 /sbin/vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix cinder-volumes
           ├─ 8003 /sbin/lvcreate -n volume-3e7d1527-061f-4bff-aae2-c4486f8ef466 cinder-volumes -L 1g
           ├─ 8017 /sbin/lvs --noheadings --unit=g -o vg_name,name,size --nosuffix cinder-volumes
           ├─ 8030 /sbin/lvs --noheadings --unit=g -o vg_name,name,size --nosuffix cinder-volumes
           ├─37633 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           └─37649 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log

I wonder what the right way is to handle locking for the Cinder LVM backend storage.

Previously, when I wanted to run LVM management operations (to check what caused some Cinder volume operations to stop responding), I deleted those leftover lock files. Now I realize that doing so may cause even more leftover LVM operations submitted by the cinder-volume service. What a dilemma!
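
At least, before removing a lock file again, it seems safer to check whether some process still holds it; a sketch, assuming that with locking_type = 1 LVM takes an flock on these files, so a live holder keeps them open:

    lsof /run/lock/lvm/V_cinder-volumes 2>/dev/null
    fuser -v /run/lock/lvm/V_cinder-volumes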

Could anyone help me make LVM work well for Cinder? Locking is one point; any other information is appreciated.

Regards

Sunzen

2015-12-22 20:41:11 -0500 answered a question NoMoreTargets: An unknown exception occurred when attaching a volume to instance

No comment yet, so let me add a piece of update. Thinking it somewhat strange that there was a corresponding record in the iscsi_targets table, I decided to try deleting that record, and attaching the volume started to work again. However, I have not understood the root cause yet. A DB sync/update error?
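
For reference, a sketch of the workaround, assuming a MySQL backend and the default cinder database (take a DB backup first; <volume id> is the volume that fails to attach):

    # look for the stale row tied to that volume
    mysql cinder -e "SELECT id, target_num, host, volume_id FROM iscsi_targets WHERE volume_id = '<volume id>';"
    # what I did: remove that record, after which attaching worked again
    mysql cinder -e "DELETE FROM iscsi_targets WHERE volume_id = '<volume id>';"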

2015-12-22 20:39:12 -0500 commented question NoMoreTargets: An unknown exception occurred when attaching a volume to instance

No comment yet, so let me add a piece of update. Thinking it somewhat strange that there was a corresponding record in the iscsi_targets table, I decided to try deleting that record, and attaching the volume started to work again. However, I have not understood the root cause yet. A DB sync/update error?

2015-12-22 10:03:56 -0500 asked a question NoMoreTargets: An unknown exception occurred when attaching a volume to instance

The critical log on the related cinder volume node is as follows:

2015-12-22 18:15:48.911 43956 ERROR cinder.volume.manager [req-3173227d-7cfd-4549-8ab3-10821fff373c 6e34546a7f604db083db828e982c723c ef43a0aabdea40a29f1870a377db7e90 - - -] Unable to create export for volume eeacca5c-5e0c-49a2-a362-65b56f11b1ee
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager Traceback (most recent call last):
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 902, in initialize_connection
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     volume)
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     return f(*args, **kwargs)
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 548, in create_export
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     return self._create_export(context, volume)
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 560, in _create_export
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     self.configuration)
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/iscsi.py", line 42, in create_export
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     max_targets)
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/iscsi.py", line 148, in _get_target_and_lun
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     volume['host'])
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/db/api.py", line 167, in volume_allocate_iscsi_target
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     return IMPL.volume_allocate_iscsi_target(context, volume_id, host)
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/db/sqlalchemy/api.py", line 154, in wrapper
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     return f(*args, **kwargs)
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/db/sqlalchemy/api.py", line 210, in wrapped
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     return f(*args, **kwargs)
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/db/sqlalchemy/api.py", line 1014, in volume_allocate_iscsi_target
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager     raise exception.NoMoreTargets()
2015-12-22 18:15:48.911 43956 TRACE cinder.volume.manager NoMoreTargets: An unknown exception occurred.

The failing point is volume_allocate_iscsi_target in cinder/db/sqlalchemy/api.py:

def volume_allocate_iscsi_target(context, volume_id, host):
    session = get_session()
    with session.begin():
        iscsi_target_ref = model_query(context, models.IscsiTarget,
                                       session=session, read_deleted="no").\
            filter_by(volume=None).\
            filter_by(host=host).\
            with_lockmode('update').\
            first()

        # NOTE(vish): if with_lockmode isn't supported, as in sqlite,
        #             then this has concurrency issues
        if not iscsi_target_ref:
            raise exception.NoMoreTargets()

        iscsi_target_ref.volume_id ...
(more)
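
In other words, the exception is raised when the query above finds no free target row for the volume's host. A sketch of the equivalent manual check, assuming a MySQL backend and the default cinder database:

    mysql cinder -e "SELECT target_num, host, volume_id FROM iscsi_targets WHERE deleted = 0 AND volume_id IS NULL AND host = '<volume host>';"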