
ghostist's profile - activity

2019-06-19 03:50:01 -0500 received badge  Famous Question (source)
2016-12-19 06:48:04 -0500 received badge  Notable Question (source)
2016-11-23 04:06:16 -0500 received badge  Famous Question (source)
2016-09-28 20:08:09 -0500 received badge  Popular Question (source)
2016-09-08 12:06:03 -0500 received badge  Enthusiast
2016-08-30 13:53:33 -0500 received badge  Notable Question (source)
2016-08-30 13:49:41 -0500 asked a question Instance gets 2 ports attached, one of which is in DOWN status

[Test process]
1. Launch an instance in Horizon.
2. Select image, network, security rules, etc.

[Test result]
1. 2 IP addresses are shown as attached to the created instance (e.g. 10.156.95.47 and 10.156.95.48).
2. Go to the Network tab and check the ports: port 10.156.95.47 is in status DOWN while port 10.156.95.48 is in status UP (both are UP in admin status).
3. Only 10.156.95.48 is actually assigned to the instance.

How come 2 IP addresses are shown? I am using a provider network on OpenStack Mitaka, and I am not using any OVS tooling to configure the network. FYI, this issue does not occur when launching instances with the CLI; it happens only when instances are created through Horizon or a Heat template.
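(For reference, the duplicate port is also visible from the CLI; the UUIDs below are placeholders, not values from my environment:)

    # list all ports and their status; the stale one shows status DOWN
    neutron port-list

    # inspect the suspect port, including its device_id and binding details
    neutron port-show <port-uuid>

    # interfaces actually attached to the instance, as Nova sees them
    nova interface-list <instance-uuid>

    # once confirmed unused, the extra DOWN port can be removed
    neutron port-delete <port-uuid-of-down-port>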

2016-08-30 08:56:22 -0500 received badge  Supporter (source)
2016-08-10 14:25:12 -0500 received badge  Popular Question (source)
2016-08-09 07:45:39 -0500 answered a question iSCSI authorization issue prevents volumes from being attached to an instance

This issue is fixed; it was caused by the following three configuration problems:

  1. Cache in the iSCSI configuration --- now set to clear the cached records every time iSCSI restarts.
  2. The incoming user credential had to be set with CHAP on the iSCSI side (compute node) for authorization to succeed (see the sketch below).
  3. A daemon used by RDMA relies on the host name; the host name had been configured incorrectly and was fixed.

Thank you all! Hope this might be of help.
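For anyone hitting the same thing, a rough sketch of what fixes 1 and 2 look like on the compute node with open-iscsi (the target IQN, portal and credential values are placeholders; adjust them to your environment):

    # 1. drop the stale cached node record so old CHAP settings are not reused,
    #    then restart open-iscsi
    iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 -o delete
    service open-iscsi restart

    # 2. re-discover the target and set the CHAP credentials the target expects
    iscsiadm -m discovery -t sendtargets -p <portal-ip>:3260
    iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 \
        -o update -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 \
        -o update -n node.session.auth.username -v <chap-user>
    iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 \
        -o update -n node.session.auth.password -v <chap-password>

    # retry the login that was failing before
    iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 --login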

2016-08-08 10:20:16 -0500 asked a question iSCSI authorization issue prevents volumes from being attached to an instance

Issue description: when attaching an available volume to an instance, the status of the operation changes from "attaching" back to "available", and the volume never shows the instance it is supposed to be attached to.

Detailed description:

1. cinder configuration:

    [DEFAULT]
    rootwrap_config = /etc/cinder/rootwrap.conf
    api_paste_confg = /etc/cinder/api-paste.ini
    volume_name_template = volume-%s
    volume_group = cinder-volumes
    verbose = True
    auth_strategy = keystone
    state_path = /var/lib/cinder
    lock_path = /var/lock/cinder
    volumes_dir = /var/lib/cinder/volumes
    rpc_backend = rabbit
    my_ip = 10.156.95.101
    enabled_backends = lvm
    glance_api_servers = http://controller:9292

    [database]
    connection = mysql+pymysql://cinder:cinder@controller/cinder

    [oslo_messaging_rabbit]
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password = root@123

    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = cinder
    password = cinder

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    iscsi_protocol = iscsi
    iscsi_helper = tgtadm

    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp
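(For completeness, the lvm backend defined above can be sanity-checked with standard cinder client and LVM commands; nothing here is environment-specific beyond the volume group name:)

    # confirm the cinder-volume service for the lvm backend reports as "up"
    cinder service-list

    # confirm the volume group backing the LVM driver exists and has free space
    vgs cinder-volumes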


2. cinder log:

    root@fr-s-isp-opst-1:~# tail -f /var/log/cinder/cinder-volume.log
    2016-08-08 10:41:00.801 2058 WARNING cinder.volume.targets.tgt [req-775e93d3-ee40-45b4-8e54-be49614df3f7 224155b9063e4173b95da432995987fc c64329f25847469a964255fbc9144e1b - - -] Persistence file already exists for volume, found file at: /var/lib/cinder/volumes/volume-6f7759c9-77ae-41e0-8dd6-7cbfb08d9387
    2016-08-08 10:41:41.395 2058 INFO cinder.volume.manager [req-e56997af-8338-472c-9ee9-fad1e6174cb0 224155b9063e4173b95da432995987fc c64329f25847469a964255fbc9144e1b - - -] Terminate volume connection completed successfully.
    2016-08-08 10:42:21.739 2058 INFO cinder.volume.manager [req-775e93d3-ee40-45b4-8e54-be49614df3f7 224155b9063e4173b95da432995987fc c64329f25847469a964255fbc9144e1b - - -] Initialize volume connection completed successfully.

Info about the volume created on the storage node, from tgtadm:

    root@fr-s-isp-opst-1:~# tgtadm --lld iscsi --op show --mode target
    Target 1: iqn.2010-10.org.openstack:volume-6f7759c9-77ae-41e0-8dd6-7cbfb08d9387
        System information:
            Driver: iscsi
            State: ready
        I_T nexus information:
        LUN information:
            LUN: 0
                Type: controller
                SCSI ID: IET 00010000
                SCSI SN: beaf10
                Size: 0 MB, Block size: 1
                Online: Yes
                Removable media: No
                Prevent removal: No
                Readonly: No
                SWP: No
                Thin-provisioning: No
                Backing store type: null
                Backing store path: None
                Backing store flags:
            LUN: 1
                Type: disk
                SCSI ID: IET 00010001
                SCSI SN: beaf11
                Size: 107374 MB, Block size: 512
                Online: Yes
                Removable media: No
                Prevent removal: No
                Readonly: No
                SWP: No
                Thin-provisioning: No
                Backing store type: rdwr
                Backing store path: /dev/cinder-volumes/volume-6f7759c9-77ae-41e0-8dd6-7cbfb08d9387
                Backing store flags:
        Account information:
            D7Qu6WRXHCAnnFttHC5b
        ACL information:
            ALL
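(For reference, the CHAP account listed under "Account information" is created and bound to the target roughly as shown below with tgtadm; cinder's tgt helper normally does this from the "incominguser" line in the persistence file, and the password here is assumed to be the one paired with this user on the initiator side:)

    # create the CHAP account on the target host
    tgtadm --lld iscsi --op new --mode account \
        --user D7Qu6WRXHCAnnFttHC5b --password yo9a43RpfdA4nNah

    # bind it to target id 1 (the tid shown in the output above) as an incoming user
    tgtadm --lld iscsi --op bind --mode account --tid 1 --user D7Qu6WRXHCAnnFttHC5b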

Error message when I try to log in from the compute node (iSCSI initiator) to the storage node (target):

    root@fr-s-it-os-cmp1:~# iscsiadm -m node --login
    Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-6f7759c9-77ae-41e0-8dd6-7cbfb08d9387, portal: 10.156.95.101,3260] (multiple)
    iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-6f7759c9-77ae-41e0-8dd6-7cbfb08d9387, portal: 10.156.95.101,3260].
    iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
    iscsiadm: Could not log into all portals

iSCSI configuration (/etc/iscsi/iscsi.conf) on the compute node, authorization section:

    node.session.auth.authmethod = CHAP

    # To set a CHAP username and password for initiator
    # authentication by the target(s), uncomment the following lines:
    node.session.auth.username = D7Qu6WRXHCAnnFttHC5b
    node.session.auth.password = yo9a43RpfdA4nNah

    # To set a CHAP username and password for target(s)
    # authentication by the initiator, uncomment the following lines:
    node.session.auth.username_in = D7Qu6WRXHCAnnFttHC5b
    node.session.auth.password_in = yo9a43RpfdA4nNah
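(Note: as far as I know, settings in this file only act as defaults for node records created at discovery time; a record discovered earlier keeps whatever auth values it had. The record that the failing login actually uses can be inspected directly, with the IQN and portal taken from the error above:)

    # print the stored node record, including the node.session.auth.* values in effect
    iscsiadm -m node \
        -T iqn.2010-10.org.openstack:volume-6f7759c9-77ae-41e0-8dd6-7cbfb08d9387 \
        -p 10.156.95.101:3260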

Volume information on the storage node: vi /var ...

2016-08-08 10:20:14 -0500 asked a question volume attachment issue in cinder
