
Unable to access NetApp LUN on cloud controller

asked 2011-09-08 12:21:34 -0500 by av-sudhir

Hi All,

I have a working cloud with a controller & a compute node. Now I am trying to attach a NetApp LUN to the cloud controller before using the same LUN for nova-volume.

The OS is Ubuntu 11.04 on both nodes.
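Both nodes need the open-iscsi initiator tools for the steps below; on Ubuntu 11.04 that is simply (assuming the stock repositories):

    sudo apt-get install open-iscsi    # provides the iscsiadm and iscsi-iname commands used below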


root@in01emt16:/etc/iscsi# euca-describe-availability-zones verbose
AVAILABILITYZONE    nova    available
AVAILABILITYZONE    |- in01emt16
AVAILABILITYZONE    | |- nova-scheduler    enabled :-) 2011-09-08 11:40:40
AVAILABILITYZONE    | |- nova-network      enabled :-) 2011-09-08 11:40:41
AVAILABILITYZONE    | |- nova-volume       enabled :-) 2011-09-08 11:40:40
AVAILABILITYZONE    |- in01emt17
AVAILABILITYZONE    | |- nova-compute      enabled :-) 2011-09-08 11:40:37


I am following these steps on the initiator side:

The LUN has already been created on the NetApp (target) box by the storage admin.
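For reference, the target-side provisioning usually looks roughly like this. This is only a sketch assuming a 7-mode Data ONTAP filer, written from memory; the volume path, igroup name and LUN ID are made-up placeholders, not values from our setup:

    in01filer04> lun create -s 200g -t linux /vol/vol1/nova_lun0
    in01filer04> igroup create -i -t linux ig_nova iqn.2005-03.org.open-iscsi:b0cecc8a9df
    in01filer04> lun map /vol/vol1/nova_lun0 ig_nova 0

A LUN is only visible to initiators whose IQN appears in an igroup it is mapped to, which matters later in this question.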


  1. Ping in01filer04 (the NetApp filer):

         PING in01filer04 (10.144.201.215) 56(84) bytes of data.
         64 bytes from in01filer04 (10.144.201.215): icmp_req=1 ttl=252 time=0.208 ms
         64 bytes from in01filer04 (10.144.201.215): icmp_req=2 ttl=252 time=0.202 ms

  2. Generate an initiator name:

         root@in01emt16:~# iscsi-iname
         iqn.2005-03.org.open-iscsi:b0cecc8a9df

     Use the same iSCSI name for the initiator in /etc/iscsi/initiatorname.iscsi:

         root@in01emt16:~# cat /etc/iscsi/initiatorname.iscsi
         InitiatorName=iqn.2005-03.org.open-iscsi:b0cecc8a9df

  3. Discover the target:

         root@in01emt16:/etc/iscsi# iscsiadm -m discovery -t st -p 10.144.201.215
         10.144.201.215:3260,2000 iqn.1992-08.com.netapp:sn.135066996

  4. List the discovered node:

         root@in01emt16:/etc/iscsi# iscsiadm -m node
         10.144.201.215:3260,2000 iqn.1992-08.com.netapp:sn.135066996

  5. Log in to the target:

         root@in01emt16:/etc/iscsi# iscsiadm -m node -T iqn.1992-08.com.netapp:sn.135066996 -p 10.144.201.215:3260 --login
         Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.135066996, portal: 10.144.201.215,3260]
         Login to [iface: default, target: iqn.1992-08.com.netapp:sn.135066996, portal: 10.144.201.215,3260]: successful

  6. Check the log (a session-check sketch follows these steps):

         tail -f /var/log/syslog
         Sep 8 17:34:42 in01emt16 kernel: [ 624.070725] scsi4 : iSCSI Initiator over TCP/IP
         Sep 8 17:34:42 in01emt16 iscsid: connection2:0 is operational now
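After a successful login, a quick sanity check on the initiator side is to dump the session and rescan it. This is a generic sketch, not output from my boxes; session and host numbers will differ:

    sudo iscsiadm -m session -P 3            # session details, including any attached SCSI devices
    ls -l /dev/disk/by-path/ | grep iscsi    # iSCSI-backed block devices show up here
    sudo iscsiadm -m session --rescan        # rescan in case the LUN was mapped after login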


Though authentication and login go well, I am still unable to see the LUN when I run "fdisk -l".

If I change the initiator name in /etc/iscsi/initiatorname.iscsi to the NetApp target's IQN, iqn.1992-08.com.netapp:sn.135066996, then I see the following in


tail -f /var/log/syslog

Sep 8 17:38:51 in01emt16 kernel: [ 873.357048] scsi6 : iSCSI Initiator over TCP/IP
Sep 8 17:38:51 in01emt16 kernel: [ 873.612856] scsi 6:0:0:0: Direct-Access NETAPP LUN 7320 PQ: 0 ANSI: 4
Sep 8 17:38:51 in01emt16 kernel: [ 873.613079] sd 6:0:0:0: Attached scsi generic sg4 type 0
Sep 8 17:38:51 in01emt16 kernel: [ 873.613729] sd 6:0:0:0: [sdc] 419430400 512-byte logical blocks: (214 GB/200 GiB)
Sep 8 17:38:51 in01emt16 kernel: [ 873.613983] sd 6:0:0:0: [sdc] Write Protect is off
Sep 8 17:38:51 in01emt16 kernel: [ 873.613988] sd 6:0:0:0: [sdc] Mode Sense: bd 00 00 08
Sep 8 17:38:51 in01emt16 kernel: [ 873.614549] sd 6:0:0:0: [sdc] Write ...
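My reading of this (an assumption, not confirmed anywhere in this thread) is that if the LUN only appears when the initiator name is set to the target's own IQN, the filer's igroup probably maps the LUN to that IQN rather than to my generated initiator name. Assuming a 7-mode filer, the storage admin could check the mapping with something like:

    in01filer04> igroup show    # igroups and the initiator IQNs they contain
    in01filer04> lun show -m    # LUN-to-igroup mappings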


2 answers


answered 2011-09-14 10:25:27 -0500 by av-sudhir

Hi All,

Finally, I am able to access the NetApp iSCSI LUN from the cloud controller and attach it to an instance, but only if nova-compute is running on the controller. If I have an external compute node outside the controller, the same nova-volume does not get attached to the instance running on the external compute node.

On the NetApp filer the initiator names of both the controller & the compute node are included, so both can see the iSCSI LUN from the filer; that's not the issue.

If nova-volume is configured on the controller, the same volume is available on the compute node, and also to the instances that come up on the compute node.

But with the iSCSI LUN from the filer, when I attach the volume to an instance the command doesn't give any error, yet the volume does not actually get attached to the instance.

Theoretically, when we create an LVM volume through OpenStack's euca-create-volume, the same volume should be available both on the controller (10.144.199.85) and on the compute node (10.144.199.86).

At present the issue is that the volume is only accessible on the controller (10.144.199.85) and not on the external compute node (10.144.199.86).

What could be the cause of my issue? Has anyone faced a similar issue?

root@in01emt17:~# euca-attach-volume -i i-00000023 -d /dev/vdc vol-0000000b
VOLUME  vol-0000000b

root@in01emt16:~# tail -f /var/log/nova/nova-api.log

2011-09-14 15:40:04,710 DEBUG nova.auth.manager [-] Looking up user: u'd6729565-6494-459e-8bbd-6023b7dd3a2d' from (pid=7108) authenticate /usr/lib/pymodules/python2.7/nova/auth/manager.py:268
2011-09-14 15:40:04,713 DEBUG nova.auth.manager [-] user: User('novaadmin', 'novaadmin', 'd6729565-6494-459e-8bbd-6023b7dd3a2d', '117954fd-a45a-4f18-b8f0-992cb5967729', True) from (pid=7108) authenticate /usr/lib/pymodules/python2.7/nova/auth/manager.py:270
2011-09-14 15:40:04,737 DEBUG nova.signer [-] using _calc_signature_2 from (pid=7108) _calc_signature_2 /usr/lib/pymodules/python2.7/nova/auth/signer.py:126
2011-09-14 15:40:04,738 DEBUG nova.signer [-] query string: AWSAccessKeyId=d6729565-6494-459e-8bbd-6023b7dd3a2d%3Aproj&Action=AttachVolume&Device=%2Fdev%2Fvdc&InstanceId=i-00000023&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2011-09-14T10%3A10%3A04&Version=2009-11-30&VolumeId=vol-0000000b from (pid=7108) _calc_signature_2 /usr/lib/pymodules/python2.7/nova/auth/signer.py:142
2011-09-14 15:40:04,738 DEBUG nova.signer [-] string_to_sign: GET 10.144.199.85:8773 /services/Cloud/ AWSAccessKeyId=d6729565-6494-459e-8bbd-6023b7dd3a2d%3Aproj&Action=AttachVolume&Device=%2Fdev%2Fvdc&InstanceId=i-00000023&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2011-09-14T10%3A10%3A04&Version=2009-11-30&VolumeId=vol-0000000b from (pid=7108) _calc_signature_2 /usr/lib/pymodules/python2.7/nova/auth/signer.py:144
2011-09-14 15:40:04,738 DEBUG nova.signer [-] len(b64)=44 from (pid=7108) _calc_signature_2 /usr/lib/pymodules/python2.7/nova/auth/signer.py:147
2011-09-14 15:40:04,738 DEBUG nova.signer [-] base64 encoded digest: RAfWnyZHQ2TRpjLr7aUO3JYuGTj8hvTWbi/Ua0ao9y4= from (pid=7108) _calc_signature_2 /usr/lib/pymodules/python2.7/nova/auth/signer.py:148
2011-09-14 15:40:04,739 DEBUG nova.auth.manager [-] user.secret: 117954fd-a45a-4f18-b8f0-992cb5967729 from (pid=7108) authenticate /usr/lib/pymodules/python2.7/nova/auth/manager.py:314
2011-09-14 15:40:04,739 DEBUG nova.auth.manager [-] expected_signature: RAfWnyZHQ2TRpjLr7aUO3JYuGTj8hvTWbi/Ua0ao9y4= from (pid=7108) authenticate /usr/lib ...
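For what it's worth, nova-compute performs the iSCSI discovery itself when attaching (the traceback in my next post shows the exact command), so one simple check is whether the external compute node can discover the controller's iSCSI export at all. A sketch reusing the controller address from above; the target name in the comment is only an example of what a nova-volume export typically looks like, not confirmed output:

    root@in01emt17:~# sudo iscsiadm -m discovery -t sendtargets -p 10.144.199.85
    # expected: a line like 10.144.199.85:3260,1 iqn.2010-10.org.openstack:volume-0000000b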


answered 2011-09-16 05:56:10 -0500 by av-sudhir

I noticed the following error in nova-volume.log, added the NetApp filer IP to /etc/hosts on the instance, and reattached the volume, which then attached fine. I then tried creating new instances, and volumes attach with no issues now:

2011-09-15 12:13:11,853 DEBUG nova.utils [-] Running cmd (subprocess): sudo iscsiadm -m discovery -t sendtargets -p in01emt16 from (pid=1755) execute /usr/lib/pymodules/python2.7/nova/utils.py:150
2011-09-15 12:13:11,873 ERROR nova [-] Exception during message handling
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/rpc.py", line 188, in _receive
(nova): TRACE:     rval = node_func(context=ctxt, *node_args)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 105, in decorated_function
(nova): TRACE:     function(self, context, instance_id, *args, **kwargs)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 743, in attach_volume
(nova): TRACE:     volume_id)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/volume/manager.py", line 164, in setup_compute_volume
(nova): TRACE:     path = self.driver.discover_volume(context, volume_ref)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/volume/driver.py", line 446, in discover_volume
(nova): TRACE:     iscsi_properties = self._get_iscsi_properties(volume)
(nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/volume/driver.py", line 407, in _get_iscsi_properties
(nova): TRACE:     (volume['name']))
(nova): TRACE: Error: Could not find iSCSI export for volume volume-0000000b
(nova): TRACE:
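Concretely, the failing step was the discovery against the bare hostname in01emt16, so the fix amounts to making the hostnames used in this setup resolvable. A sketch of the /etc/hosts entries, with the IP-to-hostname pairs taken from earlier posts in this thread:

    10.144.199.85    in01emt16      # cloud controller / nova-volume host
    10.144.201.215   in01filer04    # NetApp filer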

