
Nova attaching to an incorrect cinder-volume node

asked 2018-04-18 12:32:15 -0500 by jhawk

Hello,

Firstly, I'm really new to OpenStack, so please forgive some amount of ignorance. I'm running the latest OpenStack available in the apt repository on the latest Ubuntu server release. I have set up 9 nova-compute nodes, 5 block storage nodes, and 4 object storage nodes, plus an array of most of the other services. The storage nodes are doubling as compute nodes. I've pretty much just run through the installation guide and I'm testing the installation prior to deployment; this is a test environment.

Note: This is just what I suspect the problem is... I'm still new to this stack, so I'm not 100% certain.

The problem I'm running into is that "sometimes" when I launch an instance, nova-compute appears to try to attach to a cinder-volume node that is not the node where the volume was created. Other times it gets everything right and the instance launches successfully. Here are some snippets from the log(s).

From nova-compute.log on the target compute node

WARNING os_brick.initiator.connectors.iscsi [req-f5c10838-e3a3-4f55-8008-037ac4a2a369 80f08fd201d04752b983f355829d255b 91a13260197146038d8ec1f031e93673 - default default] Failed to connect to iSCSI portal 10.0.22.3:3260.
WARNING os_brick.initiator.connectors.iscsi [req-f5c10838-e3a3-4f55-8008-037ac4a2a369 80f08fd201d04752b983f355829d255b 91a13260197146038d8ec1f031e93673 - default default] Couldn't find iscsi sessions because iscsiadm err: iscsiadm: No active sessions.
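
(For what it's worth, running iscsiadm by hand on the compute node shows the same thing the warning reports -- no active session. The hostname in the prompt below is just illustrative for one of my compute nodes.)

root@nova1:~# iscsiadm -m session
iscsiadm: No active sessions.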

As you can see above, it's trying to connect to the iSCSI portal at 10.0.22.3. But when we inspect the cinder information for the volume, it's hosted on a different node.

root@control1:~# cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 10c30683-b3f5-403f-9ab4-f8b6c814af35 | in-use |      | 10   | -           | true     | e48090b2-f715-4f3f-98ec-328646d7d136 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
root@control1:~# cinder show 10c30683-b3f5-403f-9ab4-f8b6c814af35
+--------------------------------+-------------------------------------------------+
| Property                       | Value                                           |
+--------------------------------+-------------------------------------------------+
| attached_servers               | ['e48090b2-f715-4f3f-98ec-328646d7d136']        |
| attachment_ids                 | ['10c30683-b3f5-403f-9ab4-f8b6c814af35']        |
| availability_zone              | nova                                            |
| bootable                       | true                                            |
| consistencygroup_id            | None                                            |
| created_at                     | 2018-04-18T17:22:13.000000                      |
| description                    |                                                 |
| encrypted                      | False                                           |
| id                             | 10c30683-b3f5-403f-9ab4-f8b6c814af35            |
| metadata                       | attached_mode : rw                              |
| migration_status               | None                                            |
| multiattach                    | False                                           |
| name                           |                                                 |
| os-vol-host-attr:host          | nova5@lvm#LVM                                   |
| os-vol-mig-status-attr:migstat | None                                            |
| os-vol-mig-status-attr:name_id | None                                            |
| os-vol-tenant-attr:tenant_id   | 91a13260197146038d8ec1f031e93673                |
| replication_status             | None                                            |
| size                           | 10                                              |
| snapshot_id                    | None                                            |
| source_volid                   | None                                            |
| status                         | in-use                                          |
| updated_at                     | 2018-04-18T17:22:32.000000                      |
| user_id                        | 80f08fd201d04752b983f355829d255b                |
| volume_image_metadata          | checksum : ee1eca47dc88f4879d8a229cc70a07c6     |
|                                | container_format : bare                         |
|                                | disk_format : qcow2                             |
|                                | image_id : 090d0acd-27c2-45eb-8971-105346a47a9c |
|                                | image_name : cirros                             |
|                                | min_disk : 0                                    |
|                                | min_ram : 0                                     |
|                                | size : 13287936                                 |
| volume_type                    | None                                            |
+--------------------------------+-------------------------------------------------+
root@control1:~# ping nova5
PING nova5 (10.0.22.7) 56(84) bytes of data.
64 bytes from nova5 (10.0.22.7): icmp_seq=1 ttl=64 time=0.636 ms
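
In case it helps, this is roughly how I'm comparing the portal address nova uses with the node cinder reports the volume on. The option names and the discovery call are my guesses based on the install guide's LVM/tgt layout, so they may differ by release:

root@nova5:~# grep -E '^(my_ip|iscsi_ip_address|target_ip_address)' /etc/cinder/cinder.conf
root@control1:~# iscsiadm -m discovery -t sendtargets -p nova5:3260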

Once more, I'm not sure this is the correct diagnosis. I'm looking for some basic help on where to look to troubleshoot this, or whether running multiple cinder-volume nodes requires additional configuration not mentioned in the install guides.

Thanks in advance!


1 answer


answered 2018-04-19 14:33:15 -0500 by jhawk

To answer myself (and leave the solution here for the next guy):

TL;DR: my_ip was set incorrectly on one of the nodes.

Details: The problem was a bone-headed configuration mistake. It seems that -- for the most part -- the cinder statuses will look A-OK even with my_ip set incorrectly on one of the nodes. However, nova will use that address as the connection point for the iSCSI portal. This leads to a system that looks and functions correctly at times, but when the cinder-scheduler places a volume on the misconfigured node, the attach fails with the error above.
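
For the next person, the fix on my setup was just correcting that one line in the storage node's cinder.conf and restarting cinder-volume. The IP below is illustrative (it should be the node's own management address); as far as I can tell the LVM driver's iscsi_ip_address / target_ip_address defaults to $my_ip, which is why nova is told to open the iSCSI session to that address.

# /etc/cinder/cinder.conf on the misconfigured storage node (nova5 in my case)
[DEFAULT]
my_ip = 10.0.22.7

root@nova5:~# service cinder-volume restart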

