
joey-opens's profile - activity

2020-02-28 12:39:37 -0500 received badge  Popular Question (source)
2020-02-28 12:39:37 -0500 received badge  Notable Question (source)
2020-02-12 10:15:55 -0500 asked a question  action requests is empty

I executed a task to attach a volume. While the task was hanging in the "attaching" status, I looked inside the nova-compute logs and found this error:

compute: 2020-02-11 07:17:33.481 993 40 ERROR oslo_messaging.rpc.server InstanceActionNotFound: Action for request_id req-28574b98-d69a-49fb-92e4-121c45f6d964 on instance aed4326a-421c-41fc-9bd5-b51778d0010b not found

Also, from the CLI the requests exist, but without the details they should have.

+---------------+------------------------------------------+---------+----------------------------+----------------------------+
| Action        | Request_ID                               | Message | Start_Time                 | Updated_At                 |
+---------------+------------------------------------------+---------+----------------------------+----------------------------+
| create        | req-63a3c908-3d6b-441a-b35c-659548b7b77c | -       | 2020-02-11T12:16:05.000000 | 2020-02-11T12:16:20.000000 |
| attach_volume | req-28574b98-d69a-49fb-92e4-121c45f6d964 | -       | 2020-02-11T12:17:32.000000 | 2020-02-11T12:17:32.000000 |
+---------------+------------------------------------------+---------+----------------------------+----------------------------+

+---------------+------------------------------------------------------------------+
| Property      | Value                                                            |
+---------------+------------------------------------------------------------------+
| action        | attach_volume                                                    |
| events        | []                                                               |
| instance_uuid | aed4326a-421c-41fc-9bd5-b51778d0010b                             |
| message       | -                                                                |
| project_id    | 7d91af2e9c8c4dc392f61437ad932ba2                                 |
| request_id    | req-28574b98-d69a-49fb-92e4-121c45f6d964                         |
| start_time    | 2020-02-11T12:17:32.000000                                       |
| updated_at    | 2020-02-11T12:17:32.000000                                       |
| user_id       | 487731196ed5d4bdfd5ccb3ea89ad47d13fff3a83b3f49f2a34dc0abf8b943b5 |
+---------------+------------------------------------------------------------------+

The events list is empty.
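For reference, the two outputs above look like what the standard CLI returns; a sketch of the commands, assuming the instance UUID and request ID from the log:

    # List the actions recorded for the instance (first table above)
    nova instance-action-list aed4326a-421c-41fc-9bd5-b51778d0010b

    # Show one action with its events (second table above)
    nova instance-action aed4326a-421c-41fc-9bd5-b51778d0010b req-28574b98-d69a-49fb-92e4-121c45f6d964

    # Equivalent with newer unified clients
    openstack server event list aed4326a-421c-41fc-9bd5-b51778d0010b
    openstack server event show aed4326a-421c-41fc-9bd5-b51778d0010b req-28574b98-d69a-49fb-92e4-121c45f6d964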

Can anyone help me understand what the root cause of this could be? Thanks.

2019-09-05 09:09:30 -0500 answered a question  Failed to upgrade rocky, all services get http 503

The upgrade to Rocky removes the admin endpoint in Keystone that pointed to port 35357: https://opendev.org/openstack/openstack-ansible-os_keystone/commit/0fbba8a708f6de8555c48993b09130cbe3cd5742

As for where it failed: the os_projects module has the variable endpoint_type: admin, which, from what I saw, it takes from the endpoint list in OpenStack (CLI: openstack endpoint list). So for me, setting the admin endpoint to port 5000 worked.
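In case it helps anyone, a minimal sketch of the check and the fix, assuming Keystone now serves all interfaces on port 5000 (the VIP address is a placeholder, and the endpoint ID comes from your own listing):

    # Find the identity endpoint whose interface is admin and still points at 35357
    openstack endpoint list --service identity

    # Point the admin interface at port 5000 instead
    openstack endpoint set --url http://<internal-vip>:5000/v3 <endpoint-id>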

2019-09-04 06:58:56 -0500 commented on a question  Failed to upgrade rocky, all services get http 503

But still, the playbook runs and fails on "os_keystone : Add service project" with no connection to http://10.53.251.36:35357. Of course this port is no longer used following these changes, but I am trying to understand whether this is something that can be changed in my configuration or a bug in OpenStack-Ansible.

2019-09-04 06:58:56 -0500 commented on a question  Failed to upgrade rocky, all services get http 503

I have a different env on the Queens version where I have two services for Keystone (public {:5000}, admin {:35357}). After Rocky they removed the service file from systemd, and from what I read, with the removal of the v2.0 API (in Rocky), Keystone no longer distinguishes between admin and public endpoints.

2019-09-04 02:59:16 -0500 commented on a question  Failed to upgrade rocky, all services get http 503

Hey, thanks for that. In the Keystone service it looks like they dropped the admin service that pointed to port 35357. I will check this; it may be related.

2019-09-02 15:25:35 -0500 asked a question  Failed to upgrade rocky, all services get http 503

Hey

I started an upgrade from Queens 17.1.10 to Rocky 18.1.9, using OpenStack-Ansible as the deployment tool and following this upgrade guide: https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/major-upgrades.html. All the steps went well without error until the last one, "setup-openstack", which failed on the first task configuring Keystone. From that point on, as far as I can see, all endpoints of all the services are unreachable and return HTTP 503. The step failed on:

TASK [os_keystone : Add service project] **********************************************************
FAILED - RETRYING: Add service project (5 retries left).
FAILED - RETRYING: Add service project (4 retries left).
FAILED - RETRYING: Add service project (3 retries left).
FAILED - RETRYING: Add service project (2 retries left).
FAILED - RETRYING: Add service project (1 retries left).
fatal: [node01_keystone_container-097ad5b5 -> localhost]: FAILED! => {"attempts": 5, "changed": false, "extra_data": null, "msg": "Unable to establish connection to http://<ipaddress>:35357/tenants: ('Connection aborted.', error(104, 'Connection reset by peer'))"}

I think something at the start of this task changed something in Keystone that causes this. Unfortunately, I did not manage to find it. Can anyone help me, or does anyone know of a change in Keystone (Rocky version) that could be the reason for this?
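For what it's worth, a quick diagnostic sketch to confirm whether only the old admin port is dead (substitute your own internal VIP for the address):

    # The removed admin port refuses connections...
    curl -i http://10.53.251.36:35357/v3

    # ...while the main keystone port should still answer
    curl -i http://10.53.251.36:5000/v3

    # Check whether the catalog still advertises 35357 anywhere
    openstack endpoint list | grep 35357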

Thanks!

2019-08-09 04:09:16 -0500 received badge  Notable Question (source)
2019-08-06 16:12:03 -0500 received badge  Popular Question (source)
2019-08-06 16:12:03 -0500 received badge  Notable Question (source)
2019-07-21 11:25:18 -0500 commented on a question  issue while detaching volume (FC type) from instance

It seems the disks exported to the host were not cleaned up properly. The multipath map of the disk still exists on the host even after the volume was deleted, so when a new one is created it gets the same WWID that was assigned to the old volume, and from the host's side it is still mapped, so every task to remove it failed.
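A sketch of how to spot the leftover map on the compute node (the WWID here is the one from my logs; substitute your own):

    # The old multipath map is still present even though the volume was deleted
    multipath -ll 20024f400554505a0

    # And the stale device links remain under /dev/disk/by-id
    ls -l /dev/disk/by-id/ | grep 20024f400554505a0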

2019-07-21 11:25:18 -0500 commented on a question  issue while detaching volume (FC type) from instance

Hi Bernd, I didn't find anything suspicious in the log: https://pastebin.com/Dhc0HtRS (attached here).

2019-07-18 09:20:25 -0500 received badge  Popular Question (source)
2019-07-15 08:46:27 -0500 asked a question  issue while detaching volume (FC type) from instance

Hey all, I recently added a second tier to my Cinder backend, an FC storage type. The setup went well, the service was up, and I was able to execute every task related to the driver's capabilities except detach; apparently it is also affected when the instance goes through a reboot. This is the main error shown:

"Stdout: u'Jul 15 02:46:50 | /dev/disk/by-id/dm-uuid-mpath-20024f400554505a0: map in use\nJul 15 02:46:50 | failed to remove multipath map /dev/disk/by-id/dm-uuid-mpath-20024f400554505a0\n' Stderr: u'': ProcessExecutionError: Unexpected error while running command. 2019-07-15 02:47:30.904 8111 INFO nova.compute.manager [req-28dce3d5-3412-412e-9bc7-a0be5e921bf9 3a67a05137e5a2e48c975dab3ed0d493c890ae27f5f343f567a59a487a465ce5 7d91af2e9c8c4dc392f61437ad932ba2 - d70a4991744248d9a6733356e668dfc6 d70a4991744248d9a6733356e668dfc6] [instance: 1649960c-dbff-4776-b6a5-c0719af2a130] Successfully reverted task state from reboot_started_hard on failure for instance. 2019-07-15 02:47:30.947 8111 ERROR oslo_messaging.rpc.server [req-28dce3d5-3412-412e-9bc7-a0be5e921bf9 3a67a05137e5a2e48c975dab3ed0d493c890ae27f5f343f567a59a487a465ce5 7d91af2e9c8c4dc392f61437ad932ba2 - d70a4991744248d9a6733356e668dfc6 d70a4991744248d9a6733356e668dfc6] Exception during message handling: ProcessExecutionError: Unexpected error while running command. Command: multipath -f /dev/disk/by-id/dm-uuid-mpath-20024f400554505a0 "

As a workaround, I run "echo 1 > /sys/bus/scsi/drivers/sd/$DEVICE/delete", and after that "multipath -f" seems to succeed.
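Spelled out, the workaround is roughly this (a sketch; the H:C:T:L address below is hypothetical and must be taken from your own multipath output):

    # The stale map's WWID from the error above
    WWID=20024f400554505a0

    # List the SCSI paths (host:channel:target:lun) still backing the map
    multipath -ll $WWID

    # Delete each backing path so the map is no longer "in use"
    DEVICE=2:0:0:1
    echo 1 > /sys/bus/scsi/drivers/sd/$DEVICE/delete

    # With the paths gone, flushing the map succeeds
    multipath -f /dev/disk/by-id/dm-uuid-mpath-$WWID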

Is anyone else running into this issue, and can anyone help me here? Thanks.

I am running Queens 17.1.9, deployed by openstack-ansible.

2019-05-28 03:00:46 -0500 received badge  Famous Question (source)
2019-05-27 17:23:14 -0500 asked a question  cinder cannot attach FC volume

Hey,

I have an issue while trying to integrate our 3PAR 20800 into our OpenStack environment. I edited the configuration file (cinder.conf) so the service would start, and it works fine; inside the Cinder log I can see output showing the driver loaded successfully:

2019-05-26 05:38:02.049 124315 INFO cinder.volume.manager [req-a0ba97f0-f557-42e9-bdc7-561a01cdaba6 - - - - -] Initializing RPC dependent components of volume driver HPE3PARFCDriver (4.0.5)
2019-05-26 05:38:02.506 124315 INFO paramiko.transport [-] Connected (version 2.0, client OpenSSH_6.6.1p1)
2019-05-26 05:38:02.747 124315 INFO paramiko.transport [-] Authentication (password) successful!
2019-05-26 05:38:04.193 124315 INFO cinder.volume.manager [req-a0ba97f0-f557-42e9-bdc7-561a01cdaba6 - - - - -] Driver post RPC initialization completed successfully.
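For context, a typical backend section for this driver looks roughly like this (a sketch with hypothetical addresses and credentials; option names as in the HPE 3PAR FC driver documentation):

    [3par-fc]
    volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
    volume_backend_name = 3par-fc
    # WSAPI endpoint on the array
    hpe3par_api_url = https://3par.example.com:8080/api/v1
    hpe3par_username = 3paradm
    hpe3par_password = <password>
    # SSH access used by the driver for some operations
    san_ip = 3par.example.com
    san_login = 3paradm
    san_password = <password>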

Also, I am able to create an empty volume via Cinder, but I have an issue once I try to attach the volume to an instance. On the SAN side all zones are configured, and I also verified that I can create a host and export a volume manually; the OS on the host identifies the disk. But when I try to attach the volume from the OpenStack side, I see these errors in nova-compute.log:

"2019-05-26 03:53:55.447 102619 INFO os_brick.initiator.connectors.fibre_channel [-] Fibre Channel volume device not yet found. Will rescan & retry. Try number: 0. 2019-05-26 03:53:57.443 102619 INFO os_brick.initiator.connectors.fibre_channel [-] Fibre Channel volume device not yet found. Will rescan & retry. Try number: 1. 2019-05-26 03:53:59.440 102619 INFO os_brick.initiator.connectors.fibre_channel [-] Fibre Channel volume device not yet found. Will rescan & retry. Try number: 2. 2019-05-26 03:54:01.445 102619 INFO os_brick.initiator.connectors.fibre_channel [-] Fibre Channel volume device not yet found. Will rescan & retry. Try number: 3. 2019-05-26 03:54:03.450 102619 INFO os_brick.initiator.connectors.fibre_channel [-] Fibre Channel volume device not yet found. Will rescan & retry. Try number: 4. 2019-05-26 03:54:05.446 102619 ERROR os_brick.initiator.connectors.fibre_channel [-] Fibre Channel volume device not found. 2019-05-26 03:54:05.446 102619 ERROR oslo.service.loopingcall [-] Fixed interval looping call 'os_brick.initiator.connectors.fibre_channel._wait_for_device_discovery' failed: NoFibreChannelVolumeDeviceFound: Unable to find a Fibre Channel volume device. 2019-05-26 03:54:05.448 102619 ERROR nova.virt.block_device [req-002957eb-4d6d-4ffb-9388-1d414ef702f9 69ba2fbe42c34935a3ad954a21ab5b9f 8ee4d8ed00b542c39906d531db0940af - default default] [instance: c6b7378f-e978-4e80-aadd-6bd49a472c09] Driver failed to attach volume 115439b5-0977-49ad-9ac8-0d40dda3bf62 at /dev/sdb: NoFibreChannelVolumeDeviceFound: Unable to find a Fibre Channel volume device."

The thing is that while the volume is in the attaching state, the 3PAR volume is exported to the compute node (I also saw it when running "lsblk"), but after the error above it goes back to the available state and the export is gone. Has anyone encountered this issue, or can anyone assist me? I am running OpenStack Queens (17.1.9) on Ubuntu 16, deployed by OpenStack-Ansible. The cinder-volume service is on the compute node.
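Things worth checking on the compute node when os-brick cannot find the device (a diagnostic sketch):

    # The HBA WWPNs that the SAN zoning must include
    cat /sys/class/fc_host/host*/port_name

    # Trigger a manual rescan of every SCSI host while the volume is attaching
    for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done

    # Then check whether the LUN appeared
    lsblk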

2019-05-08 02:18:56 -0500 received badge  Notable Question (source)
2019-05-07 10:08:22 -0500 received badge  Popular Question (source)
2019-05-07 04:39:50 -0500 received badge  Supporter
2019-05-05 01:10:22 -0500 commented on a question  error in cinder-volume

The cinder-volume service is still up when this happens. Apart from the Cinder logs, which logs should I look into that could bring me closer to finding the issue?

2019-05-02 09:08:43 -0500 commented on a question  how to config cinder as HA a/p

Hey, sorry for the misunderstanding. The question is whether you are familiar with the host parameter inside the cinder.conf file, and if so, whether it is a good way to work with multiple containers that all have the same host value.

2019-05-02 09:08:43 -0500 asked a question  error in cinder-volume

Hey, after upgrading my OpenStack environment to Queens (17.1.9) I started to see errors in cinder-volume.log. The logs are here: https://pastebin.com/2Nwqmi2c. Does anyone know why this occurs and what is affected?

2019-03-27 08:53:45 -0500 received badge  Popular Question (source)
2019-03-27 08:53:31 -0500 commented on a question  how to config cinder as HA a/p

Anyway, changing the Cinder host parameter doesn't really provide HA for the service; it just keeps the service available in case a container goes down. So is this state acceptable (is there any possible impact?), or should I separate the service on each container?

2019-03-27 08:53:30 -0500 commented on a question  how to config cinder as HA a/p

Thanks for your responses, Bernd. I did try changing the "host" parameter in cinder.conf, and it seems the 3 containers are pointing to the same service and working correctly (each one can take and handle requests). But my issue remains: sometimes Cinder goes down without the actual service on the host being down.

2019-03-27 08:52:44 -0500 answered a question  how to config cinder as HA a/p

Thanks for your responses, Bernd. I did try changing the "host" parameter in cinder.conf, and it seems the 3 containers are pointing to one service and working correctly (each one can take and handle requests). But my issue remains: sometimes Cinder goes down without the actual service on the host being down (another issue, not for this question). Anyway, changing the Cinder host parameter doesn't really provide HA for the service; it just keeps the service available in case a container goes down. So is this state acceptable (is there any possible impact?), or should I separate the service on each container? See the sketch below for what the change amounts to.
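For reference, the setup described above amounts to one line in each container's cinder.conf (the value is an arbitrary shared name; "cinder-cluster-1" is just an example):

    [DEFAULT]
    # Identical on all three cinder-volume containers, so they register as one
    # host and any of them can pick up requests for the backend's volumes
    host = cinder-cluster-1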

2019-03-26 16:30:18 -0500 asked a question  how to config cinder as HA a/p

I have 3 containers that serve as cinder-volume hosts, and I am looking for a solution to avoid the situation where one of them goes down. Is there a tool that anyone uses or recommends for this?