Questions asked

tonyp12's profile - activity

2020-09-02 05:24:40 -0500 received badge  Famous Question (source)
2020-06-30 15:58:04 -0500 received badge  Famous Question (source)
2020-06-30 15:58:04 -0500 received badge  Notable Question (source)
2020-06-30 15:58:04 -0500 received badge  Popular Question (source)
2020-04-15 03:39:22 -0500 received badge  Notable Question (source)
2020-04-15 03:39:22 -0500 received badge  Famous Question (source)
2020-04-15 03:39:22 -0500 received badge  Popular Question (source)
2019-11-21 11:15:45 -0500 received badge  Notable Question (source)
2019-11-21 11:15:45 -0500 received badge  Famous Question (source)
2019-10-21 16:19:43 -0500 received badge  Good Question (source)
2019-10-09 10:03:23 -0500 received badge  Notable Question (source)
2019-09-03 20:32:25 -0500 commented question unable to attach volume to instance

Yes, I did try anyway... The target gets set up and torn down within about 1 or 2 seconds, so of course the login failed.

2019-09-02 20:58:14 -0500 commented question unable to attach volume to instance

Hi, thanks for your help. I already mentioned iSCSI - the login never happens, although the target gets created. The iSCSI connection is to the same host; this is a single combined host/controller/compute.

2019-08-29 22:23:52 -0500 received badge  Popular Question (source)
2019-08-29 02:46:07 -0500 commented question unable to attach volume to instance

Sure!

[root@sl-len-01-p ~]# vgs | grep cinder
          cinder-volumes   1  94   0 wz--n- <1.03t 52.48g

Could this be the problem? I was checking df -h:

[root@sl-len-01-p ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  1.3T 1019G  265G  80%
2019-08-28 20:54:30 -0500 commented question unable to attach volume to instance

I don't understand it and cannot post the full log output here. But it looks like it sets up the iSCSI connection, runs some commands, then tears it down. It does not mention trying to log in, or any login failures, for the iSCSI session.

2019-08-28 20:53:37 -0500 commented question unable to attach volume to instance

The same messages are there in the volume log, saying the target is created and then deleted. In between there are logs like "CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix cinder-volumes" returned: 0"
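For reference, the vgs call quoted above emits colon-separated fields with no unit suffix, in the order name:size:free:lv_count:uuid (sizes in GiB), so the free space cinder's scheduler sees can be pulled out directly. A minimal sketch; the sample line is illustrative, not real output from this host:

```shell
# Field order matches the -o list in the rootwrap command above.
line="cinder-volumes:1054.99:52.48:94:abcDEF-0123"
vg_name=$(echo "$line" | cut -d: -f1)
free_gb=$(echo "$line" | cut -d: -f3)
echo "VG $vg_name has ${free_gb}g free"
```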

2019-08-28 02:38:32 -0500 asked a question how to ask a question for tripleo? Stuck with upgrade from pike

I am looking to update the OpenStack Pike deployment I have. I installed it by following the guide on the TripleO website http://tripleo.org/

My first task is to update the undercloud instance. However, the Pike repo is no longer present, so yum update fails:

https://trunk.rdoproject.org/centos7-...: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror.

The Pike repo I used was: https://trunk.rdoproject.org/centos7/current/python2-tripleo-repos-0.0.1-0.20180207160722.93d01b8.el7.centos.noarch.rpm

The above no longer exists. I have checked https://trunk.rdoproject.org/centos7-pike/current-tripleo/ and https://trunk.rdoproject.org/centos7-pike/current/ but cannot find the rpm python2-tripleo*.rpm

For me to upgrade, my first task is to update to the latest packages within the current version (Pike). I am stuck because the repo is no longer there.

Any ideas how to move forward?

2019-08-27 09:22:03 -0500 commented question unable to attach volume to instance

there is volume.log in /var/log/cinder; this shows:

Creating iscsi_target for volume: volume-73f060fc-e347
attachment_update completed successfully.
Terminate volume connection completed successfully
Removing iscsi_target: 73f060fc-e347

All for the same volume in question. How can I increase verbosity?
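Verbosity can be raised by enabling debug logging in cinder.conf and restarting the cinder services. A sketch for a packstack-style CentOS host; crudini being installed and these systemd unit names are assumptions:

```shell
# Turn on debug logging for cinder (writes debug = True under [DEFAULT]).
crudini --set /etc/cinder/cinder.conf DEFAULT debug True
# Restart the services so the new log level takes effect.
systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume
# Then watch the target lifecycle in detail:
tail -f /var/log/cinder/volume.log
```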

2019-08-27 09:18:52 -0500 commented question unable to attach volume to instance

volume service list shows all enabled/up: cinder-scheduler, cinder-backup, cinder-volume

2019-08-27 04:44:36 -0500 asked a question unable to attach volume to instance

I have a packstack all-in-one Queens host. It is a single combined controller/compute host and runs iSCSI connections to its own single IP address. For the most part it is working well. However, free memory has been depleted a few times due to load, and occasional issues have begun. One of them is the following:

At the moment I have one volume that cannot be attached to an instance. It was previously attached and running, but now it cannot be reattached.

The command openstack server add volume 0f63dea0-d6fa-433d-be7f-dbaaef5423a7 73f060fc-e347-4e08-bb63-b399d261b6c8 does not produce any output, so it appears successful. But a volume list shows that volume is not attached.

So far I have managed to discover the cinder api log with this:

2019-08-27 14:43:39.356 185098 INFO cinder.api.openstack.wsgi [req-4740850b-5da4-435f-9b6e-1a45c6fd6f6c 729c0f68314683dbe285c77db9a1b25af9c16637c32f72e8c11a006a2d5d5310 a3b5c4b6770c4d4dabbde06c55c5d0d1 - 04675d204cd0428394b1db1d89ad0f1f 04675d204cd0428394b1db1d89ad0f1f] GET http://192.168.33.235:8776/v3/a3b5c4b6770c4d4dabbde06c55c5d0d1/volumes/73f060fc-e347-4e08-bb63-b399d261b6c8
2019-08-27 14:43:39.356 185098 DEBUG cinder.api.openstack.wsgi [req-4740850b-5da4-435f-9b6e-1a45c6fd6f6c 729c0f68314683dbe285c77db9a1b25af9c16637c32f72e8c11a006a2d5d5310 a3b5c4b6770c4d4dabbde06c55c5d0d1 - 04675d204cd0428394b1db1d89ad0f1f 04675d204cd0428394b1db1d89ad0f1f] Empty body provided in request get_body /usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:718

I think the key part is: Empty body provided in request get_body

The above log is always output to the cinder api log when I am attempting: (openstack) server add volume 0f63dea0-d6fa-433d-be7f-dbaaef5423a7 73f060fc-e347-4e08-bb63-b399d261b6c8

I googled for the openstack volume attach process and found this page: https://docs.openstack.org/cinder/pik...

From what I understand, OpenStack sets up the iSCSI target, saves the CHAP credentials somewhere, and then uses them to log in to the target. I tried to attach the volume again and repeatedly ran:

[root@sl-len-01-p cinder]# cinder-rtstool get-targets | wc -l
46

It then counted up to 47, and I confirmed with:

cinder-rtstool get-targets | grep 73f060fc | wc -l
1

So it looks like the target is being created. The API log shows this too, although shortly afterwards the log also shows the target being deleted, and the get-targets count for that volume goes back to 0.

I suspect that iSCSI authentication is failing, but I have not seen that specific message in any cinder log, and I have not located an iSCSI log either. I searched for how to enable debug logging and found a how-to post for CentOS (we use CentOS) on starting iscsid in debug mode, but I think that would affect running instances, so I have avoided it.
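The create-then-remove pattern in the volume log can be counted mechanically to confirm the lifecycle. A small sketch: the heredoc stands in for /var/log/cinder/volume.log and reuses the message text quoted in this question:

```shell
# Build a stand-in log file with the messages seen for this volume.
log=$(mktemp)
cat > "$log" <<'EOF'
Creating iscsi_target for volume: volume-73f060fc-e347
attachment_update completed successfully.
Terminate volume connection completed successfully
Removing iscsi_target: 73f060fc-e347
EOF
vol=73f060fc
# One create immediately followed by one remove = the attach never held.
created=$(grep -c "Creating iscsi_target for volume: volume-$vol" "$log")
removed=$(grep -c "Removing iscsi_target: $vol" "$log")
echo "created=$created removed=$removed"
rm -f "$log"
```

Run against the real log, equal nonzero counts for a volume that should be attached point at the connection being torn down right after setup.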

In cinder.conf we have lock_path=/var/lib/cinder/tmp, but in that directory there is always a file of zero bytes:

[root@sl-len-01-p cinder]# ls -lah /var/lib/cinder/tmp
total 252K
drwxr-xr-x. 2 cinder cinder   27 Jul  4  2018 .
drwxr-xr-x. 5 cinder cinder 208K Aug 27 15:08 ..
-rw-r--r--. 1 cinder cinder    0 Jul  4  2018 cinder-lioadm

And the iscsi lvm config from cinder.conf:

enabled_backends=lvm

[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_ip_address=192.168.33.235
iscsi_helper=lioadm
volume_group=cinder-volumes
volumes_dir=/var/lib/cinder/volumes

Could anyone help me move forward in the right direction? I am a bit stuck at this point.

2019-08-26 04:06:08 -0500 asked a question power state = no state and cannot attach volume to instance

Unsure what occurred to lead to this condition, but at the moment I have the following:

- an instance in shutoff state but with "power state" as NOSTATE
- no volume attached to the instance

The instance had been running, and at some point it was noticed that it was no longer running.

I attempted to change the power state by running :

nova reset-state --active 0f63dea0-d6fa-433d-be7f-dbaaef5423a7

But this only changed the status from "shutoff" to "active", so it appears I've created another problem to solve.

I have searched and cannot locate how to reset the power state. What options do we have? Is the only option to manually edit the DB? I am trying to avoid that if at all possible.

This is the first issue. Once this issue is resolved I think I can re-attach the volume ( fingers crossed :) ).

Looking for guidance on the power state here. Any advice appreciated
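A commonly suggested sequence that can re-sync the power state without DB edits is to reset the state and then cycle the instance through a clean stop/start so nova re-queries the hypervisor. This is a sketch only (instance ID taken from the question, outcome not guaranteed; the stop may error if the hypervisor no longer knows the domain):

```shell
# Force nova's view back to active, then cycle the instance.
nova reset-state --active 0f63dea0-d6fa-433d-be7f-dbaaef5423a7
nova stop 0f63dea0-d6fa-433d-be7f-dbaaef5423a7
nova start 0f63dea0-d6fa-433d-be7f-dbaaef5423a7
# Check whether power_state has been re-synced.
nova show 0f63dea0-d6fa-433d-be7f-dbaaef5423a7 | grep power_state
```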

2019-05-13 21:56:02 -0500 commented question Unable to update cinder volume type property error 400

Any tips for this?

2019-05-08 23:15:35 -0500 commented question no process found ( nova-compute / neutron)

I had good results following this https://www.rdoproject.org/install/pa...

Give it a go. The TechRepublic guide might be out of date, or there might be a bug in the git repo you pulled from.

2019-05-08 03:18:24 -0500 commented question Problem to install my overcloud with trippleO - No valid host was d

Like ARatcliff, I had this same problem when I went about installing OpenStack. Can you access the director (undercloud) GUI?

From memory, you need to scan your overcloud hosts and then afterwards, tag them so that openstack knows how to treat them.

2019-05-08 03:15:00 -0500 received badge  Editor (source)
2019-05-08 03:14:48 -0500 asked a question Unable to update cinder volume type property error 400

I have a volume type configured and it's working well for the most part. I would like to add an extended property to slightly change the behaviour. The volume type is below for reference, with existing properties:

(openstack) volume type show Nimble-JOO
+--------------------+-------------------------------------------------------------------------------------------------------------+
| Field              | Value                                                                                                       |
+--------------------+-------------------------------------------------------------------------------------------------------------+
| access_project_ids | None                                                                                                        |
| description        | None                                                                                                        |
| id                 | 8f403b9a-40f7-4884-a7e7-b32d6b6a0127                                                                        |
| is_public          | True                                                                                                        |
| name               | Nimble-JOO                                                                                                  |
| properties         | nimble:multi-initiator='true', nimble:perfpol-name='VMware ESX 5', volume_backend_name='nimble-cinder-prod' |
| qos_specs_id       | None                                                                                                        |
+--------------------+-------------------------------------------------------------------------------------------------------------+

So what I am doing here is setting the property like this:

(openstack) volume type set --property nimble:folder=Openstack 8f403b9a-40f7-4884-a7e7-b32d6b6a0127
Failed to set volume type property: Volume Type is currently in use. (HTTP 400) (Request-ID: req-1affdc29-f034-435c-b8bd-63e1dbbfbeee)
Command Failed: One or more of the operations failed
(openstack)

The error says the item is in use. In what context is it in use? I do not follow. I have not modified this at all since it was initially set up around a year ago, and back then I don't recall having this error 400 problem.

I've tried an older openstack CLI as well as Horizon, with the same result:

# cinder type-key Nimble-JOO set 'nimble:folder'="Openstack"
ERROR: Volume Type is currently in use. (HTTP 400) (Request-ID: req-a24afa54-ad81-44a4-a5b3-ec555654163c)

How do I go about setting the extended property on this volume type? If the type really is in use, how can I make it unused so I can set the property?
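To see what "in use" likely refers to, one check (assuming admin credentials; flags per python-openstackclient) is whether any existing volumes were created from this type, since the type column appears in the long listing:

```shell
# List volumes across all projects and filter on the volume type name.
openstack volume list --all-projects --long | grep Nimble-JOO
```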

2019-04-02 23:32:45 -0500 received badge  Famous Question (source)
2019-02-04 03:35:45 -0500 received badge  Notable Question (source)
2019-02-04 03:35:45 -0500 received badge  Famous Question (source)
2018-11-05 05:42:21 -0500 received badge  Famous Question (source)
2018-09-05 02:04:32 -0500 received badge  Notable Question (source)
2018-09-05 02:04:32 -0500 received badge  Popular Question (source)
2018-08-13 13:02:55 -0500 received badge  Notable Question (source)
2018-07-19 08:58:09 -0500 asked a question What are the steps to use an external dhcp server with vlan or flat provider?

I have a Pike OpenStack deployment with one controller/network node and two compute nodes.

The controller / network node is a VM. The compute nodes are physical.

I have configured a VLAN network and launched some instances, but most of the time they are not receiving DHCP. By the way, the GRE networks and floating IPs that go through the controller/network node are working fine, but since that node is a VM there's a limitation where I'm unable to use VLANs through it.

I decided to configure an external DHCP server for the VLAN network. What I find is that the external DHCP server offers an address to the instance on OpenStack, but OpenStack shows the instance as having a different IP. This in turn breaks IP connectivity to the instance, because the iptables rules do not match the instance's actual IP. I have tried disabling DHCP in the OpenStack network config, but I find that OpenStack still assigns an IP there.

How can I move completely to external DHCP for the VLAN network? It seems to conflict with OpenStack somewhere, but I am unsure exactly where.
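One approach is to recreate the VLAN subnet with OpenStack's own DHCP agent disabled, so only the external server answers. A sketch with placeholder names and CIDR:

```shell
# Create the subnet with neutron's DHCP turned off (--no-dhcp).
openstack subnet create vlan-subnet \
  --network vlan-net \
  --subnet-range 192.168.50.0/24 \
  --no-dhcp
```

Note that neutron still allocates a fixed IP to each port even without DHCP, which matches the mismatch described above: the external server would need to hand out the address neutron assigned to the port, or port security on the port has to be relaxed so other addresses are allowed through.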

2018-07-18 22:29:09 -0500 received badge  Popular Question (source)
2018-07-18 06:30:36 -0500 commented question No valid host was found during instance migrate

I spent 6 hours looking for ghost logs today lol.

openstack compute service list shows:
8 | nova-compute | overcloud-novacompute-1.company.com | nova | enabled | up

It's back and running again.

2018-07-18 06:28:21 -0500 commented question No valid host was found during instance migrate

...is deprecated in Pike, but it works. Another thing that helped me was to launch a VM with a flavour that has all storage values set to '0' (swap, disk, etc.). This launched successfully.

After increasing nova.conf host disk overcommit, live migration works.
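For reference, the overcommit knob alluded to here is presumably nova's disk_allocation_ratio. A sketch for a node where nova.conf is edited directly (crudini and the unit name are assumptions; containerized TripleO deployments manage this differently):

```shell
# Allow the scheduler to overcommit disk at 2x physical capacity.
crudini --set /etc/nova/nova.conf DEFAULT disk_allocation_ratio 2.0
systemctl restart openstack-nova-scheduler
```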

2018-07-18 06:26:44 -0500 commented question No valid host was found during instance migrate

Sorry, I forgot to add that yes it does.

I have since managed a workaround. (PS: this website stops working sometimes; seems like a DB connection issue.) The workaround was to set disk overcommit higher than 1.0. My theory about disk seems correct, BUT the value I changed has a warning that it...

2018-07-18 06:26:44 -0500 received badge  Commentator
2018-07-18 01:37:12 -0500 commented question No valid host was found during instance migrate

I think this issue might have something to do with the fact that the instance flavours have defined swap values, which consume disk space on the host for swap only.

  1. I haven't confirmed this
  2. If this is true, and there is plenty of disk space free on the hosts, how can I work around it?
2018-07-18 01:32:52 -0500 asked a question No valid host was found during instance migrate

I am not sure what I am missing here. I am trying to migrate instances from one host to another but am now stuck at "No valid host was found", after migrating some instances successfully. It seems as though nova thinks there are no resources, yet there are no logs at all pertaining to the destination host.

I have a 3-node setup: one controller and two compute nodes, which I have called compute-0 and compute-1. The compute nodes have shared iSCSI storage from cinder. Before the live-migration actions, instances were running on both hosts without issue.

Version is Pike.

I was initially trying to migrate instances using the "openstack server migrate" command, but I was getting errors about there not being enough free disk on the destination. This is due to a bug with that specific command; using the "nova live-migration" command instead, I was able to live migrate immediately. But now I have run into another issue and cannot work through it.

I am checking the following logs: nova-scheduler.log, nova-api.log, nova-conductor.log, nova-placement-api.log, nova-manage.log, nova-rowsflush.log

I am trying to migrate an instance from compute-0 to compute-1. I ran a tail on all of those logs and received the output below. I cannot see any detail pertaining to compute-1, yet the logs state no available hosts. The compute-0 host has the compute service disabled to prevent new VMs from being scheduled there.

I am unable to spawn new instances also.

I am trying to do this with the command: nova live-migration 8b51e704-b2fb-4674-8d82-debe0c9ea9d2 --block-migrate

Log output:

2018-07-18 06:06:37.289 129727 DEBUG nova.api.openstack.wsgi [req-1cc7ff66-7f96-41e1-ad50-ee1b2040e0e2 8581214b931a55344f5d9b39916ac246b0ef3bc441914e5711f846c34d50c731 a1c09ae7084b4fbe9de5d7a17112b4c0 - c1fbcd738b6b4b40a82d82e5e010aa4d c1fbcd738b6b4b40a82d82e5e010aa4d] Calling method '<bound method Versions.index of <nova.api.openstack.compute.versions.Versions object at 0x7f74aaf45e10>>' _process_stack /usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:612
2018-07-18 06:06:37.291 129727 INFO nova.api.openstack.requestlog [req-1cc7ff66-7f96-41e1-ad50-ee1b2040e0e2 8581214b931a55344f5d9b39916ac246b0ef3bc441914e5711f846c34d50c731 a1c09ae7084b4fbe9de5d7a17112b4c0 - c1fbcd738b6b4b40a82d82e5e010aa4d c1fbcd738b6b4b40a82d82e5e010aa4d] 192.168.18.102 "OPTIONS /" status: 200 len: 439 microversion: - time: 0.002323

==> nova-placement-api.log <==
2018-07-18 06:06:37.481 129735 DEBUG nova.api.openstack.placement.requestlog [req-a08d6981-96d0-4f79-aaaf-630e03995fcb 5418126b8be04f8cb00ead0e9714df3b b7fd1b7ff742439f96c8b46d13f3f963 - default default] Starting request: 192.168.18.102 "GET /placement/resource_providers/0d2f7450-c02c-4a12-a29a-54cb34ed07d0/aggregates" __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/placement/requestlog.py:38
2018-07-18 06:06:37.503 129735 INFO nova.api.openstack.placement.requestlog [req-a08d6981-96d0-4f79-aaaf-630e03995fcb 5418126b8be04f8cb00ead0e9714df3b b7fd1b7ff742439f96c8b46d13f3f963 - default default] 192.168.18.102 "GET /placement/resource_providers/0d2f7450-c02c-4a12-a29a-54cb34ed07d0/aggregates" status: 200 len: 18 microversion: 1.1
2018-07-18 06:06:37.515 129735 DEBUG nova.api.openstack.placement.requestlog [req-eeaa41ec-44d2-4cdd-b25b-d2af28279861 5418126b8be04f8cb00ead0e9714df3b b7fd1b7ff742439f96c8b46d13f3f963 - default default] Starting request: 192.168.18.102 "GET /placement/resource_providers/0d2f7450-c02c-4a12-a29a-54cb34ed07d0/inventories" __call__ /usr/lib/python2.7/site-packages/nova/api/openstack/placement/requestlog.py:38
2018-07-18 06:06:37.538 129735 INFO nova.api.openstack.placement.requestlog [req-eeaa41ec-44d2-4cdd-b25b-d2af28279861 5418126b8be04f8cb00ead0e9714df3b b7fd1b7ff742439f96c8b46d13f3f963 - default default] 192.168.18.102 "GET /placement/resource_providers/0d2f7450-c02c-4a12-a29a-54cb34ed07d0/inventories" status: 200 len: 406 microversion: 1.0

==> nova-api.log <==
2018-07-18 06:06:37.951 4617 INFO nova.metadata.wsgi.server [-] 192.168 ...
2018-07-09 08:34:34 -0500 received badge  Popular Question (source)
2018-07-06 06:22:34 -0500 commented answer How to boot windows instance from ISO?

The virt-install command is not found on CentOS 7, so this doesn't work; I guess some extra packages need to be installed. Looking at the guide, it is quite similar to what I had achieved, except I only created the instance using the openstack CLI and the instance loads using the ISOs already uploaded.

2018-07-06 06:18:57 -0500 answered a question How to boot windows instance from ISO?

I solved this. I managed to boot an instance with two CD-ROM devices and one empty volume - problem solved.
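For anyone landing here, a sketch of what such a boot might look like with the nova CLI of that era: two CD-ROM devices (the Windows ISO plus the virtio driver ISO, both pre-uploaded as images) and one blank root volume. All IDs, sizes, the flavor, and the network are placeholders; verify the --block-device key names against `nova help boot` on your release:

```shell
# Boot order: Windows installer ISO first, driver ISO second,
# blank volume last (becomes the install target once drivers load).
nova boot windows-install \
  --flavor m1.medium \
  --nic net-id=NETWORK_ID \
  --block-device source=image,id=WINDOWS_ISO_ID,dest=volume,size=6,type=cdrom,bootindex=0 \
  --block-device source=image,id=VIRTIO_ISO_ID,dest=volume,size=1,type=cdrom,bootindex=1 \
  --block-device source=blank,dest=volume,size=50,bootindex=2
```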

2018-07-06 03:21:29 -0500 asked a question How to boot windows instance from ISO?

My goal is to run a Windows Server instance in OpenStack, and I'm confused as to how to achieve this.

I have a CentOS 7 server running Queens all-in-one. I have copied the Windows Server ISO to this server, as well as a separate ISO containing the virtio drivers.

My problem is that if I do manage to boot an instance from the Windows ISO, there's no way to perform an installation to the empty volume because I need to install drivers (from another ISO) to allow the instance to see the storage volume.

I can't see any way of attaching two CD-ROM devices to the instance at boot time. I've tried making a volume out of the Windows ISO and booting from that volume with the drivers ISO attached, but the volume does not boot the instance and it is stuck at the BIOS screen.

Any suggestions on this?

I've even tried making a new ISO from the two separate ISOs using CentOS CLI tools, but I couldn't find a good source of info on that, so I ended up with a non-bootable ISO.

Can we hot-plug CD-ROM devices? After booting the Windows installer, could I detach and attach the drivers ISO? Then, after the driver install, detach it and reattach the Windows ISO once more?