
Am Ab's profile - activity

2019-10-04 11:52:14 -0500 received badge  Famous Question (source)
2019-09-11 03:33:46 -0500 received badge  Notable Question (source)
2019-09-11 03:33:46 -0500 received badge  Famous Question (source)
2019-04-29 14:53:26 -0500 received badge  Notable Question (source)
2019-04-29 14:53:26 -0500 received badge  Famous Question (source)
2019-02-04 07:09:33 -0500 commented question Can't migrate instances with "Configdrive" enabled to another compute node

The same result happens: openstack server migrate 6a79b258-f3be-4981-995e-6ec27704729d --wait changed "OS-EXT-STS:vm_state" from "stopped" to "active", while the power state was/is Shutdown.

2019-02-04 04:35:19 -0500 commented question Can't migrate instances with "Configdrive" enabled to another compute node

I'm not sure about the cold migration procedure. I stopped the instance and then selected "Migrate" in the admin dashboard; it changed the VM state to "Active" while the power state stayed "Shut off", and nothing migrated. I also can't start it because it is in the "Active" state.
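For reference, a cold migration driven from the CLI instead of the dashboard would look roughly like this. This is a sketch based on the standard clients of that era, not the asker's confirmed procedure; the server ID is the one quoted elsewhere in this thread, and the confirm syntax varies between client versions:

```shell
# Cold-migrate a stopped instance; the scheduler picks the target host
openstack server migrate 6a79b258-f3be-4981-995e-6ec27704729d

# The instance lands in VERIFY_RESIZE on the new host; confirm to finish
# (older clients use: openstack server resize --confirm <id>)
openstack server resize confirm 6a79b258-f3be-4981-995e-6ec27704729d

# Check which host it ended up on
openstack server show 6a79b258-f3be-4981-995e-6ec27704729d -c OS-EXT-SRV-ATTR:host
```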

2019-02-04 03:10:29 -0500 commented question Can't migrate instances with "Configdrive" enabled to another compute node

On the other hand, is it possible to migrate them (not live)? I mean when the instances are stopped or when the compute host has failed.

2019-02-04 00:45:31 -0500 asked a question Can't migrate instances with "Configdrive" enabled to another compute node

Hello,

I use SAN storage as the Cinder backend and can live-migrate instances WITHOUT a "Configdrive" to another compute node, but when the config drive is enabled I get the following error:

compute01 is not on shared storage: Shared storage live-migration requires either shared storage or boot-from-volume with no local disks. (HTTP 400) (Request-ID: req-affd56a4-0c80-4b4d-b8b9-00f45a2cf819)

If I add the --block-migration flag to the live migration command there is no error, but the instance won't migrate.
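When a --block-migration attempt stalls silently like this, the migration records and the compute log usually name the real failure. A rough diagnostic sketch, assuming the nova and openstack clients of that release and the default RDO log path; the instance ID is the one from this thread:

```shell
# Ask Nova what it thinks happened to the migration
nova migration-list --instance-uuid 6a79b258-f3be-4981-995e-6ec27704729d

# Newer openstackclient equivalent: instance action/event history
openstack server event list 6a79b258-f3be-4981-995e-6ec27704729d

# The nova-compute log on the source node usually records why it aborted
grep -i migrat /var/log/nova/nova-compute.log | tail
```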

2019-01-28 13:18:50 -0500 received badge  Popular Question (source)
2019-01-28 10:07:54 -0500 commented answer live migration - CPU doesn't have compatibility

Thanks. That works, but what about the VMs with a config drive? Instances with the config drive enabled fail with the following error: compute01 is not on shared storage: Shared storage live-migration requires either shared storage or boot-from-volume with no local disks. (HTTP 400) (Request-ID: req

2019-01-28 01:34:05 -0500 asked a question live migration - CPU doesn't have compatibility

Hello,

The two servers' CPU models are exactly identical, but the spec_ctrl and intel_stibp flags are enabled on only one of them. When I try to live-migrate an instance I get the following error:

# openstack server migrate 68d23265-93f1-486e-a4a0-b1b0ac0f35ba --live compute2
Unacceptable CPU info: CPU doesn't have compatibility.

Refer to http://libvirt.org/html/libvirt-libvi... (HTTP 400) (Request-ID: req-c45afe07-b557-4bd5-abff-0ba5a56bb7c0)
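A common way to diagnose and work around this kind of mismatch (a sketch, not the asker's confirmed fix): compare the flag sets on both hosts, and if they differ only by mitigation flags such as spec_ctrl, pin Nova's libvirt CPU mode to an explicit model that both hosts support instead of passing the host CPU through:

```shell
# On each host: dump the CPU flags, one per line, for an easy diff
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/flags.$(hostname)

# List CPU models libvirt knows about, to pick a common baseline
virsh cpu-models x86_64

# nova.conf on both compute nodes (example values, adjust to your hosts):
# [libvirt]
# cpu_mode = custom
# cpu_model = <a model supported by both hosts>

systemctl restart openstack-nova-compute
```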

2019-01-24 12:02:24 -0500 received badge  Famous Question (source)
2019-01-21 14:05:28 -0500 received badge  Famous Question (source)
2019-01-20 12:39:01 -0500 received badge  Notable Question (source)
2019-01-13 12:25:37 -0500 received badge  Popular Question (source)
2019-01-11 12:28:18 -0500 received badge  Notable Question (source)
2019-01-10 09:16:12 -0500 received badge  Notable Question (source)
2019-01-10 03:18:05 -0500 received badge  Popular Question (source)
2019-01-10 03:17:42 -0500 commented answer change default security group rules

That's right.

2019-01-10 03:16:49 -0500 received badge  Supporter (source)
2019-01-10 01:11:21 -0500 received badge  Popular Question (source)
2019-01-09 10:08:48 -0500 commented answer change default security group rules

And will it apply to future projects? As you know, a new project gets only the default security group with specific rules. I want this default security group to be changed for future projects.

2019-01-09 06:16:47 -0500 asked a question How to find the openstack floating ip pool list

Hello,

This is the error I get when I try to find the floating IP pool name:

openstack floating ip pool list

Floating ip pool operations are only available for Compute v2 network.
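That error appears because floating IP pools are a nova-network (Compute v2) concept; with Neutron, the "pools" are the external networks. A sketch of the Neutron-era equivalent (the network name "public" below is only an example):

```shell
# List external networks; their names act as the floating IP pool names
openstack network list --external

# Allocate a floating IP from one of them
openstack floating ip create public
```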

2019-01-09 06:13:25 -0500 asked a question change default security group rules

Hello,

Is it possible to change the default security group rules?
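For an existing project, rules can be added to its default group from the CLI; a minimal sketch (note this edits only the current project's "default" group, not the template used for projects created later):

```shell
# "default" resolves to the current project's default security group
openstack security group rule create default \
    --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0

# Allow ICMP (ping) as well
openstack security group rule create default --protocol icmp
```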

2019-01-07 08:47:10 -0500 received badge  Notable Question (source)
2019-01-07 08:47:10 -0500 received badge  Famous Question (source)
2019-01-05 00:37:12 -0500 commented question How to use Nova or download instances with no controller

It depends on the backend storage type you are using. If the disks are located on external Cinder storage, there is nothing to do on the compute nodes; just find them and move them over. First of all you should find the disk files.

2018-12-29 16:28:08 -0500 received badge  Famous Question (source)
2018-12-26 06:13:33 -0500 received badge  Popular Question (source)
2018-12-26 05:37:58 -0500 commented question Neutron metadata agent is not working - 500 Internal Server Error

Right! How do I follow up on the "[Errno 111] ECONNREFUSED" error?

2018-12-26 03:55:27 -0500 commented question Neutron metadata agent is not working - 500 Internal Server Error

I have already changed nova_metadata_ip in metadata_agent.ini and restarted the Neutron and Nova services, but the problem still persists.

2018-12-26 02:37:44 -0500 commented question Neutron metadata agent is not working - 500 Internal Server Error

Thanks. I have guessed the root cause, but it is not solved yet. I use two IP ranges, one for management and one for public traffic. The control node has both IPs, but in the config files I used the management IP. From Cirros, http://<public-ip>:8775 works fine but http://<private>:8775 fails. Which config files should be updated?

2018-12-25 06:47:50 -0500 commented question Neutron metadata agent is not working - 500 Internal Server Error

I've edited the original question and attached the log.

2018-12-25 05:28:26 -0500 asked a question Neutron metadata agent is not working - 500 Internal Server Error

Hello,

I have a Packstack installation with 3 nodes (1 controller + 1 compute + 1 network), but the metadata proxy is not working. Here are the test results:

$ curl -v http://169.254.169.254
> GET / HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-pc-linux-gnu) libcurl/7.24.0 OpenSSL/1.0.0j zlib/1.2.6
> Host: 169.254.169.254
> Accept: */*
< HTTP/1.1 500 Internal Server Error
< Content-Length: 207
< Content-Type: text/html; charset=UTF-8
< Date: Tue, 25 Dec 2018 10:13:17 GMT
<
<html> <head> <title>500 Internal Server Error</title> </head> <body>

  • Also, the tcpdump result on the network node:

13:40:15.006881 IP 169.254.169.254.http > 10.20.30.8.47759: Flags [P.], seq 1:344, ack 143, win 227, options [nop,nop,TS val 1290763920 ecr 2367402], length 343: HTTP: HTTP/1.1 500 Internal Server Error

  • The nova_metadata_ip=<control-node-ip> and metadata_proxy_shared_secret settings seem to be OK.

  • Here is the metadata-agent.log:

2018-12-25 10:57:28.640 2121 INFO eventlet.wsgi.server [-] 10.20.30.8,<local> "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 500 len: 362 time: 0.2060108
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent [-] Unexpected error.: error: [Errno 111] ECONNREFUSED
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent Traceback (most recent call last):
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/metadata/agent.py", line 91, in __call__
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent     return self._proxy_request(instance_id, tenant_id, req)
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/metadata/agent.py", line 201, in _proxy_request
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent     body=req.body)
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1621, in request
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent     (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1363, in _request
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent     (response, content) = self._conn_request(conn, request_uri, method, body, headers)
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1284, in _conn_request
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent     conn.connect()
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent   File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 934, in connect
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent     raise socket.error, msg
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent error: [Errno 111] ECONNREFUSED
2018-12-25 10:57:30.876 2121 ERROR neutron.agent.metadata.agent
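The ECONNREFUSED in that traceback means the metadata agent itself got no listener when proxying to the Nova metadata API on port 8775. A rough checklist for narrowing it down, assuming a standard Packstack layout; <control-node-ip> is the placeholder from the question:

```shell
# On the controller: is the Nova metadata API listening, and on which address?
ss -tlnp | grep 8775

# From the network node: can the configured metadata host be reached?
curl -s -o /dev/null -w '%{http_code}\n' http://<control-node-ip>:8775/

# In metadata_agent.ini, nova_metadata_ip (later renamed nova_metadata_host)
# must point at an address the network node can actually reach
grep -E 'nova_metadata_(ip|host)' /etc/neutron/metadata_agent.ini
```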

2018-12-20 01:21:10 -0500 marked best answer Hypervisor summary shows incorrect total storage

Hello, I'm using SAN storage as the Cinder backend, and when I create an instance I can confirm the root disk is created on the SAN, but Nova thinks it is created on the compute nodes. I found this disk usage under the "Hypervisors" summary in the dashboard.

2018-12-20 01:21:10 -0500 received badge  Scholar (source)
2018-12-19 06:49:54 -0500 received badge  Notable Question (source)
2018-12-19 05:19:50 -0500 received badge  Commentator
2018-12-19 05:19:50 -0500 commented answer Hypervisor summary shows incorrect total storage

Thanks, Got it.

2018-12-19 00:12:16 -0500 received badge  Popular Question (source)
2018-12-18 13:13:17 -0500 commented answer Hypervisor summary shows incorrect total storage

But when I create an instance, the disk resides on the SAN; I can see it there. The hypervisor summary increases as well. It is confusing...

2018-12-18 10:53:49 -0500 commented answer Hypervisor summary shows incorrect total storage

They are the default OpenStack flavors: m1.small, m1.medium, etc.