dwyaneini's profile - activity

2015-03-16 23:51:10 -0500 received badge  Self-Learner (source)
2015-01-19 20:20:47 -0500 received badge  Famous Question (source)
2015-01-04 22:44:40 -0500 received badge  Notable Question (source)
2015-01-04 20:23:57 -0500 received badge  Popular Question (source)
2015-01-04 19:50:05 -0500 commented answer Can't access to VM (already worked fine for 5 months)

Existing instances with attached volumes stay in "shutoff" status. When I attempt to start an instance, it goes to "Error" status. And when I click the "Soft Reboot Instances" button, it shows the error: "You are not allowed to soft reboot the instance."

2015-01-04 19:49:28 -0500 commented answer Can't access to VM (already worked fine for 5 months)

"nova service-list" and "neutron agent-list" both show all services and agents as up and working fine.
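
A quick way to spot a dead agent in saved "neutron agent-list" output (a sketch; the column positions — host in column 4, alive in column 5 — are an assumption based on the Icehouse CLI table layout):

```shell
# Sketch: scan saved `neutron agent-list` output and report agents
# whose "alive" column is not ":-)". Column layout is assumed.
check_agents() {
  awk -F'|' 'NF > 5 {
    gsub(/^ +| +$/, "", $4)   # host column
    gsub(/^ +| +$/, "", $5)   # alive column (":-)" or "xxx")
    if ($5 != "" && $5 != "alive" && $5 != ":-)")
      print "agent down on " $4
  }' "$1"
}
# usage: neutron agent-list > /tmp/agents.txt && check_agents /tmp/agents.txt
```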

2015-01-04 01:16:16 -0500 asked a question Can't access to VM (already worked fine for 5 months)

Our OpenStack system has worked fine for at least 5 months. We deployed the OpenStack Icehouse release with a three-node architecture (controller node, network node, and compute node1) on Debian Wheezy. I use the ml2 plugin with the GRE type driver.

I didn't make any configuration changes, not even tiny ones. When I attempt to log in through the dashboard, it shows the following errors:

ConnectionError at /project/
HTTPConnectionPool(host='hostIp', port=8774): Max retries exceeded with url: /v2/554ca2c2830943819cb85ba45e059974/extensions (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
Exception Location: /usr/lib/python2.7/dist-packages/requests/adapters.py in send, line 378

Then, through Google and Ask OpenStack, I found two useful references: https://ask.openstack.org/en/question... & https://ask.openstack.org/en/question... . I had provided sufficient resources for the VMs, but found that the nova-api service on the controller node wasn't working. After analyzing the relevant log files and upgrading oslo.rootwrap, I could successfully reach the dashboard again.
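
The log analysis mentioned above can be scripted; a minimal sketch (the log path in the usage line is just an example):

```shell
# Sketch: print the most recent ERROR/TRACE lines from a service log,
# which is usually enough to find the first real failure.
recent_errors() {
  grep -E ' ERROR | TRACE ' "$1" | tail -n "${2:-20}"
}
# usage: recent_errors /var/log/nova/nova-api.log 50
```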

But when I start the existing VMs, the system reports "failed to start instance", and all volumes go to the error state. The nova-compute log on the compute node shows these errors:

2015-01-04 14:47:32.461 16753 TRACE nova.compute.manager [instance: 2246dc26-5b9e-4883-be57-aac37c8b5eb9]
2015-01-04 14:47:32.774 16753 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-5a36c7ac-780a-4989-8539-161151a07241)
2015-01-04 14:47:32.774 16753 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-01-04 14:47:32.774 16753 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
...
2015-01-04 14:47:32.774 16753 TRACE oslo.messaging.rpc.dispatcher     raise exceptions.from_response(resp, body)
2015-01-04 14:47:32.774 16753 TRACE oslo.messaging.rpc.dispatcher ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-5a36c7ac-780a-4989-8539-161151a07241)
2015-01-04 14:47:32.774 16753 TRACE oslo.messaging.rpc.dispatcher
2015-01-04 14:47:32.778 16753 ERROR oslo.messaging._drivers.common [-] Returning exception The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-5a36c7ac-780a-4989-8539-161151a07241) to caller
2015-01-04 14:47:32.779 16753 ERROR oslo.messaging._drivers.common [-] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n    incoming.message))\n', '  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n    result = getattr(endpoint, method)(ctxt, **new_args)\n', '  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, in decorated_function\n    return function(self, context, *args, **kwargs)\n', '  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped\n    payload)\n', '  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__\n    six.reraise(self.type_, self.value, self ...
2014-12-30 07:46:56 -0500 received badge  Notable Question (source)
2014-12-30 07:46:56 -0500 received badge  Famous Question (source)
2014-11-26 13:13:11 -0500 received badge  Famous Question (source)
2014-09-04 09:47:39 -0500 received badge  Necromancer (source)
2014-09-04 09:47:39 -0500 received badge  Teacher (source)
2014-09-04 09:26:35 -0500 received badge  Nice Question (source)
2014-08-25 22:20:54 -0500 commented question Plugin ml2 configuration for neutron on icehouse, debian wheezy

@arthur.lutz So you'd better check your configuration carefully, then start your services. You can use the command "service SERVICE_NAME status" to see each service's running status.
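
A sketch of the loop behind that advice; the service names are assumptions for a Debian Icehouse install:

```shell
# Sketch: check each Neutron-related service in turn and report any
# that are not running. Service names are assumed for Debian/Icehouse.
neutron_services='neutron-l3-agent neutron-dhcp-agent
                  neutron-metadata-agent neutron-plugin-openvswitch-agent'
check_neutron() {
  for svc in $neutron_services; do
    service "$svc" status >/dev/null 2>&1 || echo "$svc not running"
  done
}
```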

2014-08-25 22:19:12 -0500 commented question Plugin ml2 configuration for neutron on icehouse, debian wheezy

@arthur.lutz First, search the log files of the Neutron-related services for any useful information. If there is none, recheck the configuration of those services. In my experience, even a tiny configuration change can make a service fail to start.

2014-08-25 10:05:25 -0500 received badge  Good Question (source)
2014-08-11 20:40:44 -0500 received badge  Famous Question (source)
2014-08-07 00:25:17 -0500 received badge  Famous Question (source)
2014-07-24 19:21:53 -0500 received badge  Famous Question (source)
2014-07-22 01:01:10 -0500 received badge  Commentator
2014-07-22 01:01:10 -0500 commented answer Hypervisor Summary confused

When overcommitting CPU and RAM on compute nodes, virtual cores can go up to 16 per physical core, and the RAM allocation ratio up to 1.5. You can refer to the OpenStack Operations Guide.
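
Those ratios map to two nova.conf scheduler options; a sketch using the default values described in the Operations Guide (option names as in the Icehouse release):

```ini
# /etc/nova/nova.conf (scheduler) — overcommit ratios, Icehouse defaults
cpu_allocation_ratio = 16.0   ; up to 16 vCPUs per physical core
ram_allocation_ratio = 1.5    ; up to 1.5x physical RAM
```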

2014-07-22 00:55:40 -0500 answered a question Hypervisor Summary confused

You can modify the disk storage path via "state_path" in /etc/nova/nova.conf. By default, the path is /var/lib/nova. Maybe this partition does not have enough space. You should copy your Nova data to the new path and change the owner and group accordingly.
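
The copy-and-chown step might look like this (a sketch; the /data/nova path and the nova:nova owner are assumptions for a Debian install, and Nova services should be stopped first):

```shell
# Sketch: copy the Nova state directory to a larger partition and fix
# ownership; the owner is a parameter (nova:nova on a typical install).
move_state_path() {
  src="$1"; dst="$2"; owner="$3"
  cp -a "$src" "$dst" && chown -R "$owner" "$dst"
}
# usage (as root, with Nova services stopped):
#   move_state_path /var/lib/nova /data/nova nova:nova
# then set state_path = /data/nova in /etc/nova/nova.conf and restart.
```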

2014-07-21 01:34:06 -0500 commented answer ovs_neutron_agent out of sync with plugin, agent tunnel out of sync with plugin

Thanks for your response, gkeller2. After reading the source code for this part, I agree with you. When the agent first boots, it always logs "Agent out of sync with plugin!", then sets the sync state to false and synchronizes. My earlier mistake was misunderstanding this message as something that shows up every time.

2014-07-21 00:34:10 -0500 received badge  Notable Question (source)
2014-07-18 07:16:59 -0500 received badge  Notable Question (source)
2014-07-18 05:03:55 -0500 received badge  Popular Question (source)
2014-07-17 02:43:29 -0500 commented answer glance-cache-prefetcher displays errors related to storage backends not configured

Actually, I uncommented some stores, but the other unneeded, still-commented stores continue to produce warnings in the log. Since the warnings are harmless, I can ignore them.

2014-07-17 01:13:42 -0500 asked a question ovs_neutron_agent out of sync with plugin, agent tunnel out of sync with plugin

I'm deploying the OpenStack Icehouse release with a three-node architecture (controller node, network node, and compute node1) on Debian Wheezy. I use the ml2 plugin with the GRE type driver.

Now I notice some exception messages in the ovs-agent.log on both the network node and the compute node.

2014-07-16 01:55:37.756 29090 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-185a38a4-7c00-4af6-ba83-b1872eb34bbd None] Agent initialized successfully, now running...
2014-07-16 01:55:37.762 29090 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-185a38a4-7c00-4af6-ba83-b1872eb34bbd None] Agent out of sync with plugin!
2014-07-16 01:55:37.763 29090 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-185a38a4-7c00-4af6-ba83-b1872eb34bbd None] Agent tunnel out of sync with plugin!
2014-07-16 01:55:37.778 29090 INFO neutron.openstack.common.rpc.common [req-185a38a4-7c00-4af6-ba83-b1872eb34bbd None] Connected to AMQP server on 192.168.0.11:5672

From http://openstack.redhat.com/forum/dis... & https://lists.launchpad.net/openstack... : the first link suggests allowing the TCP connection in iptables, but I have disabled all firewall rules, so that does not apply.

I also looked at the Neutron source code ( https://github.com/openstack/neutron/... ), but still have no idea.

I just want to understand why these messages appear and then eliminate them. Thanks in advance.

2014-07-16 05:04:55 -0500 received badge  Notable Question (source)
2014-07-16 01:14:33 -0500 commented question glance-cache-prefetcher displays errors related to storage backends not configured

I have the same errors and warnings as you. Have you solved this already?

2014-07-15 02:52:16 -0500 received badge  Popular Question (source)
2014-07-13 20:04:09 -0500 commented question nova-novncproxy service fail to start

@smaffulli When I hit this problem, I searched this site and Googled, but the suggestions did not work for me. So I came here to ask my own question to solve the puzzle.

2014-07-13 19:59:56 -0500 commented question nova-novncproxy service fail to start

@HoangDo I checked my configuration in detail and found no misconfiguration. In the end, I updated the apt packages and reinstalled nova-novncproxy; then it worked fine.

2014-07-13 06:08:59 -0500 received badge  Popular Question (source)
2014-07-11 03:35:46 -0500 received badge  Popular Question (source)
2014-07-11 03:34:40 -0500 asked a question nova-novncproxy service fail to start

I'm deploying the OpenStack Icehouse release with a three-node architecture (controller node, network node, and compute node1) on Debian Wheezy.

I've launched an instance and am now trying to access it through the VNC console. After much Googling, I found that nova-novncproxy does not work. I'm sure nova-novncproxy is installed. Starting the nova-novncproxy service reports OK, but the service actually fails to start. I also don't have a /var/log/upstart/nova-novncproxy.log file; what could be the reason? There is no log related to novncproxy at all.

It's so frustrating. Could anyone offer some suggestions? Thanks a lot.

2014-07-10 08:23:24 -0500 answered a question libvirtError: internal error Cannot find suitable CPU model for given data

You can set

libvirt_cpu_mode=host-passthrough

in nova.conf. That solved the problem for me; I hope it helps you, too.
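
For context, a sketch of where the option goes (section placement is an assumption; on an Icehouse nova.conf it sits under [DEFAULT] on the compute node):

```ini
# /etc/nova/nova.conf on the compute node
[DEFAULT]
libvirt_cpu_mode = host-passthrough
```

Restart the nova-compute service afterwards for the change to take effect.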

2014-07-10 01:54:21 -0500 answered a question neutron-plugin-openvswitch-agent fail to start, with ml2 plugin and gre type

Sorry for asking such a simple question; it was just due to a spelling error in a word. I have solved it.

2014-07-09 20:50:38 -0500 asked a question neutron-plugin-openvswitch-agent fail to start, with ml2 plugin and gre type

I'm deploying the OpenStack Icehouse release with a three-node architecture (controller node, network node, and compute node1) on Debian Wheezy.

I use the ml2 plugin with the GRE type driver. When I finished all the configuration and tried to start the Neutron agents, neutron-l3-agent, dhcp-agent, and metadata-agent all started successfully, but neutron-plugin-openvswitch-agent failed to start. It reports:

 Restarting Openstack Neutron OpenVSwitch Plugin Agent with ml2 plugin: neutron-openvswitch-agentstart-stop-daemon: warning: failed to kill 7070: No such process

There are no useful logs. I have already run the command "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" to check for configuration errors, and found none.

Previously, I succeeded in starting all Neutron services using the ovs plugin with the GRE type driver.

Any suggestions? Thanks to all.

2014-07-07 20:40:15 -0500 answered a question Connection to neutron failed: Maximum attempts reached

I also hit this failure. Make sure that all Neutron-related services are running.