
D20K's profile - activity

2015-03-11 05:37:59 -0500 received badge  Famous Question (source)
2015-02-12 04:00:46 -0500 received badge  Enlightened (source)
2015-02-12 04:00:46 -0500 received badge  Good Answer (source)
2015-02-02 07:58:51 -0500 received badge  Self-Learner (source)
2015-01-21 20:16:32 -0500 received badge  Famous Question (source)
2014-12-16 13:06:55 -0500 received badge  Famous Question (source)
2014-12-16 13:06:55 -0500 received badge  Notable Question (source)
2014-12-11 02:31:07 -0500 received badge  Notable Question (source)
2014-12-04 14:16:00 -0500 received badge  Nice Answer (source)
2014-11-11 07:28:32 -0500 received badge  Famous Question (source)
2014-11-05 06:14:11 -0500 received badge  Popular Question (source)
2014-11-01 05:35:26 -0500 received badge  Notable Question (source)
2014-11-01 03:35:39 -0500 commented answer Metadata request timed out issue on Icehouse

I love you so much :) Thanks!!!

2014-11-01 02:26:38 -0500 asked a question How to configure instance access to controller node?

Hello, I just installed Icehouse on a Windows 8.1 host with 3 VMware Ubuntu VMs. When I launch an instance with the CirrOS or Ubuntu 14.04 cloud image, the instance can't reach the metadata server.

The management network is 10.0.0.0/24 (VMware NAT); the instance tunnel network is 10.0.1.0/24 (VMware Host Only).

The instance can reach the external network (192.168.0.0/24) but cannot reach the management network where the metadata server (the controller node) lives.

How can I configure nova.conf?
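(For reference, this is how I understand Icehouse normally wires up the metadata path; METADATA_SECRET is a placeholder and both values must match. This is a sketch of the intended setup, not my working config:)

# /etc/nova/nova.conf on the controller
[DEFAULT]
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATA_SECRET

# /etc/neutron/metadata_agent.ini on the network node
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET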

CirrOS instance

$ ifconfig

eth0      Link encap:Ethernet  HWaddr FA:16:3E:F1:AF:72
          inet addr:172.16.0.9  Bcast:172.16.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fef1:af72/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:240 errors:0 dropped:0 overruns:0 frame:0
          TX packets:263 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:32136 (31.3 KiB)  TX bytes:26450 (25.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ ping 10.0.0.11
PING 10.0.0.11 (10.0.0.11): 56 data bytes
^C
--- 10.0.0.11 ping statistics ---
30 packets transmitted, 0 packets received, 100% packet loss

$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1): 56 data bytes
64 bytes from 192.168.0.1: seq=0 ttl=63 time=3.152 ms
64 bytes from 192.168.0.1: seq=1 ttl=63 time=2.407 ms
64 bytes from 192.168.0.1: seq=2 ttl=63 time=1.973 ms
64 bytes from 192.168.0.1: seq=3 ttl=63 time=2.126 ms
^C
--- 192.168.0.1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 1.973/2.414/3.152 ms

CirrOS Console log

checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 9.87. request failed
failed 2/20: up 12.21. request failed
failed 3/20: up 14.44. request failed
failed 4/20: up 16.67. request failed
failed 5/20: up 18.84. request failed
failed 6/20: up 21.05. request failed
failed 7/20: up 23.26. request failed
failed 8/20: up 25.43. request failed
failed 9/20: up 27.61. request failed
failed 10/20: up 29.80. request failed
failed 11/20: up 32.00. request failed
failed 12/20: up 34.18. request failed
failed 13/20: up 36.36. request failed
failed 14/20: up 38.52. request failed
failed ...
(more)
2014-11-01 02:07:07 -0500 commented question Network configuration on network node

@dbaxps I want the controller node to be reachable from the network node's external IP (192.168.0.21), because the launched instance cannot connect to the metadata server.

2014-11-01 01:48:10 -0500 commented question Network configuration on network node

@dbaxps the internal 10.0.1.0/24 network is the instance tunnel network on VMware Host Only. I just updated my question and added the ovs-vsctl show output (click more).

2014-10-31 07:24:15 -0500 received badge  Popular Question (source)
2014-10-31 06:02:50 -0500 received badge  Nice Question (source)
2014-10-31 04:00:21 -0500 asked a question Network configuration on network node

Hi, in a new installation of Icehouse on a Windows 8.1 host with VMware Ubuntu 14.04 LTS VMs (3-node configuration), the network node's external interface eth2 doesn't work well.

I configured the management network with VMware NAT and the external network with VMware Bridged networking, as shown below.

(image: network layout diagram)

eth0 works well on all nodes, but the network node's eth2 does not.

Even if I disconnect eth0, eth2 cannot reach the gateway 192.168.0.1.
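(For reference, the install guide configures the external interface without an IP address, roughly like this:)

# /etc/network/interfaces on the network node
auto eth2
iface eth2 inet manual
    up ip link set dev $IFACE up
    up ip link set dev $IFACE promisc on
    down ip link set dev $IFACE promisc off
    down ip link set dev $IFACE down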

Controller node

root@controller:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0c:29:fc:33:d6
      inet addr:10.0.0.11  Bcast:10.0.0.255  Mask:255.255.255.0
      inet6 addr: fe80::20c:29ff:fefc:33d6/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:608622 errors:0 dropped:0 overruns:0 frame:0
      TX packets:454149 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:166276135 (166.2 MB)  TX bytes:130771362 (130.7 MB)

lo        Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:65536  Metric:1
      RX packets:2015193 errors:0 dropped:0 overruns:0 frame:0
      TX packets:2015193 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:667960155 (667.9 MB)  TX bytes:667960155 (667.9 MB)

root@controller:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.0.0.2        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        *               255.255.255.0   U     1      0        0 eth0

root@controller:~# ping -c 3 network
PING network (10.0.0.21) 56(84) bytes of data.
64 bytes from network (10.0.0.21): icmp_seq=1 ttl=64 time=0.399 ms
64 bytes from network (10.0.0.21): icmp_seq=2 ttl=64 time=1.22 ms
64 bytes from network (10.0.0.21): icmp_seq=3 ttl=64 time=0.983 ms

--- network ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.399/0.869/1.225/0.346 ms

root@controller:~# ping -c 3 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=128 time=0.210 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=128 time=0.167 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=128 time=0.204 ms

--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.167/0.193/0.210/0.024 ms

root@controller:~# ping -c 3 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=128 time=2.95 ms
64 ...
(more)
2014-10-29 21:16:45 -0500 received badge  Famous Question (source)
2014-10-29 09:48:20 -0500 received badge  Notable Question (source)
2014-10-29 09:44:11 -0500 received badge  Commentator
2014-10-29 09:44:11 -0500 commented answer ssh public key can't be injected into instance

@dbaxps thanks, I just updated the question.

2014-10-29 09:42:07 -0500 commented answer ssh public key can't be injected into instance

Yes, I already have "enabled_apis=...,metadata" in nova.conf. I launched CirrOS and Ubuntu 14.04 cloud images; both show the same metadata errors.

2014-10-29 09:13:40 -0500 commented answer ssh public key can't be injected into instance

Everything checks out fine..

2014-10-29 08:36:56 -0500 received badge  Popular Question (source)
2014-10-29 08:32:57 -0500 commented question ssh public key can't be injected into instance

@dbaxps yes, the problem is not fixed yet..

2014-10-29 08:06:46 -0500 commented question ssh public key can't be injected into instance

@dbaxps should "metadata_host=(IP of Controller)" be added to nova.conf on all nodes?

2014-10-29 05:13:04 -0500 asked a question ssh public key can't be injected into instance

I installed Icehouse on a Windows 8.1 host with VMware VMs in a 4-node configuration (controller, compute, network, block storage).

But I cannot connect to the instance with my ssh private key, so I checked the instance console log and found the following.

When the instance launches, the console log shows that its requests to http://169.254.169.254/2009-04-04/ins... fail:

checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 18.98. request failed
failed 2/20: up 21.51. request failed
failed 3/20: up 23.76. request failed
failed 4/20: up 26.02. request failed
failed 5/20: up 28.28. request failed
failed 6/20: up 30.51. request failed
failed 7/20: up 32.93. request failed
failed 8/20: up 35.16. request failed
failed 9/20: up 37.38. request failed

As far as I know, when an instance boots it contacts the nova-api-metadata service, which is how the ssh public key gets injected.

So I confirmed that the ssh public key is not being injected into the instance correctly.

=== sshd host keys ===
-----BEGIN SSH HOST KEY KEYS-----
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgnKr+B121EUSFAhTWSljdvK8RjBZfm7FhcKtMq0HpIHukUdx06gwqJIcAPztF2f3PsyvNwunVuTkoehvH+9Dnu/IfXBsESY1LFzrIjtINLxOzh6zYnAt8DQbUmINEQSXykbUWfsX1c+tn6cgkXSX2kVEBv7qRgOHuDGpR+vpj146Gi0= root@cirros
ssh-dss AAAAB3NzaC1kc3MAAACBAKF8eavLiWdZLzr9ODrya0QTvEv+QCh4p58pzNAVduiy5N32m3sSuyLTfa0gHP+b9D0QmbCfQVNXj1uLW/sUMCe5/0leIwLhQOn1B4yAkPQKEXiXTTAepvKBa9Lyotv/0SNyaHICrgBsC8ElK8XR1ArX6c6fyP03O32/+g/g3IBNAAAAFQCWQXaKsZz7qYh7tk8dakZPDzXkuQAAAIEAm0/CIfBy5AhhPiqVbOoBMBYeCofAQ0mAKjv7g8IVWYGosp8/lspqYP6975DBF4wsQq67eTl6KRI6oFDtJdHOcCy0ATcx/f/eoJjM85E16fkz+VfXEcKFDFG/XHJeUfsBjScmC2N0z8m90GTijcO6r88pnG2QCp0cSNXKr7ij7nIAAACAP9c62Q3JZxMYTtdoLhwC0EY85dEBxmaUO2mmDXajTKxNPrPFAHuUO+lInw9O8JAhzEwo3mo7chTesoXWziUQB3uhkhTw3j8JHN5HBBvCJtzcRaisp1NhwwaR9leiBGdF+XicPOo4okzZiCx1pAlVEvRM0PXfMqIvgc0ZvvfjyHw= root@cirros
-----END SSH HOST KEY KEYS-----
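(Only host keys appear. For comparison, one way to query the injected key directly from inside the instance, assuming the metadata service is reachable at all, is the EC2-style path:)

$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key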

So I checked the metadata section in the admin guide, http://docs.openstack.org/admin-guide... ; the guide says that in multi-host mode I should set metadata_host = 127.0.0.1 in /etc/nova/nova.conf.

Is this the correct configuration? And I am not sure whether the setting belongs on the controller node or the compute node.
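(My current reading of the guide, sketched with my controller's management IP filled in as an assumption:)

# /etc/nova/nova.conf on the controller
[DEFAULT]
metadata_host = 10.0.0.11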

Network node

$ ip netns exec qrouter-router-id   iptables -t nat -S | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

$ ip netns exec qrouter-router-id  netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      42020/python

$ ps -ef | grep 42020
root     42020     1  0 17:03 ?        00:00:00 /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/166fa743-68e0-41ee-961f-91d039999fa2.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=166fa743-68e0-41ee-961f-91d039999fa2 --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-166fa743-68e0-41ee-961f-91d039999fa2.log --log-dir=/var/log/neutron

Controller node

root@controller:~# netstat -lntp | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      12893/python

root@controller:~# ps -ef | grep 12893
nova     12893     1  1 22:28 ?        00:00:32 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova     12899 12893  0 22:28 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova     12900 12893  0 22:28 ?        00:00:07 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova     12907 12893  0 22:28 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --config-file=/etc/nova/nova.conf
2014-10-29 05:01:54 -0500 received badge  Famous Question (source)
2014-10-21 02:44:36 -0500 received badge  Popular Question (source)
2014-10-18 00:48:33 -0500 marked best answer dpkg: error processing package cinder-common (--configure):

In a new installation of Icehouse on an Ubuntu 14.04 host with VMware VMs (3-node configuration), at the point of installing cinder-api and cinder-scheduler on the controller, I get this error:

Setting up cinder-common (1:2014.1.2-0ubuntu1) ...
Traceback (most recent call last):
   File "/usr/bin/cinder-manage", line 62, in <module>
      from oslo import messaging
ImportError: cannot import name messaging
dpkg: error processing package cinder-common (--configure):
  subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of cinder-api:
   cinder-api depends on cinder-common (= 1:2014.1.2-0ubuntu1); however:
      Package cinder-common is not configured yet.

dpkg: error processing package cinder-api (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
dpkg: dependency problems prevent configuration of cinder-scheduler:
 cinder-scheduler depends on cinder-common (= 1:2014.1.2-0ubuntu1); however:
  Package cinder-common is not configured yet.

dpkg: error processing package cinder-scheduler (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup error from a previous failure.
Errors were encountered while processing:
 cinder-common
 cinder-api
 cinder-scheduler
E: Sub-process /usr/bin/dpkg returned an error code (1)

I tried like this:

apt-get remove cinder-common cinder-api cinder-scheduler
apt-get update && apt-get upgrade
apt-get install cinder-api cinder-scheduler

but the problem didn't get resolved. Can you tell me how to resolve it?

2014-10-18 00:48:33 -0500 received badge  Self-Learner (source)
2014-10-18 00:48:33 -0500 received badge  Teacher (source)
2014-10-15 21:22:02 -0500 received badge  Notable Question (source)
2014-10-13 06:41:03 -0500 received badge  Notable Question (source)
2014-10-13 06:10:10 -0500 received badge  Student (source)
2014-10-13 04:56:18 -0500 answered a question nova.cir.libvirt.driver error in nova boot

I removed all installed qemu/kvm packages and reinstalled nova-compute-kvm. It works well now.

To list the installed kvm/qemu packages:

dpkg --get-selections | grep qemu

dpkg --get-selections | grep kvm

Remove all the kvm/qemu packages listed, then reinstall:
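(For the bulk removal, something like this should work; a sketch, not the exact command I ran:)

dpkg --get-selections | grep -E 'qemu|kvm' | awk '{print $1}' | xargs apt-get -y purge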

apt-get install nova-compute-kvm

2014-10-13 02:14:36 -0500 commented question nova.cir.libvirt.driver error in nova boot

In my nova-compute.conf, virt_type is kvm.

2014-10-13 01:52:28 -0500 asked a question nova.cir.libvirt.driver error in nova boot

In a new installation of Icehouse on a Windows 8.1 host with VMware VMs (3-node configuration), at the point of booting a nova instance, I get this error:

$ nova boot demo-instance1 --image 56e924ad-cd12-4b60-b6a3-a0321c1c047d --flavor 1 --key_name demo-key --security-group default --nic net-id=8112711d-1fd1-440b-a651-1ac157e7a310 (in controller node)

$ cat /var/log/nova/nova-compute.log | grep ERROR (in compute node)

2014-10-13 15:34:04.089 6102 ERROR nova.virt.libvirt.driver [-] [instance: dd91d996-a479-463a-b109-701672573adc] During wait destroy, instance disappeared.
2014-10-13 15:34:23.454 6102 ERROR nova.virt.libvirt.driver [req-8a8d7619-7067-43db-b3e1-4ba5e4d67c71 501900c1dfd4498996ecd0c1e03dda1b a2f4005d8d594260939247dd8e74d208] An error occurred while trying to launch a defined domain with xml: <domain type='kvm'>
2014-10-13 15:34:23.458 6102 ERROR nova.compute.manager [req-8a8d7619-7067-43db-b3e1-4ba5e4d67c71 501900c1dfd4498996ecd0c1e03dda1b a2f4005d8d594260939247dd8e74d208] [instance: db2ee84a-05f0-4e2e-b272-6083f1544b83] Instance failed to spawn
2014-10-13 15:34:24.535 6102 ERROR nova.compute.manager [req-8a8d7619-7067-43db-b3e1-4ba5e4d67c71 501900c1dfd4498996ecd0c1e03dda1b a2f4005d8d594260939247dd8e74d208] [instance: db2ee84a-05f0-4e2e-b272-6083f1544b83] Error: internal error: process exited while connecting to monitor: *** stack smashing detected ***: /usr/bin/qemu-system-x86_64 terminated

Could you advise me how to resolve this problem, please?
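(Since this is KVM nested inside VMware, one workaround I have seen suggested is falling back to plain qemu; a sketch, not a confirmed fix:)

# /etc/nova/nova-compute.conf on the compute node
[libvirt]
virt_type = qemu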

2014-10-13 01:47:48 -0500 marked best answer Nova cannot boot or delete image

I am deploying the OpenStack Havana release on Ubuntu 12.04 LTS.

I am following the OpenStack Cloud Computing Cookbook, Second Edition, but I am hitting some errors.

When I run nova boot or nova delete, the instance ends up in ERROR status:

nova list

ID   Name         Status   Task State   Power State   Networks
~    Cirros       ERROR    None         NOSTATE
~    myInstance   ERROR    None         NOSTATE

Sorry, I don't have enough points to attach images..

So I checked the logs on the compute node using the following command:

tail -f /var/log/nova/nova-* | grep -i error

The logs show this:

2014-10-06 06:41:10.017 1426 ERROR nova.openstack.common.rpc.amqp [req-945070f0-7b03-458e-a0ca-d112735c20b1 32a3bfb103564ee3a77183ee12ea63c7 2b1e0e785e854d61be1e04fc5905ba6b] Exception during message handling
2014-10-06 07:54:50.474 1426 WARNING nova.compute.manager [-] Found 5 in the database and 0 on the hypervisor.

I watched all the log files but couldn't find anything more specific...

I searched for this error on Google but couldn't resolve the problem. Could you tell me how to fix it?

2014-10-13 01:45:48 -0500 commented answer dpkg: error processing package cinder-common (--configure):

Well.. try this: pip install oslo.messaging --upgrade, then restart the cinder services.
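(Spelled out; the dpkg --configure -a step, to finish the half-configured packages, is my addition:)

pip install --upgrade oslo.messaging
dpkg --configure -a
service cinder-api restart
service cinder-scheduler restart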

2014-10-13 01:43:36 -0500 commented answer Ntp configuration in nodes

I reinstalled ntp and removed all restrict entries; it works well now! :) Thanks for the advice.

2014-10-13 01:42:23 -0500 commented answer Ntp configuration in nodes

Thanks for the advice :) I reinstalled ntp, left the controller node's config untouched, and on the other nodes commented out all the server entries and added "server controller". It works! I don't know why it didn't work before..
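(The resulting /etc/ntp.conf on the non-controller nodes is roughly this; the hostname controller resolving via /etc/hosts is an assumption of my setup:)

# /etc/ntp.conf on compute/network nodes
server controller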

2014-10-13 00:48:05 -0500 answered a question nova list ERROR: HTTPConnectionPool(host='controller', port=8774): Max retries exceeded with url: /v2/c677d3d3c35644579fa8da9b4342e19d/servers/detail (Caused by <class 'socket.error'>: [Errno 111] Connection refused)

pip install oslo.rootwrap --upgrade

service nova-api restart

It should work.

2014-10-13 00:38:04 -0500 received badge  Popular Question (source)