
Gonzo's profile - activity

2017-11-22 09:30:06 -0500 answered a question OpenStack Ocata cluster: error when adding a compute node, but compute1 works fine; after shutting compute1 down, the remaining compute2 reports the error even though all services are running

I have the same problem in OpenStack Pike. Every time I try to create an instance I get the same error:

.... ERROR nova.image.glance [req-17ea432a-e3d7-4951-887a-26caf94dca87 be97e12b9ac343478a217a2557ed8932 13d14265860a45b6beb53f5439c1fd4e - default default] Error writing to /var/lib/nova/instances/_base/8ec2db2ad5cde6229cb4590aa66b9bdc9dfd2b38.part: 'NoneType' object is not iterable
.... ERROR nova.compute.manager [req-17ea432a-e3d7-4951-887a-26caf94dca87 be97e12b9ac343478a217a2557ed8932 13d14265860a45b6beb53f5439c1fd4e - default default] [instance: da9cb8f7-77e1-42f7-ad52-f8463f7dc077] Instance failed to spawn: TypeError: 'NoneType' object is not iterable ...

and the /var/lib/nova/instances/_base directory on node1 has the correct permissions:

pwd; ls -l

/var/lib/nova/instances

...

drwxr-xr-x. 2 nova nova 232 Nov 22 16:12 _base

pwd; ls -l

/var/lib/nova/instances/_base

total 44585288

-rw-r--r--. 1 qemu qemu 8589934592 Nov 22 16:04 7a1ec8920cd3b1a7a0ec548228ca6f29e650f3f0

What is going on?
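In case it helps narrow things down, these are the checks I am running next from the compute node (just a sketch; 'controller' and the image UUID below are placeholders for my real values):

# grep -A 3 '^\[glance\]' /etc/nova/nova.conf           # does api_servers point at a reachable glance endpoint?
# curl -s http://controller:9292/ | head -c 200          # should return the glance API version document
# openstack image show <image-uuid> -f value -c status   # should print "active"

My suspicion is that nova gets no data back from glance while downloading the image into _base, which would explain the 'NoneType' object is not iterable.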

2017-11-22 09:19:06 -0500 received badge  Supporter (source)
2017-01-11 15:12:18 -0500 received badge  Famous Question (source)
2016-02-10 03:34:54 -0500 received badge  Notable Question (source)
2016-02-10 03:34:54 -0500 received badge  Popular Question (source)
2016-02-04 04:53:34 -0500 asked a question CPU at 100% after instance snapshot with nova image-create

Hi All,

I am seeing some weird behavior with the latest version of nova (Liberty). After taking a (live) snapshot of my instances with: nova image-create

I can see that all the instances appear with their CPUs at 100% in htop. This is obviously not true, because with 'top' I can see that the servers are fine. It is not an 'htop' issue on a single instance, because it happens in all the instances regardless of their OS: CentOS 7, Debian 8, etc.

It is a very annoying issue because you cannot trust what htop is saying, and I have to restart the instance to make the CPUs show properly.
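For what it's worth, this is how I am checking what htop actually reads (a minimal sketch, run inside an affected guest; htop builds its CPU meters from /proc/stat):

$ grep '^cpu ' /proc/stat; sleep 2; grep '^cpu ' /proc/stat

If mostly the idle column advances between the two samples while htop still shows 100%, the kernel counters are fine and only the display is wrong.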

Does anybody know what is happening here and how to solve it?

Regards,

2015-11-19 17:24:11 -0500 received badge  Famous Question (source)
2015-10-12 17:07:24 -0500 received badge  Popular Question (source)
2015-10-12 17:07:24 -0500 received badge  Notable Question (source)
2015-10-05 11:37:16 -0500 received badge  Famous Question (source)
2015-10-05 11:37:16 -0500 received badge  Notable Question (source)
2015-09-14 11:28:59 -0500 commented question Dhcp and metadata not working

I appreciate your help, but I don't see how the link you suggest could help me solve this issue. How can I enable dnsmasq logs on the network node?
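In the meantime, this is what I plan to try on the network node (a sketch; it assumes the DHCP agent points dnsmasq at /etc/neutron/dnsmasq-neutron.conf, as it does in my setup, and the log path is just the one I picked, which must be writable by the dnsmasq user):

# echo 'log-dhcp' >> /etc/neutron/dnsmasq-neutron.conf
# echo 'log-facility=/var/log/neutron/dnsmasq.log' >> /etc/neutron/dnsmasq-neutron.conf
# systemctl restart neutron-dhcp-agent

log-dhcp and log-facility are standard dnsmasq options.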

2015-09-14 10:44:06 -0500 asked a question Dhcp and metadata not working

Hi All,

I have a basic configuration with the latest version (Kilo) on CentOS 7, exactly as the install guide describes: one combined controller/network node and one compute node (node1), using GRE tenant networks.

The problem is that when I create an instance on a tenant network, DHCP is not working and the instance does not get an IP. If I launch a CirrOS image I always see this error:

Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
Usage: /sbin/cirros-dhcpc <up|down>
No lease, failing
WARN: /etc/rc3.d/S40-network failed
cirros-ds 'net' up at 183.22
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 183.26. request failed
failed 2/20: up 185.60. request failed
failed 3/20: up 187.62. request failed

These are my subnets:

# neutron subnet-list
+--------------------------------------+--------------+----------------+----------------------------------------------------+
| id                                   | name         | cidr           | allocation_pools                                   |
+--------------------------------------+--------------+----------------+----------------------------------------------------+
| 332d11b9-82f3-46c3-b5e5-965ef41c51dc | priv-subnet  | 10.200.10.0/24 | {"start": "10.200.10.2", "end": "10.200.10.254"}   |
| f2306e8e-d063-4fe0-a905-03f542f175f8 | local-subnet | 192.168.1.0/24 | {"start": "192.168.1.110", "end": "192.168.1.140"} |
+--------------------------------------+--------------+----------------+----------------------------------------------------+

# neutron subnet-show 332d11b9-82f3-46c3-b5e5-965ef41c51dc
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "10.200.10.2", "end": "10.200.10.254"} |
| cidr              | 10.200.10.0/24                                   |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 10.200.10.1                                      |
| host_routes       |                                                  |
| id                | 332d11b9-82f3-46c3-b5e5-965ef41c51dc             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | priv-subnet                                      |
| network_id        | 25603794-1a5b-4131-aa69-2c9feeaa21a1             |
| subnetpool_id     |                                                  |
| tenant_id         | 9a3f25a506a14e36829f210531226d38                 |
+-------------------+--------------------------------------------------+

dnsmasq is running, and all the neutron services, like the DHCP agent, are running. So what is the issue here?

# ps aux | grep dnsmasq
nobody    4218  0.0  0.0  15524   772 ?        S    17:29   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap5dd22be8-c6 --except-interface=lo --pid-file=/var/lib/neutron/dhcp/25603794-1a5b-4131-aa69-2c9feeaa21a1/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/25603794-1a5b-4131-aa69-2c9feeaa21a1/host --addn-hosts=/var/lib/neutron/dhcp/25603794-1a5b-4131-aa69-2c9feeaa21a1/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/25603794-1a5b-4131-aa69-2c9feeaa21a1/opts --dhcp-leasefile=/var/lib/neutron/dhcp/25603794-1a5b-4131-aa69-2c9feeaa21a1/leases --dhcp-range=set:tag0,10.200.10.0,static,86400s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq-neutron.conf --server=10.200.10.2 --server=192.168.1.17 --server=8.8.8.8 --domain=casadebertayguta.org
root      5667  0.0  0.0 112640   952 pts/0    S+   17:39   0:00 grep --color=auto dnsmasq

# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 0739266e-0abc-4840-ba25-565c3130b26a | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| 7428183f-9182-45df-83bf-f1f94e2358a1 | Open vSwitch agent | node1      | :-)   | True           | neutron-openvswitch-agent |
| 9054e6f0-9fa0-4b7f-815a-3fe645319379 | Open vSwitch agent | controller | :-)   | True           | neutron-openvswitch-agent |
| a49e5db7-ef16-4051-b139-4e1cccee928a | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
| cee97d0e-0039-4d9b-abea-1f5c6fc21605 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

I have searched here before and found similar problems, but no solutions... Any help will be much appreciated.
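One thing I have not tried yet (a sketch, using the network ID and tap interface from the dnsmasq command line above) is to capture DHCP traffic inside the DHCP namespace on the controller, to see whether the discovers from the instance arrive at all:

# ip netns list
# ip netns exec qdhcp-25603794-1a5b-4131-aa69-2c9feeaa21a1 tcpdump -nei tap5dd22be8-c6 port 67 or port 68

If nothing shows up while the instance is sending its discovers, the problem is in the GRE/OVS path between node1 and the controller rather than in dnsmasq itself.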

Thank you and regards,

2015-09-08 06:37:29 -0500 received badge  Scholar (source)
2015-09-08 06:10:06 -0500 answered a question Impossible to configure glance images in one NFS mount share - SOLVED

Well, I think I have resolved it by setting SELinux to permissive. Now I can put the glance image directory on an NFS share.
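(For anyone who would rather not leave SELinux permissive: an alternative I have not verified would be to mount the share with an explicit SELinux context so the glance service can write to it, something like

# mount -t nfs -o context="system_u:object_r:glance_var_lib_t:s0" nfsserver:/export/glance /var/lib/glance/images

where nfsserver:/export/glance is a placeholder for the real export, and then watch /var/log/audit/audit.log for any remaining denials.)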

2015-09-08 06:09:24 -0500 received badge  Popular Question (source)
2015-09-03 09:17:08 -0500 asked a question Impossible to configure glance images in one NFS mount share - SOLVED

Hi,

I am trying to put /var/lib/glance/images/ on an NFS share. I have configured /etc/glance/glance-api.conf with filesystem_store_datadir = /var/lib/glance/images/, and I have mounted the NFS share on the glance server at /var/lib/glance/images/ with ALL permissions, and I always get:

2015-09-03 11:20:33.899 3787 INFO glance_store._drivers.filesystem [-] Directory to write image files does not exist (/var/lib/glance/images/). Creating.
2015-09-03 11:20:33.901 3787 ERROR glance_store._drivers.filesystem [-] Unable to create datadir: /var/lib/glance/images/
2015-09-03 11:20:33.904 3787 WARNING glance_store.driver [-] Failed to configure store correctly: Store filesystem could not be configured correctly. Reason: Unable to create datadir: /var/lib/glance/images/ Disabling add method.

Please believe me when I say that I have given full permissions to everybody:

# ll /var/lib/glance/
 total 0
 drwxrwxrwx. 2 glance glance 49 sep  3 15:25 images

# ll /var/lib/ | grep glance
drwxrwxrwx. 3 glance   glance     39 sep  3 09:59 glance

In fact, I have enabled a login shell for the glance system user, and if I become the glance user with "su - glance" I can write there without problems. If /var/lib/glance/images/ resides on the local filesystem it works, but if I move /var/lib/glance/images/ to NFS or CIFS (I have tried CIFS too), I always get the error that glance is unable to find or create /var/lib/glance/images/, even though it really exists and is writable by everybody.

Has anybody run into this before? I really cannot figure out why this is happening.

I have ensured that the /var/lib/glance/images/ NFS share is mounted at boot before glance starts, by modifying the glance-related systemd units:

/usr/lib/systemd/system/openstack-glance-api.service
/usr/lib/systemd/system/openstack-glance-registry.service
/usr/lib/systemd/system/openstack-glance-scrubber.service

...with one "RequiresMountsFor=/var/lib/glance/images" but it does not work.

Any help will be really appreciated...

2015-07-28 03:36:29 -0500 commented answer how to assign public ip address to instance interface directly by using NEUTRON

Hi Keky,

I have the same need. Could you explain how you achieved it? I have been reading a lot of documentation about creating a VLAN, but it looks complex and it seems a special switch is needed. Could this also be achieved with a FLAT network? Could somebody point me to some documentation or resource? Thanks.

2015-07-17 05:45:53 -0500 received badge  Enthusiast
2015-06-09 09:28:18 -0500 commented answer Can not launch first instance on kilo after a fresh install

Wait. I have found that my default "/etc/neutron/plugins/ml2/ml2_conf.ini" template did not contain the [ovs] section, so I have added it at the end, like this:

[ovs]
local_ip = 192.168.2.100
bridge_mappings = external:br-ex

[agent]
tunnel_types = gre

Is that right?
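For reference, these are the [ml2] keys I am cross-checking at the same time; the values are the ones I recall from the kilo install guide for GRE, so treat this as a sketch rather than my actual file:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000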

2015-06-09 09:13:53 -0500 answered a question Can not launch first instance on kilo after a fresh install

Thank you for your responses. If the "Unexpected vif_type=binding_failed" error really is caused by a neutron issue, I cannot find where it is.

If I list neutron agents or nova services from the controller, everything looks fine:

~# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 22532bd3-506a-4b06-b8d8-d793723d2b19 | Open vSwitch agent | node1      | :-)   | True           | neutron-openvswitch-agent |
| 513698cd-7325-4e49-a82f-34c77987bc5e | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| 5ab02c59-bddc-47e4-92a5-8eb02e90227f | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
| a824fd0c-394d-4ea2-9190-c2fc935c4b67 | Open vSwitch agent | controller | :-)   | True           | neutron-openvswitch-agent |
| e3d33150-1bb9-4446-8dda-ea232839a4be | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

root@controller:~# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | controller | internal | enabled | up    | 2015-06-09T14:12:54.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2015-06-09T14:12:53.000000 | -               |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2015-06-09T14:12:54.000000 | -               |
| 5  | nova-cert        | controller | internal | enabled | up    | 2015-06-09T14:12:52.000000 | -               |
| 6  | nova-compute     | node1      | nova     | enabled | up    | 2015-06-09T14:13:01.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

Nova and openvswitch are up and running on the compute node.

And these are my "ovs-vsctl show" of my controller and node:

root@controller:~# ovs-vsctl show
d6f7af86-699a-4837-8d28-40d5ea484c1d
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-520ac3c9-10"
            Interface "qr-520ac3c9-10"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap05c7f43b-00"
            Interface "tap05c7f43b-00"
                type: internal
        Port "qg-9cded0be-65"
            Interface "qg-9cded0be-65"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.0"

root@node1:/var/log/neutron# ovs-vsctl show
65ea4be8-de83-41f5-94fc-b471873ee35a
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port "eth1"
            Interface "eth1"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.0"

Any help will be much appreciated.

2015-06-07 16:14:43 -0500 received badge  Famous Question (source)
2015-06-06 17:27:02 -0500 received badge  Notable Question (source)
2015-06-02 21:10:51 -0500 received badge  Popular Question (source)
2015-06-02 09:50:59 -0500 asked a question Can not launch first instance on kilo after a fresh install

Hi,

I am new to OpenStack. I was following the Ubuntu Kilo install guide step by step without any problem: http://docs.openstack.org/kilo/instal...

When I try to launch an instance (from Horizon or from the controller console), it always fails with this error:

root@controller:~# cat /var/log/nova/nova-conductor.log
2015-06-02 16:13:35.466 1897 ERROR nova.scheduler.utils [req-2c2b68a3-0f11-4c4a-b14a-0a548fbb340f bf61e9719d39419aaef9173f316c96b6 66822a07a62c41198b3208d3055c4e72 - - -] [instance: d85de798-7be9-4f18-8bca-19daa722020a] Error from last host: node1 (node node1.mydomain.org): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2219, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2362, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance d85de798-7be9-4f18-8bca-19daa722020a was re-scheduled: Unexpected vif_type=binding_failed\n']
2015-06-02 16:13:35.606 1897 WARNING nova.scheduler.utils [req-2c2b68a3-0f11-4c4a-b14a-0a548fbb340f bf61e9719d39419aaef9173f316c96b6 66822a07a62c41198b3208d3055c4e72 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 142, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 86, in select_destinations
    filter_properties)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2015-06-02 16:13:35.616 1897 WARNING nova.scheduler.utils [req-2c2b68a3-0f11-4c4a-b14a-0a548fbb340f bf61e9719d39419aaef9173f316c96b6 66822a07a62c41198b3208d3055c4e72 - - -] [instance: d85de798-7be9-4f18-8bca-19daa722020a] Setting instance to ERROR state

On node1 I can see this:

2015-06-02 16:13:34.271 7982 ERROR nova.compute.manager [req-fc155802-542e-46d0-95b2-c845b1f403e9 - - - - -] [instance: d85de798-7be9-4f18-8bca-19daa722020a] Instance failed to spawn
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a] Traceback (most recent call last):
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2442, in _build_resources
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     yield resources
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2314, in _build_and_run_instance
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     block_device_info=block_device_info)
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2351, in spawn
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     write_to_disk=True)
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4172, in _get_guest_xml
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     context)
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4043, in _get_guest_config
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     flavor, virt_type)
2015-06-02 ...
2014-10-30 15:48:12 -0500 received badge  Famous Question (source)
2014-10-24 07:10:10 -0500 received badge  Notable Question (source)
2014-10-22 10:03:58 -0500 received badge  Popular Question (source)
2014-10-22 10:02:54 -0500 answered a question Error with neutron service on the controller: Connection to neutron failed: HTTPConnectionPool

Yes! Thank you very much. The second link pointed me in the right direction. In RabbitMQ, the user 'guest' can only connect via localhost, and I had configured its external IP.

So I did:

- changed the node IP address in /etc/rabbitmq/rabbitmq-env.conf to 127.0.0.1
- changed rabbit_host in /etc/neutron/neutron.conf to 127.0.0.1
- ran rabbitmqctl change_password guest..... to set my password again
- service rabbitmq-server restart
- service neutron-server restart

And now I can create the tenant network.
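(Another option here, which I did not take, would be to keep 'guest' limited to localhost and create a dedicated RabbitMQ user for OpenStack instead, roughly:

# rabbitmqctl add_user openstack RABBIT_PASS
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

and then point rabbit_userid / rabbit_password in neutron.conf at it; 'openstack' and RABBIT_PASS are placeholders for whatever name and password you pick.)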

Thank you ;)

Regards,

2014-10-21 11:17:16 -0500 received badge  Student (source)
2014-10-21 11:04:34 -0500 asked a question Error with neutron service on the controller: Connection to neutron failed: HTTPConnectionPool

Hi All,

I am a newbie in OpenStack and I am evaluating it at home in a 3-node environment, as the documentation describes (controller node, network node and compute node).

I am installing it on Debian, and I have a problem at the end of the installation process when I try to create a tenant network. From the controller node I type "neutron net-create demo-net" and I get this error:

# neutron net-create demo-net
Connection to neutron failed: HTTPConnectionPool(host='controlnode.casadebertayguta.org', port=9696): Max retries exceeded with url: /v2.0/networks.json (Caused by ProtocolError('Connection aborted.', error(111, 'Conexi\xc3\xb3n rehusada')))

Neutron is running on the controller node:

# ps aux | grep neutron
neutron  27341  0.5  1.7  71736 52860 ?        S    16:57   0:06 /usr/bin/python2.7 /usr/bin/neutron-server --config-file=/etc/neutron/neutron.conf --log-file=/var/log/neutron/neutron-server.log

But it is strange, because I see it is not listening on port 9696:

# netstat -putona | grep 9696
#

...even though I have uncommented it in /etc/neutron/neutron.conf:

# Port the bind the API server to
bind_port = 9696

The only errors I can see in /var/log/neutron/neutron-server.log are related to problems connecting to rabbitmq:

...
2014-10-21 17:20:21.685 27341 INFO neutron.openstack.common.rpc.common [-] Reconnecting to AMQP server on controlnode:5672
2014-10-21 17:20:24.719 27341 ERROR neutron.openstack.common.rpc.common [-] AMQP server on controlnode:5672 is unreachable: Socket closed. Trying again in 30 seconds.
...

... and if this is the reason why neutron is not listening on port 9696 (and the main reason why I cannot create a tenant network), it is very weird, because rabbitmq is running on the same controller host and listening on its port:

# nc -vz controlnode 5672
controlnode.casadebertayguta.org [192.168.1.100] 5672 (amqp) open

# ps aux | grep rabbit
rabbitmq 26289  0.8  0.8 119524 26624 ?        Sl   16:20   0:32 
...-rabbit tcp_listeners [{"192.168.1.100",5672}]...
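One thing I am double-checking (just my own checklist, nothing found yet) is which host and credentials neutron is actually using for rabbit, since from what I have read a 'Socket closed' right after connecting can be an authentication problem rather than a network one:

# grep -E '^\s*rabbit_(host|port|userid|password)' /etc/neutron/neutron.conf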

Any help will be much appreciated, because I am stuck at this point of the installation, trying to create the first tenant network.

Thank you,

Gonzo