hanizang's profile - activity

2016-10-03 17:15:18 -0500 received badge  Notable Question (source)
2016-10-03 17:15:18 -0500 received badge  Famous Question (source)
2016-05-26 15:09:21 -0500 received badge  Popular Question (source)
2016-05-23 16:11:59 -0500 received badge  Popular Question (source)
2016-05-22 01:11:45 -0500 asked a question How to setup a bridge for multiple compute nodes?

I have one controller and two compute nodes, which are configured with nova-network. I wanted to create a bridge, but only one compute node gets the bridge after executing the instructions below.

controller@controller:~$ nova network-create vm-net --bridge br100 --multi-host T --fixed-range-v4 10.0.0.0/24

controller@controller:~$ source admin-openrc.sh

controller@controller:~$ nova network-list
+--------------------------------------+--------+-------------+
| ID                                   | Label  | Cidr        |
+--------------------------------------+--------+-------------+
| bd0fadba-4416-4ad3-afab-67e465d8c264 | vm-net | 10.0.0.0/24 |
+--------------------------------------+--------+-------------+
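As a sanity check, the fixed range created above lines up with the network_size option set later in nova.conf; a quick illustrative check with Python's ipaddress module:

```python
import ipaddress

# The fixed range passed to nova network-create above.
net = ipaddress.ip_network("10.0.0.0/24")

print(net.num_addresses)      # 256 addresses in the /24 total
# Usable fixed IPs after excluding the network and broadcast addresses:
print(net.num_addresses - 2)  # 254, matching network_size in nova.conf
```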

Here's the nova.conf of the compute node that has no br100 in its 'ifconfig' output. What do I need to configure?

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
auth_strategy = keystone
rpc_backend = rabbit
rabbit_host = 192.168.0.40
rabbit_password = rabbitpass
my_ip = 192.168.0.5
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.0.5
novncproxy_base_url = http://192.168.0.40:6080/vnc_auto.html
glance_host = 192.168.0.40
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = eth1
public_interface = eth1

[database]
connection = mysql://nova:novadbpass@192.168.0.40/nova

[keystone_authtoken]
auth_uri = http://192.168.0.40:5000
auth_host = 192.168.0.40
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = novapass
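When one compute node gets the bridge and the other doesn't, the usual first step is ruling out config drift between the nodes. A minimal sketch for comparing the bridge-related [DEFAULT] options across copies of each node's nova.conf (the file paths in the commented usage are hypothetical; strict=False is needed because this nova.conf repeats force_dhcp_release):

```python
from configparser import ConfigParser

# Options in [DEFAULT] that control whether nova-network builds br100.
BRIDGE_KEYS = ("flat_network_bridge", "flat_interface",
               "multi_host", "network_manager")

def bridge_options(path):
    """Return the bridge-related [DEFAULT] options from a nova.conf."""
    cp = ConfigParser(strict=False)  # tolerate duplicate keys
    with open(path) as f:
        cp.read_file(f)
    return {k: cp.get("DEFAULT", k, fallback=None) for k in BRIDGE_KEYS}

# Example usage with configs copied from each node (hypothetical paths):
# opts1 = bridge_options("compute1-nova.conf")
# opts2 = bridge_options("compute2-nova.conf")
# assert opts1 == opts2, "bridge settings differ between compute nodes"
```

If the options match, the difference is more likely on the node itself (e.g. whether nova-network is actually running there), since in multi-host FlatDHCP each compute node's own nova-network service creates the bridge.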
2016-05-21 23:51:00 -0500 commented question Migration failure, having problems with compute nodes

I've uploaded the nova.conf file of one of my compute nodes above.

2016-05-21 23:47:59 -0500 received badge  Editor (source)
2016-05-20 12:03:22 -0500 asked a question Migration failure, having problems with compute nodes

I'm working with OpenStack Icehouse on Ubuntu 14.04 LTS with NFS shared storage. I have one controller node and two compute nodes connected by nova-network. I want to migrate an instance from the compute node 'compute1' to the compute node 'compute2'. I tried to list the instances on 'compute1' but got the messages below.

controller@controller:~$ nova-manage vm list
instance   node            type       state      launched                   image     kernel    ramdisk    project    user       zone       index
Command failed, please check log for more info
2016-05-20 22:18:41.514 10461 CRITICAL nova [req-d1df37b6-e57e-474e-858c-70e1df5b19da None None]
OperationalError: (OperationalError) unable to open database file None None

Meanwhile, the instances on 'compute1' are running without problems.

controller@controller:~$ nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                       |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+
| e1c0a2fc-28aa-4ba9-bc8c-3f0982499cb4 | test1 | ACTIVE | -          | Running     | vm-net=10.0.0.3, 172.26.17.166 |
| 2fa3958e-4bc1-4467-a16a-b576330fa27d | test2 | ACTIVE | -          | Running     | vm-net=10.0.0.4, 172.26.17.167 |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------+

After executing the migration command I got an error message.

controller@controller:~$ nova live-migration e1c0a2fc-28aa-4ba9-bc8c-3f0982499cb4 compute2

ERROR: Compute service of compute1 is unavailable at this time. (HTTP 400) (Request-ID: req-38ceef0a-c460-43f0-bba4-9abd1ef73ec9)

So I checked nova service-list.

controller@controller:~$ nova service-list
+------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+
| Binary           | Host       | Zone     | Status   | State | Updated_at                 | Disabled Reason                     |
+------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+
| nova-cert        | controller | internal | enabled  | up    | 2016-05-20T13:20:46.000000 | -                                   |
| nova-consoleauth | controller | internal | enabled  | up    | 2016-05-20T13:20:46.000000 | -                                   |
| nova-scheduler   | controller | internal | enabled  | up    | 2016-05-20T13:20:44.000000 | -                                   |
| nova-conductor   | controller | internal | enabled  | up    | 2016-05-20T13:20:42.000000 | -                                   |
| nova-compute     | compute1   | nova     | enabled  | down  | 2016-05-17T11:40:08.000000 | None                                |
| nova-network     | compute1   | internal | enabled  | up    | 2016-05-20T13:20:44.000000 | -                                   |
| nova-compute     | compute2   | nova     | disabled | down  | 2016-05-20T12:40:49.000000 | AUTO: Connection to libvirt lost: 1 |
| nova-network     | compute2   | internal | enabled  | down  | 2016-05-20T12:40:42.000000 | -                                   |
+------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+
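For context on why a service can show "down" even though the daemon appears to be running: nova reports a service as up only if its last heartbeat (the Updated_at column) is fresher than service_down_time seconds, 60 by default. A sketch of that check, assuming the default value:

```python
from datetime import datetime, timedelta

SERVICE_DOWN_TIME = 60  # seconds; nova's default service_down_time

def is_up(updated_at, now):
    """Return True if the service's last heartbeat is fresh enough."""
    return (now - updated_at) <= timedelta(seconds=SERVICE_DOWN_TIME)

# compute1's nova-compute heartbeat (2016-05-17) is days older than the
# controller services' timestamps, so the API reports it down regardless
# of the local process state.
last = datetime(2016, 5, 17, 11, 40, 8)
now = datetime(2016, 5, 20, 13, 20, 46)
print(is_up(last, now))  # → False
```

This is why live migration is refused: the scheduler considers the source host's nova-compute unavailable until fresh heartbeats reach the database.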

To solve the problem on the compute nodes I restarted nova-compute, nova-network, and nova-api-metadata, but it didn't work. On compute2 I found these messages in libvirtd.log:

2016-05-20 12:31:37.744+0000: 4368: info : libvirt version: 1.2.2
2016-05-20 12:31:37.744+0000: 4368: error : netcfStateCleanup:109 : internal error: Attempt to close netcf state driver with open connections

What should I do? This is my nova.conf file on the compute node.

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
auth_strategy = keystone
rpc_backend = rabbit
rabbit_host = 192.168.0.40
rabbit_password = rabbitpass
my_ip = 192.168.0.5
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.0.5
novncproxy_base_url = http://192.168.0.40:6080/vnc_auto.html
glance_host = 192.168.0.40
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = eth1
public_interface = eth1

[database]
connection = mysql://nova:novadbpass@192.168.0.40/nova

[keystone_authtoken ...
(more)
2016-05-20 12:03:21 -0500 asked a question Migration failure, something wrong with compute nodes

I'm doing my job in OpenStack Icehouse on Ubuntu 14.04 LTS with an NFS shared storage. I have one Controller node and two compute nodes that are connected by nova network. I want to migrate an instance that is in compute node named 'compute1' to compute node 'compute2'. I wanted to show the instances that are in 'compute1' but got messages like below.

controller@controller:~$ nova-manage vm list instance node type state launched image kernel ramdisk project user zone index Command failed, please check log for more info 2016-05-20 22:18:41.514 10461 CRITICAL nova [req-d1df37b6-e57e-474e-858c-70e1df5b19da None None] OperationalError: (OperationalError) unable to open database file None None

Meanwhile, the instances on 'compute1' are running without problems.

controller@controller:~$ nova list +--------------------------------------+-------+--------+------------+-------------+--------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------+--------+------------+-------------+--------------------------------+ | e1c0a2fc-28aa-4ba9-bc8c-3f0982499cb4 | test1 | ACTIVE | - | Running | vm-net=10.0.0.3, 172.26.17.166 | | 2fa3958e-4bc1-4467-a16a-b576330fa27d | test2 | ACTIVE | - | Running | vm-net=10.0.0.4, 172.26.17.167 | +--------------------------------------+-------+--------+------------+-------------+--------------------------------+

After executing migration instruction i got an error message.

controller@controller:~$ nova live-migration e1c0a2fc-28aa-4ba9-bc8c-3f0982499cb4 compute2 ERROR: Compute service of compute1 is unavailable at this time. (HTTP 400) (Request-ID: req-38ceef0a-c460-43f0-bba4-9abd1ef73ec9)

So I checked nova service-list.

controller@controller:~$ nova service-list +------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+

| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+ | nova-cert | controller | internal | enabled | up | 2016-05-20T13:20:46.000000 | - |

| nova-consoleauth | controller | internal | enabled | up | 2016-05-20T13:20:46.000000 | - |

| nova-scheduler | controller | internal | enabled | up | 2016-05-20T13:20:44.000000 | - |

| nova-conductor | controller | internal | enabled | up | 2016-05-20T13:20:42.000000 | - |

| nova-compute | compute1 | nova | enabled | down | 2016-05-17T11:40:08.000000 | None |

| nova-network | compute1 | internal | enabled | up | 2016-05-20T13:20:44.000000 | - |

| nova-compute | compute2 | nova | disabled | down | 2016-05-20T12:40:49.000000 | AUTO: Connection to libvirt lost: 1 |

| nova-network | compute2 | internal | enabled | down | 2016-05-20T12:40:42.000000 | - | +------------------+------------+----------+----------+-------+----------------------------+-------------------------------------+

To solve the problem in compute nodes I restarted nova-compute, nova-network, nova-api-metadata but it didn't worked. On compute node2 I found those message on libvirtd.log

2016-05-20 12:31:37.744+0000: 4368: info : libvirt version: 1.2.2 2016-05-20 12:31:37.744+0000: 4368: error : netcfStateCleanup:109 : internal error: Attempt to close netcf state drive with open connections

What should I do?