MattF's profile - activity

2018-10-30 01:01:19 -0500 received badge  Famous Question (source)
2018-04-04 07:18:52 -0500 commented answer Using NFS but overview shows high "local storage" use and get no valid host launching instance.

Hi @Deepa, yes I tried it today. I removed all the instances and all the volumes, ran the commands... and then I started a whole lot of new instances/volumes and the problem returned. I don't think it's quota-related now, but I'm still lost. I will keep investigating; thanks for your help.

2018-03-27 05:50:20 -0500 received badge  Student (source)
2018-03-27 03:49:48 -0500 commented answer Using NFS but overview shows high "local storage" use and get no valid host launching instance.

My apologies, Deepa. I have a friend with that name and must have filled in the blank... sorry.

Thanks very much, I will try the commands later today and see if it works.

2018-03-26 10:07:31 -0500 commented answer Using NFS but overview shows high "local storage" use and get no valid host launching instance.

Hi Deepak. Sorry for my late reply... Yes, can you provide the commands you think I need to run? I'm lost at this point. Thanks very much.

2018-03-23 07:55:05 -0500 received badge  Notable Question (source)
2018-03-21 23:52:33 -0500 received badge  Popular Question (source)
2018-03-21 08:22:36 -0500 commented answer Using NFS but overview shows high "local storage" use and get no valid host launching instance.

Ran out of comment space. But I think you are right about where the 2TB comes from; it does match the amount of disk space available on all the compute nodes combined. I just can't understand why it thinks I'm using it when I'm on NFS.

2018-03-21 08:20:09 -0500 commented answer Using NFS but overview shows high "local storage" use and get no valid host launching instance.

Thanks for your answer. But why would it try to use space from local storage when I'm using the remote NetApp NFS server? The amount of free local disk space is not decreasing, while the NFS used space is increasing. So why would it fail when I don't use local storage for my instances? Thanks.

2018-03-20 09:41:55 -0500 asked a question Using NFS but overview shows high "local storage" use and get no valid host launching instance.

I'm having an issue where I tried to launch 10 instances that have 160GB disks.

I have 6 compute nodes, with very little running on them apart from 4 or 5 test machines. I have 10TB assigned via a NetApp NFS server and it seems to be working OK.

The problem came today, when I tried to launch 10 instances. It launched 4 of them and then stopped with the "no valid host" message. The logs are really unhelpful, and this is all there is...

nova-scheduler.log:2018-03-20 14:20:08.556 2299 INFO nova.filters [req-4654ed16-cb96-4878-a6d2-cfff7ea8703d 8dca0f7fabf94914b4d02f040792e18c 71792c2c288f441c91323049918e327c - default default] Filtering removed all hosts for the request with instance ID 'a9e92da4-642a-499e-9ac5-472f6f5a0d57'. Filter results: ['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'RamFilter: (start: 1, end: 1)', 'DiskFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 0)']

There's nothing else to go on in the logs except that.
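For anyone reading the filter chain: each entry reports how many hosts entered and survived that filter, so the culprit is the first filter whose count ends at 0 — in the line above that is ComputeFilter, not DiskFilter. (ComputeFilter going 1 → 0 usually means the nova-compute service on that host was reported down or disabled at that moment, which is a different failure from running out of disk.) A throwaway shell sketch, using the log line quoted above, pulls out the guilty filter:

```shell
# Find the first scheduler filter that eliminated the remaining hosts.
# The string is the "Filter results" list copied from the log above.
line="['RetryFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'RamFilter: (start: 1, end: 1)', 'DiskFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 0)']"
echo "$line" | grep -o "[A-Za-z]*Filter: (start: [0-9]*, end: 0)" | head -n1
# Prints: ComputeFilter: (start: 1, end: 0)
```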

So anyway, I decided to delete all my test VMs, and now I am able to launch more instances. Yet I have over 9TB of free space on the NFS server. What bothers me too is that if I go into Horizon, under Admin > Compute > Hypervisors, it was showing very high "local storage" use: 1.6TB out of 2TB.

Here is what it says now:

(I can't post a link, not enough karma, but I was trying to post an image of it showing 1.1TB of 2TB local storage used.)

Where is it getting this 2TB limit from, and why? I'm using a 10TB NFS share, and my free local disk space isn't decreasing.

I'm not using local storage for my instances. In fact, if I look at the disk space on my compute nodes I see:

/dev/mapper/centos-root  441G  4.1G  437G   1% /
devtmpfs                 252G     0  252G   0% /dev
tmpfs                    252G     0  252G   0% /dev/shm
tmpfs                    252G   18M  252G   1% /run
tmpfs                    252G     0  252G   0% /sys/fs/cgroup
/dev/sda1               1014M  191M  824M  19% /boot
<netapp NFS export>      9.5T  310G  9.2T   4% /var/lib/nova/mnt/8ee65072f6783845e3a3387891518c82
tmpfs                     51G     0   51G   0% /run/user/0

So the compute nodes each have over 437GB of free local space, and as you can see the NFS share has over 9TB free. So why is OpenStack saying I have 2TB of local disk space that is nearly all used?

How can I correct this so that it does not limit me because of "local storage"?
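A hedged guess at the arithmetic (the per-node figure below is hypothetical, chosen only because it fits the numbers in the question): Horizon's hypervisor panel sums each node's reported local_gb, and the scheduler charges every instance's flavor root disk against that pool even when the data actually lands on NFS. With 6 nodes and ten 160GB flavors:

```shell
# Assumption: ~340GB of local_gb reported per node (hypothetical value; the
# real figures come from `openstack hypervisor stats show`).
nodes=6
local_gb_per_node=340
instances=10
flavor_root_gb=160
echo "pool: $((nodes * local_gb_per_node)) GB"      # ~2TB "local storage" total
echo "charged: $((instances * flavor_root_gb)) GB"  # 1.6TB "used" after 10 instances
```

If that model is right, the limit goes away by making Nova treat the storage as shared or by booting from volumes, not by adding local disk.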

2018-02-06 10:33:21 -0500 asked a question OpenStack Neutron failed to bind port. Using OVS LACP bond.


I'm having a bit of a problem with OpenStack Neutron saying that it failed to bind a port on my host. We appear to be running Tempest, since I just did a packstack all-in-one install on the server today.

It's a single-server all-in-one packstack setup: no other servers, all components in one box.

Here's what I do to reproduce:

  1. Install CentOS 7
  2. Install the RDO RPM
  3. Run packstack --allinone

After that I go and create my openvswitch bridge and add two ports in a bond to it with LACP.

ovs-vsctl add-br br-vlan 
ovs-vsctl add-bond br-vlan bond0 enp2s0f0 enp2s0f1 lacp=active

This works. Output of ovs-vsctl show:


Bridge br-vlan
        Port br-vlan
            Interface br-vlan
                type: internal
        Port "bond0"
            Interface "enp2s0f1"
            Interface "enp2s0f0"

Checking the bond, LACP shows as active and has negotiated with the switch:

ovs-appctl bond/show bond0
---- bond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: negotiated
active slave mac: xx:xx:xx:xx:xx (enp2s0f0)   // MAC address obscured

slave enp2s0f0: enabled
    active slave
    may_enable: true

slave enp2s0f1: enabled
    may_enable: true

So the Open vSwitch part seems to be configured correctly. The switch it's connected to is set to trunk and allow all VLANs.

Here is my ml2_conf.ini


#type_drivers = local,flat,vlan,gre,vxlan,geneve
network_vlan_ranges = vlan197:197:197
bridge_mappings = vlan197:br-vlan

I should point out we only have admin users, so tenant networks shouldn't matter, but just for fun I changed tenant_network_types to include vlan too; as expected, nothing changed.

Now I go ahead and create a network on my VLAN197 network I want to use.

neutron net-create vlan197 --shared --provider:network_type vlan --provider:segmentation_id 197 --provider:physical_network vlan197 --router:external=True

This command works and returns the confirmation box with all the network details. Then I add a subnet to it, but I don't enable DHCP (because a DHCP server already exists in that VLAN):

neutron subnet-create vlan197 --name vlan197 --gateway --allocation-pool start=,end= --enable-dhcp=False

This command also returns confirmation that it was successfully created. So now I go into Horizon and launch an instance on this "vlan197" network. The instance fails to launch with this error:

2018-02-06 15:56:55.534 54380 ERROR neutron.plugins.ml2.managers [req-16ebae41-f75b-46f0-a1d6-4a6a6cfc5a02 f6658faff3bc48cdb027fb7c818bdcc3 41e55a650d6b4199bbf89c568b05f55c - default default] Failed to bind port 91829b84-14c7-40dc-afaf-f5b5e5f537a9 on host for vnic_type normal using segments [{'network_id': 'b926e3b6-c05c-44a4-b0eb-c65d4743f5fd', 'segmentation_id': 197, 'physical_network': u'vlan197', 'id': '0fbdb50b-cd9f-4727-b14a-217245a9e881', 'network_type': u'vlan'}]

I'm unable to find any further information about why it couldn't bind the port. Does anyone have any suggestions as to what I've done wrong, please?

Many thanks
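Not from the original thread, but for reference: with the stock ML2/OVS setup, this port-binding error is often caused by the openvswitch mechanism driver not being loaded, or by bridge_mappings living in the wrong file — it belongs in the OVS agent's config, not only in ml2_conf.ini. A sketch of where these options normally sit (section names assume a default packstack layout):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini  (server side)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan,vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
network_vlan_ranges = vlan197:197:197

# /etc/neutron/plugins/ml2/openvswitch_agent.ini  (agent side)
[ovs]
bridge_mappings = vlan197:br-vlan
```

After editing, restart neutron-server and neutron-openvswitch-agent so the agent re-reports its mappings.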

2017-12-15 06:47:58 -0500 received badge  Popular Question (source)
2017-12-15 06:47:58 -0500 received badge  Famous Question (source)
2017-12-15 06:47:58 -0500 received badge  Notable Question (source)
2017-11-08 05:46:00 -0500 received badge  Enthusiast
2017-11-03 11:19:55 -0500 received badge  Editor (source)
2017-11-03 10:54:15 -0500 asked a question Nova compute on Hyper-V fails to bind to port

Hi All,

I have OpenStack Pike running on CentOS 7 in a multi-node configuration. One of my hypervisors is a Microsoft server running the Cloudbase Nova Hyper-V package. I am NOT using Open vSwitch on the hypervisor (the Linux Neutron server does) because I only want to do VLANs.

I have created two networks; the external one is of type vlan. I created the subnets, but DHCP is off because there is already a DHCP server on that VLAN.

On my setup, the Hyper-V host has two 10G Ethernet interfaces, both set as VLAN trunks on the switches but NOT as an LACP bond.

The hyper-v host has the following v-switches configured via the hyper-v virtual switch manager GUI:

br-mgmt -- only for managing this server remotely. Type external. Bound to 10G interface number 1

br-ext -- external bridge, bound to 10G interface number 2

br-data -- private

br-int -- internal

The error message I see in the nova-compute log file on the Windows host is:

2017-11-03 15:25:33.234 2588 INFO [req-529034d5-b847-44d2-8193-8b3c03e0aa22 bf90a57666e5457690aa5ec78abb7df4 6f343d5d51f543ddb4394d46a70ac67d - default default] [req-37de82a4-850a-4e21-b687-ae9f48d8b071] Another thread already created a resource provider with the UUID 7f93f041-46e0-48ab-9457-ce04db233c63. Grabbing that record from the placement API.
2017-11-03 15:25:35.219 2588 WARNING [req-529034d5-b847-44d2-8193-8b3c03e0aa22 bf90a57666e5457690aa5ec78abb7df4 6f343d5d51f543ddb4394d46a70ac67d - default default] Unable to refresh my resource provider record: NotImplementedError
2017-11-03 15:25:35.391 2588 INFO nova.virt.block_device [req-529034d5-b847-44d2-8193-8b3c03e0aa22 bf90a57666e5457690aa5ec78abb7df4 6f343d5d51f543ddb4394d46a70ac67d - default default] [instance: 76b17679-b9c7-4594-84df-52250c98122c] Booting with blank volume at /dev/vda
2017-11-03 15:25:38.312 2588 ERROR nova.compute.manager [req-529034d5-b847-44d2-8193-8b3c03e0aa22 bf90a57666e5457690aa5ec78abb7df4 6f343d5d51f543ddb4394d46a70ac67d - default default] Instance failed network setup after 1 attempt(s): PortBindingFailed: Binding failed for port 6102b9cb-dba4-4fd2-a162-b699a885a5c7, please check neutron logs for more information.
2017-11-03 15:25:38.312 2588 ERROR nova.compute.manager Traceback (most recent call last):

neutron-hyperv-agent.log says nothing, only that it started:

2017-11-03 15:10:25.349 2608 INFO neutron.common.config [-] Logging enabled!
2017-11-03 15:10:25.349 2608 INFO neutron.common.config [-] C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python27\Scripts\neutron-hyperv-agent version

And if I check the Neutron logs on the Neutron server, I see pretty much the same error as in the nova-compute log on the hypervisor:

2017-11-03 16:25:37.340 3504 ERROR neutron.plugins.ml2.managers [req-01aedb63-3845-447b-9713-1bf16ba54f7d cc3eeb84ce2f43a58fb2688a88f8385f efa0ae35406f4a41b62d2d4a1387d870 - default default] Failed to bind port 6102b9cb-dba4-4fd2-a162-b699a885a5c7 on host oscompute03 for vnic_type normal using segments [{'network_id': '0533c612-1206-457f-b78a-18a1ef44d551', 'segmentation_id': 197, 'physical_network': u'extnet', 'id': '0fd66e44-3234-4268-8d62-532127577615', 'network_type': u'vlan'}]

[message is repeated several times, 10 attempts are made, all fail]

It might be worth pointing out that the Linux Neutron node has a Linux LACP bond. The Neutron server has the following Open vSwitch configuration (made by the packstack installer, not by me):

[root@oscontrol neutron]# ovs-vsctl show
dcece088-8a36-4bc9-9a8a-753331dc9911
    Manager "ptcp:6640:"
        is_connected: true
    Bridge br-int
        Controller "tcp:"
            is_connected: true
        fail_mode: secure
        Port "qr-a81580cb-80"
            tag: 1
            Interface "qr-a81580cb-80"
                type: internal
        Port "tapd6eadffb-11"
            tag: 3
            Interface "tapd6eadffb-11"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap2b4408cb-ae ... (more)
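An editorial note, not something established in the thread: for a Hyper-V compute node, Neutron can only bind the port if the server loads the hyperv mechanism driver (from the networking-hyperv package) alongside openvswitch, and the Hyper-V agent maps the physical network name from the error (extnet here) to one of its vSwitches. A sketch under those assumptions, reusing the br-ext vSwitch named above:

```ini
# Neutron server: /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,hyperv

# Hyper-V node: the neutron-hyperv-agent config (path per the Cloudbase installer)
[AGENT]
physical_network_vswitch_mappings = extnet:br-ext
```

If either half is missing, every bind attempt for that host fails with exactly the "Failed to bind port ... for vnic_type normal" error shown in the logs.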

2017-11-02 14:48:23 -0500 received badge  Supporter (source)
2017-10-26 22:21:14 -0500 answered a question No valid host was found. There are not enough hosts available.

Here is what I did to fix it on my RDO packstack setup:

  1. Ensure that virt_type is set to qemu in /etc/nova/nova.conf on all the compute nodes, then restart the nova-compute services.
  2. Make a new network for testing. (If I chose existing ones it would fail?!)
  3. I was also getting a volume creation error so I added this line into /etc/glance/glance-api.conf - show_image_direct_url=True

Now the images boot just fine. But also watch out: a packstack install creates a default 20GB LVM volume group if you didn't change it in the answers file.
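For clarity, the two config edits from steps 1 and 3 look like this (file paths from a stock RDO packstack install; the exact section placement is my assumption, so double-check your files — restart nova-compute and glance-api afterwards):

```ini
# /etc/nova/nova.conf  (each compute node)
[libvirt]
virt_type = qemu

# /etc/glance/glance-api.conf
[DEFAULT]
show_image_direct_url = True
```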

Overall I think OpenStack needs to vastly improve on this. I spent days reading, googling, and experimenting, and it seems to be a quite common issue. A fresh install following the user guides will fail; that's bad.