
jkilborn's profile - activity

2018-04-05 21:17:42 -0500 received badge  Famous Question (source)
2018-03-13 09:36:50 -0500 received badge  Famous Question (source)
2017-05-08 20:42:58 -0500 received badge  Notable Question (source)
2017-05-03 11:04:57 -0500 received badge  Popular Question (source)
2017-04-28 16:00:22 -0500 asked a question Ocata Neutron Error : Interface name must be shorter than IFNAMSIZ (15)

I'm setting up Ocata in a new environment and have everything running. My VMs won't spawn; they hit an error when Neutron tries to set up the interface. I can see in the neutron-openvswitch logs that it's failing while trying to apply iptables rules. It seems to be using an overly long interface name for some reason.

ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ; Stdout: ; Stderr: iptables-restore v1.4.21: interface name `3a3dddce-4752-4ddb-8442-6ef69b7e0fbd' must be shorter than IFNAMSIZ (15)

Any idea if this is the name it should be using? I think it's the Neutron port ID, but it's over 15 characters. Why isn't it using the tap name?

Here are the settings I have uncommented in openvswitch_agent.ini:

[DEFAULT]
debug = true
[agent]
tunnel_types = vxlan
l2_population = true
[ovs]
local_ip = XXX.XXX.XXX.XXX
bridge_mappings = provider:br-provider
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_ipset = true
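
For context, a rough sketch of the naming convention (this is an illustration, not the actual Neutron source): Linux limits interface names to IFNAMSIZ, so the tap device name is normally built from a truncated slice of the port UUID, which is why a full 36-character UUID showing up in an iptables rule looks wrong.

port_id = "3a3dddce-4752-4ddb-8442-6ef69b7e0fbd"   # the ID from the error above
tap_name = "tap" + port_id[:11]                     # e.g. "tap3a3dddce-47"
print("%s (%d chars)" % (tap_name, len(tap_name)))  # 14 characters, which fits under IFNAMSIZ (15)
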
2017-04-26 00:58:09 -0500 received badge  Notable Question (source)
2017-04-13 08:17:35 -0500 commented answer "nova start" fails to find cinder volume, yet volume is attached to node

Still don't know where nova gets the connection_info from ... It's not stored in the cinder DB or the nova DB.

2017-04-13 08:16:50 -0500 commented answer "nova start" fails to find cinder volume, yet volume is attached to node

I tried multipath -F and also rebooted the host. The SAN still mapped the volume as LUN 1 over Fibre Channel, and nova continued to try to find it at LUN 5. I actually had to unmap it from the Dell SAN side and remap it, specifying LUN 5. Then nova would find the volume and start the VM.

2017-04-13 08:15:12 -0500 received badge  Popular Question (source)
2017-04-10 15:20:12 -0500 asked a question "nova start" fails to find cinder volume, yet volume is attached to node

I upgraded from Liberty to Newton and am trying to start a VM back up on the upgraded compute node. The VM was shut down, then the update was performed, and now I can't start that VM. I can create a new Cinder-backed VM: it creates the volume on the Dell SAN, attaches it to the host, and starts fine. However, the existing VM keeps failing to start.

The device in the libvirt.xml shows the disk dev as:

<source dev="/dev/disk/by-id/dm-uuid-mpath-36000d3100314f600000000000000018a"/>

and multipath -l shows the device as well:

mpathcd (36000d3100314f600000000000000018a) dm-2 COMPELNT,Compellent Vol
size=60G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  |- 1:0:3:1 sdd 8:48 active undef running
  |- 8:0:0:1 sdc 8:32 active undef running
  |- 8:0:3:1 sde 8:64 active undef running

If I look at the devices that the multipath output shows (sdb-sde):

[root@div18oscomp2 by-path]# ls -l | grep sd[b-e]
lrwxrwxrwx. 1 root root 9 Apr 10 06:13 pci-0000:04:00.0-fc-0x5000d3100314f624-lun-1 -> ../../sdd
lrwxrwxrwx. 1 root root 9 Apr 10 06:13 pci-0000:04:00.0-fc-0x5000d3100314f626-lun-1 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Apr 10 06:13 pci-0000:04:00.1-fc-0x5000d3100314f623-lun-1 -> ../../sde
lrwxrwxrwx. 1 root root 9 Apr 10 06:13 pci-0000:04:00.1-fc-0x5000d3100314f625-lun-1 -> ../../sdc

Yet in the log below it's looking for a different device path:

Looking for Fibre Channel dev /dev/disk/by-path/pci-0000:04:00.1-fc-0x5000d3100314f624-lun-5

Why is nova looking for LUN 5 when the volume is on LUN 1? Where does it get LUN 5 from?

Any help would be appreciated

nova-compute log shows:

2017-04-10 15:08:51.325 18896 DEBUG oslo_messaging._drivers.amqpdriver [-] CALL msg_id: 7a593c6f571645a79159d376df77d425 exchange 'nova' topic 'conductor' _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:448
2017-04-10 15:08:51.343 18896 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 7a593c6f571645a79159d376df77d425 __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:296
2017-04-10 15:08:51.800 18896 DEBUG oslo_messaging._drivers.amqpdriver [-] received message with unique_id: 8ad49bc12bfd4316a1bfe83e00de9c98 __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:196
2017-04-10 15:08:51.812 18896 DEBUG oslo_messaging._drivers.amqpdriver [req-75d5269a-de16-4586-8c13-477ea3d446a4 c42f3db35ab54eb69f2c829523c81668 b713ca31297341ad934ad5a19f2ab467 - - -] CALL msg_id: ccea24299fac4ba98ac4c6d28654ed2e exchange 'nova' topic 'conductor' _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:448
2017-04-10 15:08:51.827 18896 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: ccea24299fac4ba98ac4c6d28654ed2e __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:296
2017-04-10 15:08:51.829 18896 DEBUG nova.objects.instance [req-75d5269a-de16-4586-8c13-477ea3d446a4 c42f3db35ab54eb69f2c829523c81668 b713ca31297341ad934ad5a19f2ab467 - - -] Lazy-loading 'flavor' on Instance uuid f248c4bb-95f7-489e-9442-2d5f7588c2c9 obj_load_attr /usr/lib/python2.7/site-packages/nova/objects/instance.py:1013
2017-04-10 15:08:51.831 18896 DEBUG oslo_messaging._drivers.amqpdriver [req-75d5269a-de16-4586-8c13-477ea3d446a4 c42f3db35ab54eb69f2c829523c81668 b713ca31297341ad934ad5a19f2ab467 - - -] CALL msg_id: 2206f6bd1d054150af801dc9e62a978c exchange 'nova' topic 'conductor' _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:448
2017-04-10 15:08:51.884 18896 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 2206f6bd1d054150af801dc9e62a978c __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:296
2017-04-10 ... (more)
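
In case it helps others compare, a rough sketch (standard /dev/disk/by-path layout assumed; this is not Nova or os-brick code) that lists which LUN each Fibre Channel by-path link on the host actually maps to, so it can be checked against the LUN the connector says it is probing for:

import glob
import os

# List the FC by-path links the host has and the LUN each one carries.
for link in sorted(glob.glob("/dev/disk/by-path/*-fc-*-lun-*")):
    lun = link.rsplit("-lun-", 1)[1]
    target = os.path.realpath(link)
    print("lun %s: %s -> %s" % (lun, link, target))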

2017-03-23 12:55:39 -0500 received badge  Scholar (source)
2017-03-23 12:55:07 -0500 commented answer Urgent : newton nova db sync - not migrating compute_nodes table

I found the problem. While Mitaka updates the schema (adding a uuid column to compute_nodes, for example), the uuid field doesn't get populated until the nova-compute node starts up and checks in; only then is the uuid field filled in. You can't upgrade to Newton directly from Liberty because of this issue.
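
For anyone hitting the same thing, a quick sketch (the connection URL is a placeholder) to count how many compute_nodes rows still have a NULL uuid before attempting the Newton sync:

from sqlalchemy import create_engine, text

# Placeholder connection URL -- substitute your own nova DB credentials.
engine = create_engine("mysql+pymysql://nova:NOVA_DBPASS@controller/nova")

with engine.connect() as conn:
    count = conn.execute(
        text("SELECT COUNT(*) FROM compute_nodes WHERE uuid IS NULL")).scalar()
    print("compute_nodes rows with NULL uuid: %d" % count)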

2017-03-23 11:15:16 -0500 received badge  Supporter (source)
2017-03-23 11:15:14 -0500 commented answer Urgent : newton nova db sync - not migrating compute_nodes table

Thanks for the response. I finally got back to trying to upgrade my DB again. It upgrades from Liberty to Mitaka fine, but during the upgrade to Newton I get the error again. I looked at the compute_nodes table, and all 15 records have a NULL uuid.

I'm not sure why they are null.

2017-02-25 20:12:02 -0500 received badge  Famous Question (source)
2017-01-29 16:31:38 -0500 received badge  Popular Question (source)
2017-01-17 08:13:12 -0500 received badge  Notable Question (source)
2017-01-15 03:42:17 -0500 received badge  Popular Question (source)
2017-01-14 17:37:47 -0500 commented answer Urgent : newton nova db sync - not migrating compute_nodes table

I created the nova_api_cell0 database and granted the nova user rights. Then I ran:

[root@div18oscont1 nova]# nova-manage cell_v2 simple_cell_setup --transport-url transport_url=rabbit://....

That completed without errors, but it didn't fix the error preventing the DB upgrade beyond 329.

2017-01-14 16:38:06 -0500 commented question Urgent : newton nova db sync - not migrating compute_nodes table

After more testing: that error is generated as the nova DB tries to migrate from 329 to 330. It can't complete this migration and produces the error about the compute_nodes table.

2017-01-14 15:45:08 -0500 asked a question Urgent : newton nova db sync - not migrating compute_nodes table

I'm upgrading from Liberty to Newton. I have Keystone, the dashboard, Glance, and Cinder working so far; now I'm doing Nova. I created the nova_api database (since it was added in Mitaka).

I ran su -s /bin/sh -c "nova-manage api_db sync" nova without any issues.

When I try to run su -s /bin/sh -c "nova-manage db sync" nova, it outputs the following:

WARNING: cell0 mapping not found - not syncing cell0.

error: There are still 15 unmigrated records in the compute_nodes table. Migration cannot continue until all records have been migrated.

I can't seem to find much help on this compute_nodes table migration error, and I can't get the nova DB to migrate to the Newton schema.

Any ideas?

2017-01-11 16:11:36 -0500 received badge  Famous Question (source)
2017-01-11 16:11:14 -0500 received badge  Editor (source)
2017-01-11 10:25:10 -0500 asked a question ImportError: No module named networking_mlnx.plugins.ml2.drivers.mlnx.agent

I'm trying to add an InfiniBand interface to an existing OpenStack Liberty environment. We are going to use it for a backup network. I'm following this document: https://wiki.openstack.org/wiki/Mella...

When starting neutron-mlnx-agent on the compute node (the last step in the document), it fails with:

Jan 11 10:12:49 div18oscomp2 neutron-mlnx-agent: Traceback (most recent call last):
Jan 11 10:12:49 div18oscomp2 neutron-mlnx-agent: File "/usr/bin/neutron-mlnx-agent", line 6, in <module>
Jan 11 10:12:49 div18oscomp2 neutron-mlnx-agent: from neutron.cmd.eventlet.plugins.mlnx_neutron_agent import main
Jan 11 10:12:49 div18oscomp2 neutron-mlnx-agent: File "/usr/lib/python2.7/site-packages/neutron/cmd/eventlet/plugins/mlnx_neutron_agent.py", line 15, in <module>
Jan 11 10:12:49 div18oscomp2 neutron-mlnx-agent: from neutron.plugins.ml2.drivers.mlnx.agent import eswitch_neutron_agent
Jan 11 10:12:49 div18oscomp2 neutron-mlnx-agent: File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/mlnx/agent/eswitch_neutron_agent.py", line 19, in <module>
Jan 11 10:12:49 div18oscomp2 neutron-mlnx-agent: from networking_mlnx.plugins.ml2.drivers.mlnx.agent import (
Jan 11 10:12:49 div18oscomp2 neutron-mlnx-agent: ImportError: No module named networking_mlnx.plugins.ml2.drivers.mlnx.agent
Jan 11 10:12:49 div18oscomp2 systemd: neutron-mlnx-agent.service: main process exited, code=exited, status=1/FAILURE
Jan 11 10:12:49 div18oscomp2 systemd: Unit neutron-mlnx-agent.service entered failed state.
Jan 11 10:12:49 div18oscomp2 systemd: neutron-mlnx-agent.service failed.

It turns out the /usr/lib/python2.7/site-packages/networking_mlnx directory is missing because the python-networking-mlnx package is not installed. It's not in the CentOS 7 Liberty repo either. Does anyone know which repo that package is located in?
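
Once a candidate package is installed, a trivial check (nothing Mellanox-specific, just a sketch) to confirm whether the module is importable and where it lives:

import importlib

# Probe for the networking_mlnx module; print its location if present.
try:
    mod = importlib.import_module("networking_mlnx")
    print("found: %s" % mod.__file__)
except ImportError as exc:
    print("missing: %s" % exc)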

2016-06-24 10:05:59 -0500 received badge  Popular Question (source)
2016-06-24 10:05:59 -0500 received badge  Notable Question (source)
2016-01-27 01:31:08 -0500 received badge  Famous Question (source)
2015-11-23 07:09:12 -0500 answered a question Glance with SSL: sslv3 alert handshake failure

Try commenting out the ca_file = /etc/glance/ssl/certs/ca.pem line in glance-api.conf. Having the ca_file line in the config means Glance will try to do client certificate validation. You likely don't want this; you just want the server to present a certificate to the client.
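
A minimal Python sketch of the difference (this is not Glance code; the file names are placeholders): loading a CA for client verification and requiring client certificates is what makes clients without a certificate fail the handshake.

import ssl

# Server-side TLS context, roughly what "ca_file set" vs. "ca_file unset" implies.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")  # placeholder paths

# With a CA configured for client verification, clients must present a cert:
# ctx.load_verify_locations(cafile="ca.pem")
# ctx.verify_mode = ssl.CERT_REQUIRED   # no client cert -> handshake failure

# Without it, the server just presents its own certificate:
ctx.verify_mode = ssl.CERT_NONE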

2015-11-17 14:01:30 -0500 commented question Glance with SSL: sslv3 alert handshake failure

glance image-list

Error finding address for https://hostname.domain.com:9292/versions (https://hostname.domain.com:9292/vers...): bad handshake: Error([('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure')],)

2015-11-17 14:01:13 -0500 commented question Glance with SSL: sslv3 alert handshake failure

I'm experiencing the same error when converting my endpoints to SSL. Keystone seems to work OK, but Glance, Nova, Cinder, etc. all get this error. Ugh.

2015-11-13 01:16:58 -0500 received badge  Notable Question (source)
2015-10-28 16:22:30 -0500 commented answer liberty: dhcp agent not updating dnsmasq hosts file without restart of agent

Thanks for the response. We are using a flat provider network, so we don't have an L3 router. Our network is created with: neutron net-create MYNETWORK --shared --provider:network_type flat --provider:physical_network provider

Can you point me to this bug, so I can take a look?

2015-10-28 08:52:58 -0500 received badge  Popular Question (source)
2015-10-27 09:26:49 -0500 answered a question os_brick multipath liberty

I have resolved this myself, so I want to pass along the solution for Liberty.

It turns out there is a file that uses a nova configuration setting that is rather ambiguously named. Looking at .../nova/virt/libvirt/volume/fibrechannel.py, there is an __init__ function:

class LibvirtFibreChannelVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
    """Driver to attach Fibre Channel Network volumes to libvirt."""

    def __init__(self, connection):
        super(LibvirtFibreChannelVolumeDriver, self).__init__(connection, is_block_dev=False)
        # Call the factory here so we can support
        # more than x86 architectures.
        self.connector = connector.InitiatorConnector.factory(
            'FIBRE_CHANNEL', utils.get_root_helper(),
            use_multipath=CONF.libvirt.iscsi_use_multipath,
            device_scan_attempts=CONF.libvirt.num_iscsi_scan_tries)

The interesting thing is that this Fibre Channel volume driver __init__ function (above) is using an iscsi_use_multipath conf setting.

So, edit nova.conf, and in the [libvirt] section, add iscsi_use_multipath = True

This sets the driver to try to use multipath.

It seems that this needs to be clarified, or the setting should be renamed from iscsi_use_multipath to just use_multipath.
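
A rough way to confirm the change took effect (the path and instance name below are placeholders, not anything Nova-specific): after enabling multipath, the disk source in the instance's libvirt XML should reference a dm-uuid-mpath-* device rather than a single /dev/disk/by-path/... path.

import re

# Placeholder path -- point this at the instance's actual libvirt.xml.
with open("/var/lib/nova/instances/INSTANCE_UUID/libvirt.xml") as f:
    for dev in re.findall(r'source dev="([^"]+)"', f.read()):
        print("%s -> %s" % (dev, "multipath" if "dm-uuid-mpath" in dev else "single path"))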

2015-10-26 12:45:53 -0500 asked a question os_brick multipath liberty

I have been working with Liberty and a SAN FC driver in Cinder, and while it does seem better at adding/removing the mpath devices and the corresponding /dev/sd device names, it seems to be using the wrong path in the instance that is created on that backend Cinder volume. I created a Cinder-backed instance on our Fibre Channel SAN.

In the libvirt.xml file, it shows the following for the device:

<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source dev="/dev/disk/by-path/pci-0000:42:00.0-fc-0x5000d3100314f623-lun-1"/>
  <target bus="virtio" dev="vda"/>
  <serial>55ba0cef-b6cb-4be0-be92-cd53bcacaae9</serial>
</disk>

That source dev path is a link to /dev/sdd (in this case)

It seems that using this path breaks multipath failover entirely, as that single device cannot survive a path failure. The instance should use a multipath device name (I think).

You can see below the multipath device and the corresponding /dev/sd devices that are found under it:

multipath -l

mpathaf (36000d3100314f6000000000000000056) dm-2 COMPELNT,Compellent Vol
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 1:0:0:1 sdc 8:32 active undef running
  |- 1:0:3:1 sdd 8:48 active undef running
  |- 8:0:3:1 sdf 8:80 active undef running
  |- 8:0:0:1 sde 8:64 active undef running

Shouldn't the instance.xml provide the /dev/mapper/mpathaf device to the instance rather than /dev/sdd?

Is this something that the SAN driver controls, or do I need to look more at the os_brick code?

Thanks for any insight you might be able to provide
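
For anyone digging into the same question, a rough sysfs-based sketch (not os_brick code; it assumes the standard /sys/block layout) that maps one of those sdX members back to its dm-* multipath parent:

import os

def multipath_parent(sd_name):
    # Walk /sys/block/dm-*/slaves to find the dm device that owns sd_name.
    for dm in os.listdir("/sys/block"):
        if not dm.startswith("dm-"):
            continue
        slaves = os.path.join("/sys/block", dm, "slaves")
        if os.path.isdir(slaves) and sd_name in os.listdir(slaves):
            return dm
    return None

# e.g. multipath_parent("sdd") should return "dm-2" given the output above
print(multipath_parent("sdd"))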

2015-10-23 16:34:59 -0500 asked a question liberty: dhcp agent not updating dnsmasq hosts file without restart of agent

Since I upgraded from Kilo to Liberty in our pre-production environment, I'm having a problem with the DHCP agent. When I create an instance, it assigns an IP from the DHCP pool but doesn't add it to the dnsmasq hosts (addn_hosts) file. So when the instance boots and does its DHCP discover, dnsmasq doesn't see the MAC address in the hosts file, logs "no address available", and the instance doesn't get an IP. If I bounce neutron-dhcp-agent, it rebuilds the dnsmasq files and my instance will get a DHCP address. Something is not updating the file during normal instance creation, but I'm not sure where that code is. Has anyone seen this with Liberty? Is it a bug? It was working fine in Kilo.

Thanks.
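
For anyone debugging the same symptom, a rough sketch (it assumes the default /var/lib/neutron/dhcp layout; the MAC below is a placeholder) to check whether the agent ever wrote the port's MAC into the per-network dnsmasq host file at all:

import glob

mac = "fa:16:3e:xx:xx:xx"  # placeholder -- use the instance port's MAC address

# The per-network dnsmasq files live under /var/lib/neutron/dhcp/<network-id>/
# by default; the "host" file is the one dnsmasq matches DHCP requests against.
for hosts_file in glob.glob("/var/lib/neutron/dhcp/*/host"):
    with open(hosts_file) as f:
        if mac in f.read():
            print("found %s in %s" % (mac, hosts_file))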

2015-09-03 09:02:20 -0500 received badge  Enthusiast