Mario Sommer's profile - activity

2019-04-24 07:08:27 -0500 received badge  Famous Question (source)
2018-08-21 13:51:41 -0500 received badge  Famous Question (source)
2018-08-21 13:51:41 -0500 received badge  Notable Question (source)
2018-06-05 09:24:43 -0500 received badge  Popular Question (source)
2018-06-05 09:24:43 -0500 received badge  Notable Question (source)
2018-03-23 04:49:29 -0500 received badge  Notable Question (source)
2018-03-23 04:49:29 -0500 received badge  Famous Question (source)
2017-11-29 05:35:22 -0500 received badge  Popular Question (source)
2017-11-26 09:19:13 -0500 commented question ovs_neutron_agent MessagingTimeout

Since the upgrade, neutron-server has been running under extreme load (91.4%). I have no idea why.

7250 neutron 20 0 492648 176712 2304 R 91,4 0,6 6:01.33 neutron-server
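
One thing I want to rule out (just a guess on my side, not a confirmed cause) is that neutron-server simply has too few worker processes. neutron.conf has settings for this, e.g.:

# neutron.conf on the controller -- the values below are only an example
[DEFAULT]
api_workers = 4
rpc_workers = 4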

2017-11-26 07:23:28 -0500 asked a question ovs_neutron_agent MessagingTimeout

After an upgrade Mitaka -> Newton -> Ocata some of my compute nodes can't spawn instances any more. In nova-compute.log I can see that network allocation fails:

ERROR nova.compute.manager [req-62abe932-057a-4710-beba-7785fdd1fb52 15d72e13a10f46a5a65359f6c4e42b0f e660c9a7cd624e32944ba0b760a82044 - - -] [instance: fa8c602c-217e-4a28-87b2-36d152cac504] Failed to allocate network(s)
ERROR nova.compute.manager [instance: fa8c602c-217e-4a28-87b2-36d152cac504] VirtualInterfaceCreateException: Virtual Interface creation failed

In openvswitch-agent.log I can see that calls to neutron time out. Does anyone have an idea what could be wrong?

ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-f1f898d1-ac40-4868-b1a4-0e7fd3b4d06d - - - - -] Error while processing VIF ports
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2039, in rpc_loop
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info, ovs_restarted)
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     return f(*args, **kwargs)
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1642, in process_network_ports
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     failed_devices['added'] |= self._bind_devices(need_binding_devices)
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 878, in _bind_devices
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     self.conf.host)
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 146, in update_device_list
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     agent_id=agent_id, host=host)
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 174, in call
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     time.sleep(wait)
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     self.force_reraise()
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     six.reraise(self.type_, self.value, self.tb)
    2017-11-26 13:27:25.459 18335 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages ...
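One mitigation I am considering (only an assumption, since the root cause of the slow neutron-server is still unclear) is to give the agents a longer RPC timeout so they stop hitting MessagingTimeout:

# neutron.conf on the compute nodes -- example value, the default is 60 seconds
[DEFAULT]
rpc_response_timeout = 300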
2017-11-26 03:43:44 -0500 answered a question Cinder Ceph fails after upgrade

After 12 hours of debugging I finally found the answer (https://openstack.nimeyo.com/117879/o... )

This totally sucks! How can such a big bug slip through any QA?

2017-11-25 14:17:37 -0500 asked a question Cinder Ceph fails after upgrade

I just upgraded Mitaka -> Newton -> Ocata. I did not change the cinder configuration, but after the restart my cinder Ceph volume service is down. Even with debug enabled, the only messages I get are:

DEBUG cinder.volume.drivers.rbd [req-1d7d6abf-0aa7-46f3-945c-c701aac2a529 - - - - -] connecting to ceph (timeout=-1). _connect_to_rados /usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py:300

ERROR cinder.service [-] Manager for service cinder-volume kmgmt01.XXXX.XXt@Ceph-SAS is reporting problems, not sending heartbeat. Service will appear "down".

I have no idea what to do now.
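
What I plan to check next (just my own debugging ideas; the cinder user name and the Ceph-SAS backend name are specific to my setup):

# test the ceph connection with the credentials cinder uses
ceph -s --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring

# cinder.conf, backend section: fail fast instead of hanging forever (example value)
[Ceph-SAS]
rados_connect_timeout = 5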

2017-11-06 06:27:19 -0500 answered a question Cinder backend driver to support multiattach function

Be aware that nova does not support multi-attach. So cinder multi-attach is not usable yet, no matter what backend you use.

2017-07-03 01:57:44 -0500 received badge  Famous Question (source)
2017-05-17 15:19:32 -0500 received badge  Popular Question (source)
2017-05-12 11:28:10 -0500 received badge  Notable Question (source)
2017-05-12 11:28:10 -0500 received badge  Popular Question (source)
2017-05-12 04:12:57 -0500 asked a question cinder nfs backup performance

Hello,

Does anyone actually use cinder backup to NFS in production?

In my test system (6-core Xeon E5-2603 @ 1.70GHz) even a single small backup job (a 10 GB volume) creates so much CPU load that the cinder-backup service fails to send its status to RabbitMQ in time. This marks the cinder-backup service as "down" and prevents a second backup job from being started.

2017-05-12 10:38:41.541 18978 WARNING oslo.service.loopingcall [req-d1110b12-7e8f-4eea-9ee0-fafb27614906 Openstack Admin 15d30213519a4e55aa38821b1dc9f21c - - -] Function 'cinder.service.Service.report_state' run outlasted interval by 118.31 sec

Are there any recommendations for tuning the cinder backup service?
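
The only tuning knobs I have found so far (untested on my side, the values are just examples):

[DEFAULT]
# skip compression to save CPU (the default is zlib)
backup_compression_algorithm = none
# give the heartbeat more headroom (the defaults are 10 / 60 seconds)
report_interval = 30
service_down_time = 120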

2017-03-22 04:49:17 -0500 asked a question Live Migration fails

Hi,

I'm (still) on Liberty and I have 2 compute nodes with different CPU types, one Broadwell-noTSX and one Haswell-noTSX. To allow live migration between the hosts I set both to the minimal common CPU feature set (which is the Haswell one):

cpu_mode=custom
cpu_model=Haswell-noTSX

With ps I see that the instances are created with -cpu Haswell-noTSX, even on the Broadwell node.

qemu     18585 51.2  1.9 18142588 15639048 ?   Sl   08:44  58:59 /usr/libexec/qemu-kvm -name instance-0000460d -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX, [...]

But live migration Broadwell -> Haswell still fails:

2017-03-22 08:58:20.769 16275 WARNING nova.virt.libvirt.driver [req-6079712a-957e-4cbb-905b-f731a4ed850d Openstack Admin 512f554db789466f9a205e95edc24643 - - -] [instance: 4c45d1ca-90cc-4e55-87e3-8c6794a2439d] An error occurred trying to live migrate. Falling back to legacy live migrate flow. Error: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: smap, adx, rdseed, rtm, hle, 3dnowprefetch; try using 'Broadwell-noTSX' CPU model

What am I doing wrong?
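
For completeness, this is how I would double-check the CPU definitions by hand (just a diagnostic idea, not a fix):

# on the Broadwell node: which CPU does the running guest actually have?
virsh dumpxml instance-0000460d | grep -A10 '<cpu'

# on the Haswell node: can the host run that CPU definition?
# (cpu.xml is the <cpu> element extracted from the dumpxml output above)
virsh cpu-compare cpu.xml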

2015-10-19 17:41:43 -0500 received badge  Student (source)
2015-09-21 14:38:57 -0500 received badge  Famous Question (source)
2015-08-31 06:47:42 -0500 received badge  Notable Question (source)
2015-08-31 06:42:40 -0500 commented answer GRE inside Openstack Instances

Got it!

modprobe nf_conntrack_proto_gre fixed it. Now the traffic isn't marked as invalid any more.
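
To make this survive a reboot one could persist the module (assuming a systemd-based distro that reads modules-load.d):

# load nf_conntrack_proto_gre at boot
echo nf_conntrack_proto_gre > /etc/modules-load.d/gre.conf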

2015-08-31 06:33:21 -0500 commented answer GRE inside Openstack Instances

The iptables rule is there, but it is never triggered:

0 0 RETURN 47 -- * * 0.0.0.0/0 0.0.0.0/0

2015-08-31 06:30:54 -0500 received badge  Editor (source)
2015-08-31 06:29:07 -0500 answered a question GRE inside Openstack Instances

I traced it back to this rule on the physical compute node (c40340f6-1 is the virtual compute node's interface):

Chain neutron-openvswi-oc40340f6-1 (2 references)
pkts bytes target     prot opt in     out     source             destination         
8468  597K DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0           state INVALID

Seems like GRE inside GRE is considered invalid?!
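
A quick way to check whether GRE connection tracking is available at all on the physical node (just a diagnostic idea):

# if this prints nothing, GRE conntrack is not loaded and the GRE-in-GRE
# packets end up in state INVALID
lsmod | grep nf_conntrack_proto_gre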

2015-08-28 13:31:23 -0500 received badge  Popular Question (source)
2015-08-27 15:08:52 -0500 asked a question GRE inside Openstack Instances

I am trying to clone my working physical OpenStack system (Controller, Compute, Network Node) inside the OpenStack environment itself (with qemu instead of kvm). Everything works fine except for the network part.

I use the same network setup and neutron configuration as in the physical environment (IPs/neutron/OVS/GRE). ovs-vsctl show looks good: all bridges, ports and the GRE interface with the correct local_ip and remote_ip are present. Both nodes can ping one another on the eth device that is used for the GRE tunnel. tcpdump shows outgoing GRE packets on the virtual compute node, but no GRE packets ever reach the eth device on the virtual network node. To me it looks like the physical system is "eating" the GRE packets. Can someone confirm that it is at all possible to have GRE tunnels between OpenStack instances?
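
For reference, this is roughly how I look for the GRE packets with tcpdump (eth1 is a placeholder for the tunnel interface):

# GRE is IP protocol 47
tcpdump -ni eth1 ip proto 47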

2015-05-29 00:34:05 -0500 received badge  Enthusiast