samb's profile - activity

2018-02-19 04:53:45 -0500 received badge  Famous Question (source)
2017-08-20 01:10:42 -0500 received badge  Famous Question (source)
2017-02-19 09:55:16 -0500 received badge  Notable Question (source)
2017-01-17 01:42:53 -0500 received badge  Popular Question (source)
2017-01-12 11:00:44 -0500 commented question Advice for updating QEMU?

Thanks a bunch!

2017-01-12 09:45:55 -0500 received badge  Student (source)
2017-01-11 15:41:09 -0500 asked a question Advice for updating QEMU?

While attempting a live migration, I found out that one of our compute nodes is running QEMU 1.5 while the others are on version 2.3. I would like to update QEMU to 2.3, but I do not want to lose any VMs, which I hear can happen when going from QEMU 1.x to 2.x since 2.x is not backwards compatible.

Is it possible to snapshot each instance on that host, update QEMU, and then re-launch each instance from its snapshot? Or will the snapshot retain the QEMU 1.x metadata? Or is there an easier way?
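
In case it clarifies what I mean, here is a rough sketch of the workflow I am considering, using the standard Nova snapshot commands (the instance and image names below are placeholders, not our real ones):

    # snapshot the instance to Glance before touching the host
    nova image-create --poll my-instance my-instance-snap

    # ... upgrade QEMU on the compute node ...

    # re-launch the instance from the snapshot image
    nova boot --image my-instance-snap --flavor m1.small my-instance-new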

2016-12-30 15:25:32 -0500 received badge  Enthusiast
2016-12-29 10:47:29 -0500 received badge  Notable Question (source)
2016-12-28 23:50:27 -0500 received badge  Popular Question (source)
2016-12-27 16:03:36 -0500 asked a question OVS has high cpu usage, experiencing packet drops

After a power outage, we started experiencing problems with OVS and networking on our compute nodes. The compute nodes are running CentOS 7 and the Mitaka release of OpenStack with Neutron.

What appears to be happening is that OVS keeps re-adding the ethernet interface to its bridge. Because of the network problems, I cannot post the exact output logs, so forgive the brevity.

Our bridge is "em2" and the "ovs-vswitchd.log" file show shows hundreds of lines like:

bridge em2: added interface em2 on port 65534

All the timestamps are only milliseconds apart; OVS is just constantly doing this. The output of journalctl -xe supports this, showing

device em2 has entered promiscuous mode
device em2 has left promiscuous mode

again, just over and over, for hundreds of lines.

Finally, if I repeatedly run the command "ip link show em2", I see the output alternating between:

em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000

and

em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT qlen 1000
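
A quick way to see the flapping without spamming the command manually, assuming watch(1) is available:

    # refresh every 0.2s; the "master ovs-system" attribute appears and disappears
    watch -n 0.2 "ip -o link show em2"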

The packet drops can be seen as far down as the physical switch to which the compute nodes are connected. I do not believe there is a problem with the switch itself, because another machine of ours is connected to it and is not experiencing any problems with OVS or packet loss.

ISSUE HAS BEEN FIXED

I suspect one of our ifcfg-* files was incorrect. In the ovs-vsctl show output, we had "em2" as its own bridge with no ports except itself. All we had to do was run ovs-vsctl del-br em2 and the issue was resolved.

So I think what was happening was that the interface was being added to the em2 bridge, which was not connected to anything, so packets were lost; then the interface would be removed from that bridge, a packet would go through, it would be re-added to the bridge, and so on. I don't understand OVS well enough to give an accurate description, but hopefully this will help someone.
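
In case it helps someone hitting the same thing, the cleanup was roughly just this (a sketch of the commands mentioned above):

    # spot the stray bridge: "em2" listed as a bridge containing only itself
    ovs-vsctl show

    # delete the bogus bridge so the interface stops being re-added to it
    ovs-vsctl del-br em2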

2016-12-25 20:49:59 -0500 received badge  Famous Question (source)
2016-12-23 14:05:48 -0500 asked a question OpenVSwitch high cpu usage, dropped packets

After a power outage, two of our compute nodes are having weird network issues with OVS. The OVS logs just show:

2016-12-23T19:31:04.727Z|112808|bridge|INFO|bridge em2: added interface em2 on port 65534
2016-12-23T19:31:04.737Z|112809|bridge|INFO|bridge em2: added interface em2 on port 65534
2016-12-23T19:31:04.748Z|112810|bridge|INFO|bridge em2: added interface em2 on port 65534
2016-12-23T19:31:04.975Z|112829|bridge|INFO|bridge em2: added interface em2 on port 65534
2016-12-23T19:31:04.986Z|112830|bridge|INFO|bridge em2: added interface em2 on port 65534
2016-12-23T19:31:04.997Z|112831|bridge|INFO|bridge em2: added interface em2 on port 65534

If you look at the timestamps, you can see that these actions are happening continuously. The ovs-vswitchd.log file is thousands upon thousands of lines long, and so is /etc/openvswitch/conf.db.

I cannot figure out what is causing this behavior but it is resulting in high packet loss on those nodes.

Also, ovs-vswitchd is by far the process with the highest CPU usage.

Stopping openvswitch restores normal networking on the server, but of course the VMs then have no networking.
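
For reference, a sketch of how the symptom shows up, assuming the default CentOS log location for OVS:

    # the "added interface" messages scroll by continuously
    tail -f /var/log/openvswitch/ovs-vswitchd.log

    # ovs-vswitchd sits at the top of the CPU column
    top -b -n 1 | head -20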

2016-11-18 21:50:49 -0500 commented question Openvswitch agent not creating internal ports after restart

Problem solved, thank you for all of your help!

2016-11-18 04:29:34 -0500 received badge  Notable Question (source)
2016-11-17 08:52:23 -0500 received badge  Editor (source)
2016-11-17 08:08:46 -0500 commented question how to refresh the page header after edit horizon

When I changed the Horizon logo, it took about a day before the new logo actually appeared. I don't know whether there's a way to make the change show up instantly.
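
I have not verified this, but I believe regenerating Horizon's compressed static assets and restarting the web server forces the new logo to show up immediately; a sketch, assuming an RDO-style install under /usr/share/openstack-dashboard:

    # rebuild static files and the compressed bundles (path assumes RDO packaging)
    cd /usr/share/openstack-dashboard
    python manage.py collectstatic --noinput
    python manage.py compress --force

    # restart the web server and memcached so nothing stale is served
    systemctl restart httpd memcached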

2016-11-17 07:48:46 -0500 commented question Openvswitch agent not creating internal ports after restart

I also just noticed this in /var/log/neutron/openvswitch-agent.log:

Error received from [ovsdb-client monitor Interface name,ofport,external_ids --format=json]: None
Process [ovsdb-client monitor Interface name,ofport,external_ids --format=json] dies due to the error: None
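
It might be worth running the same monitor command by hand (copied verbatim from the log) to see whether ovsdb-server is reachable at all; a sketch:

    # should stream JSON rows for the Interface table if ovsdb-server is healthy
    ovsdb-client monitor Interface name,ofport,external_ids --format=json
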
2016-11-17 07:38:38 -0500 commented question Openvswitch agent not creating internal ports after restart

I've just edited my question to include them.

2016-11-17 07:33:55 -0500 commented answer Openvswitch agent not creating internal ports after restart

The port between the external bridge and the NIC exists, but it's not creating ports between the internal bridge and the VMs.

2016-11-17 07:33:13 -0500 received badge  Popular Question (source)
2016-11-16 14:55:04 -0500 asked a question Openvswitch agent not creating internal ports after restart

SOLVED

So the problem was that the Linux bridge interfaces on the host node were removed, for whatever reason, after the router failure, and those are not created by Neutron; they are created by Nova. So instead of messing with Neutron and Open vSwitch, all I had to do was run service openstack-nova-compute restart followed by a final service neutron-openvswitch-agent restart, and everything was fixed.
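
For completeness, the full fix was just:

    # recreates the missing Linux bridge plumbing (nova owns those bridges)
    service openstack-nova-compute restart

    # then let the OVS agent rewire its ports
    service neutron-openvswitch-agent restart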

Original problem:

After a physical router failure we had to restart the network service on our neutron host machine. After doing this, and even after restarting the neutron-openvswitch-agent and neutron-l3-agent, we are unable to reach VMs on the host. This is not a problem for our other two compute nodes on the same neutron router.

I was able to figure out that Open vSwitch is not creating ports for the VMs on the neutron host, although they are present in /etc/openvswitch/conf.db. I cannot find errors in any logs.

In the neutron database, the ports on the neutron host have the status of "DOWN".

Is there any way to get Open vSwitch to realize it needs to create the ports, if restarting it does not work?

I'm not sure exactly which config files or log files are relevant, and I just hope my question isn't too messy. This is for the Mitaka release.
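
For anyone wanting to compare, a rough sketch of the checks that seem relevant here:

    # what neutron believes: the ports on this host show status DOWN
    neutron port-list

    # what OVS actually has plumbed on the integration bridge
    ovs-vsctl list-ports br-int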

Here is the truncated ovs-vsctl show output on the neutron/compute node where I cannot reach VMs:

Bridge br-ex
    Port "em2"
        Interface "em2"
    Port "qg-15c8ef30-fa"
        Interface "qg-15c8ef30-fa"
            type: internal
    Port phy-br-ex
        Interface phy-br-ex
            type: patch
            options: {peer=int-br-ex}
    Port br-ex
        Interface br-ex
            type: internal
Bridge br-int
    fail_mode: secure
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "tap303d9e37-5f"
        tag: 1
        Interface "tap303d9e37-5f"
            type: internal
    Port "qr-943c88cf-6c"
        tag: 1
        Interface "qr-943c88cf-6c"
            type: internal
    Port br-int
        Interface br-int
            type: internal
Bridge br-tun
    fail_mode: secure
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port "vxlan-c0a8040b"
        Interface "vxlan-c0a8040b"
            type: vxlan
            options: {redacted}

And here is ovs-vsctl show on a compute node that does work.

Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
    Port phy-br-ex
        Interface phy-br-ex
            type: patch
            options: {peer=int-br-ex}
    Port "em1"
        Interface "em1"
Bridge br-int
    fail_mode: secure
    Port "qvoc3aced4c-a1"
        tag: 3
        Interface "qvoc3aced4c-a1"
    Port br-int
        Interface br-int
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
    Port "qvo8323110f-28"
        tag: 3
        Interface "qvo8323110f-28"
    Port "qvo265e94f3-db"
        tag: 3
        Interface "qvo265e94f3-db"
Bridge br-tun
    fail_mode: secure
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port "vxlan-c0a8040b"
        Interface "vxlan-c0a8040b"
            type: vxlan
            options: {redacted}
    Port "vxlan-c0a80402"
        Interface "vxlan-c0a80402"
            type: vxlan
            options: {redacted}
    Port br-tun
        Interface br-tun
            type: internal
ovs_version: "2.5.0"

See how the working node creates the ports on br-int that are like "qvoxxxxxxx-xx"?

I also see the following in the broken node's openvswitch-agent.log, but not in the working nodes' logs:

2016-11-17 09:43:37.253 32811 WARNING stevedore.named [req-562caef6-a0cf-4c0a-8443-d1e151216af6 - - - - -] Could not load neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

If I execute ovsdb-client dump on the ... (more)