Nested virtualization with KVM/OpenVSwitch

Hi all,

I have an OpenStack installation running on a bunch of old hardware we put together from old PCs. Now I want to set up OpenStack on a single host, but inside multiple VMs virtualized with KVM. The VMs are connected via Open vSwitch. The network setup is the following: Openstack Network Setup

I can successfully ping each instance over the management network, which runs over intbr0. I can also ping compute1, compute2 and the network node via the IP addresses of their second NICs, which are connected to tunbr0. A third NIC on the network node is set to 'manual' and attached to br-ex. I then set up Keystone, Glance, Neutron and Nova. Inside the VMs I use Open vSwitch again, this time managed by OpenStack; for the inter-VM traffic I use VXLAN tunnels. The connection to the outside world is done via VLAN, with trunking configured on both Open vSwitches (br-ex and databr0).
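
In case it matters, the trunking was configured on the OVS side roughly like this (just a sketch of what I ran; vnet8 is the network VM's external port on the host, ens9 the uplink inside the network node):

```shell
# On the physical host: let the network VM's external port (vnet8)
# carry the tenant VLANs as a trunk on databr0
sudo ovs-vsctl set port vnet8 trunks=2003,2004

# Inside the network node: the same for the uplink ens9 on br-ex
sudo ovs-vsctl set port ens9 trunks=2003,2004
```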

I was able to create multiple instances, which appear correctly on the compute nodes and are shown without errors. I also created two networks: an internal Test-Customer network and an external network. I created the needed subnets and a router and attached the router to both networks correctly.

After doing so I was able to:

- ping the router IPs from the router's namespace (internal IP and gateway IP)

I was NOT able to:

- ping the attached VMs
- ping another machine in the same subnet attached to databr0
- ping the router gateway from the machine attached to databr0

However, my instances got IPs from the correct subnet after booting, so I assume some DHCP magic is working (but why does my ping fail then?).

I had a setup like that running on ESXi. I remember I had to set the vSwitch to promiscuous mode there. However, I did nothing like that on databr0, tunbr0 or intbr0, which is - as I assume - the reason packets may get dropped there. I found THIS article, but I'm not sure how to set that up.
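
In case it helps, this is how I've been looking for where packets disappear (standard OVS diagnostics, nothing setup-specific):

```shell
# Dump the learned MAC table of a bridge - if the destination MAC of my
# ping never shows up here, the frame probably never reached this bridge
sudo ovs-appctl fdb/show databr0

# Dump the OpenFlow rules; on a plain (non-Neutron) bridge this should
# just be the default NORMAL action doing MAC learning
sudo ovs-ofctl dump-flows databr0

# Capture directly on a VM's port to see whether traffic leaves/enters it
sudo tcpdump -eni vnet8 icmp
```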

If anyone could help me get this working, I would highly appreciate it. Open vSwitch and OpenFlow look like some dark art to me right now, and I have been stuck at this point for over a week. If you need more information, please feel free to ask :)

Greetings from Germany,
Micha


EDIT (more info): I just remembered that I can't actually be sure the instances got their IPs - I was only checking the metadata (which may be incorrect).

The network is not configured in virsh (yet - should I do that?). I attached the instances by editing their domain XML and adding another interface like this:

<interface type='bridge'>   
  <mac address='52:54:00:43:68:01'/>  
  <source bridge='ostun0'/>  
  <virtualport type='openvswitch'>
    <parameters interfaceid='85dc9976-0f7b-4995-891d-c1ce2f5902a6'/>
  </virtualport>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> 
</interface>
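
For the record, I edited the domain XML directly, but saving such a snippet to a file and attaching it should work too (iface.xml and the VM name network1 are just placeholders for my setup):

```shell
# Hot-plug the interface into the running domain and also persist it
# in the stored domain definition
sudo virsh attach-device network1 iface.xml --live --config
```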

However, I also found out that libvirt apparently ships some network filtering (nwfilter):

virsh # nwfilter-list 
 UUID                                  Name                 
------------------------------------------------------------------
 33bbc764-11b7-4ca4-aa1a-18824e34c4e6  allow-arp           
 04184b11-1b34-4b6d-b165-ca4b4e5ba95d  allow-dhcp          
 b1852242-1ad1-4532-a93a-877eaa50c4c6  allow-dhcp-server   
 2cf82839-eba7-42e9-a185-d8769ffa1426  allow-incoming-ipv4 
 21b246cc-af53-4534-bdc9-1ca7d02cba45  allow-ipv4          
 7415b3e5-56d2-4307-87db-05192550a250  clean-traffic       
 0ae14293-0feb-43b4-bfad-02becb9ed97f  no-arp-ip-spoofing  
 29b87460-fdb9-4ad6-877f-6845fa39d036  no-arp-mac-spoofing 
 42c7f827-fa04-4950-a47c-85f87093db54  no-arp-spoofing     
 3a490558-4afb-4905-be38-cd89ea3446a3  no-ip-multicast     
 8036475f-6f81-4fb7-85b7-19672c6fb4bb  no-ip-spoofing      
 1e0f16dd-7f07-4cd8-b6db-2d5d7b43d7de  no-mac-broadcast    
 4842ab50-6826-400c-9d4b-5e26a1fe0a93  no-mac-spoofing     
 9e694f78-e0d6-4d35-84eb-5bda16f471d0  no-other-l2-traffic 
 ce03eadf-4cc5-4f26-8b31-28f2d76fd879  no-other-rarp-traffic
 865e29f9-7088-4519-9dc4-57499ecb92be  qemu-announce-self  
 96416978-64bf-4a63-88e0-7e514f04d1c5  qemu-announce-self-rarp
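
As far as I understand, those filters only matter if a domain's interface actually references one, which can be checked in the domain XML (again, network1 is my VM's name):

```shell
# Show whether any interface of the domain references an nwfilter;
# no output should mean libvirt's filters are not applied to this VM
sudo virsh dumpxml network1 | grep -A1 filterref
```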

ovs-vsctl show on the host:

user@host:~$ sudo ovs-vsctl show
73bade2a-a301-49d6-9bdc-306c92a490de
    Bridge "ostun0"
        Port "ostun0"
            Interface "ostun0"
                type: internal
        Port "vnet12"
            Interface "vnet12"
        Port "vnet10"
            Interface "vnet10"
        Port "vnet7"
            Interface "vnet7"
    Bridge "intbr0"
        Port "vnet14"
            Interface "vnet14"
        Port "vnet6"
            Interface "vnet6"
        Port "vnet5"
            Interface "vnet5"
        Port "vnet3"
            Interface "vnet3"
        Port "vnet1"
            Interface "vnet1"
        Port "vnet9"
            Interface "vnet9"
        Port "vnet11"
            Interface "vnet11"
        Port "vnet4"
            Interface "vnet4"
        Port "intbr0"
            Interface "intbr0"
                type: internal
    Bridge "databr0"
        Port "vnet8"
            trunks: [2003, 2004]
            Interface "vnet8"
        Port "vnet0"
            Interface "vnet0"
        Port "databr0"
            Interface "databr0"
                type: internal
        Port "dev1"
            Interface "dev1"
                type: internal
    ovs_version: "2.5.2"

vnet6, vnet7 and vnet8 are the ports of the network instance. vnet10 and vnet12 are the compute nodes.
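
One thing I tried for following a packet through these bridges is ofproto/trace; a sketch of what I ran (the in_port number and the broadcast destination are assumptions; the real port numbers come from ovs-ofctl show):

```shell
# Find the OpenFlow port numbers first
sudo ovs-ofctl show databr0

# Then trace a synthetic broadcast frame entering on vnet8 (OpenFlow
# port 1 assumed here) with VLAN 2003, and see where OVS would output it
sudo ovs-appctl ofproto/trace databr0 \
  in_port=1,dl_vlan=2003,dl_src=52:54:00:43:68:01,dl_dst=ff:ff:ff:ff:ff:ff
```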

and on the network node:

user@host:~$ sudo ovs-vsctl show
05cd067e-3831-4721-bd1b-c61561c5388b
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tape567d86e-f1"
            tag: 1
            Interface "tape567d86e-f1"
                type: internal
        Port "qr-1bd55b0a-f1"
            tag: 1
            Interface "qr-1bd55b0a-f1"
                type: internal
        Port "qg-3e0b069a-a3"
            tag: 2
            Interface "qg-3e0b069a-a3"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        fail_mode: secure
        Port "ens9"
            trunks: [2003, 2004]
            Interface "ens9"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a8022b"
            Interface "vxlan-c0a8022b"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.2.41", out_key=flow, remote_ip="192.168.2.43"}
        Port "vxlan-c0a8022a"
            Interface "vxlan-c0a8022a"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.2.41", out_key=flow, remote_ip="192.168.2.42"}
    ovs_version: "2.5.2"
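
Side note: the VXLAN port names seem to just encode the remote tunnel IP as eight hex digits, which made it easier for me to match tunnels to nodes. A quick decode, assuming this naming convention holds:

```shell
# Decode a Neutron VXLAN port name like "vxlan-c0a8022b" into the
# remote tunnel endpoint IP (four hex octets after the prefix)
decode_vxlan_port() {
  hex="${1#vxlan-}"
  printf '%d.%d.%d.%d\n' "0x${hex:0:2}" "0x${hex:2:2}" "0x${hex:4:2}" "0x${hex:6:2}"
}

decode_vxlan_port vxlan-c0a8022b   # → 192.168.2.43 (compute2)
decode_vxlan_port vxlan-c0a8022a   # → 192.168.2.42 (compute1)
```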
