Mapping physical and virtual networking

I'm trying to understand the best way to set up physical networking for OpenStack, and I'm having trouble getting my head around the possible options. I'm working with a single node for now, so I'm not yet worrying about communication between nodes.

So, let's assume this is running on CentOS; the distribution shouldn't matter much to the answers, but it's easiest to name one. I want the VMs to be able to access the internet over, say, VLAN 20 (192.168.1.x), and an office network on VLAN 30 (10.10.10.x). There is also a server/management VLAN for the office, VLAN 40 (172.16.1.x).

  • On the CentOS box, I'd assume it should generally have just one IP, on the management VLAN (172.16.1.x), right?
  • Would the physical connection to the box generally be a single (presumably teamed) trunk port, or one port per VLAN (three in this case)?
  • If it's a trunk port, how does the CentOS box itself get its own traffic tagged so it can communicate?
  • If it's one port per VLAN, how does the default gateway on CentOS work, given that these VLANs may not all have routing between them?
  • For teaming, is it best to do that in Linux or with OVS tools? (Adding a bond with ovs-vsctl add-bond, or...?)
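
To make questions 3 and 5 concrete, here is the kind of host-side setup I mean. The interface names (eth0, eth1) and the 172.16.1.5 address are just placeholders from my example ranges, and I'm not sure either approach is the right one, which is exactly what I'm asking:

```shell
# Approach for question 3: tag the host's own management traffic on a
# trunk port by giving CentOS a VLAN 40 sub-interface:
ip link add link eth0 name eth0.40 type vlan id 40
ip addr add 172.16.1.5/24 dev eth0.40
ip link set eth0.40 up

# Approach for question 5: teaming done inside OVS rather than with a
# Linux bond; as far as I can tell the tool is ovs-vsctl, not ovs-ofctl:
ovs-vsctl add-bond br-ex bond0 eth0 eth1 lacp=active
```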

I've tried various permutations of all of these and have had various issues. I want to concentrate on solving the ones that lie in the way of my best path, not just every possibility I come across.

ETA: To further elaborate on questions 1 and 3: currently the CentOS box has an IP on the management VLAN (40). I also want one of the VMs to be able to communicate with this VLAN, but that is currently not working. Other external network communication works; only this one does not.

When I examined the traffic, I found the issue. When a request from a VM goes out on VLAN 20 (or rather OpenStack's internal id, which a flow in br-ex then rewrites to 20), I can see the packet on br-ex and then on the physical port, tagged with VLAN 20. The reply comes back from the switch to the physical port on VLAN 20, then to br-ex, then to br-int, which flips it back to the internal id. That all works fine. But with VLAN 40, since CentOS itself is on that VLAN, there's the extra complication of tagging the CentOS host's own packets as well.
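
For reference, this is roughly how I traced the tagging along that path (eth0 is a placeholder for the physical NIC; br-ex and br-int are the usual bridge names on my install):

```shell
# Watch VLAN 20 traffic on the physical side:
tcpdump -nei eth0 vlan 20

# Dump the br-ex flows to see the actions that rewrite the internal
# VLAN id to the external one (look for mod_vlan_vid in the actions):
ovs-ofctl dump-flows br-ex

# Show bridges, ports, and per-port VLAN settings (tag, trunks, vlan_mode):
ovs-vsctl show
```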

When I first installed CentOS I tried to do this with a VLAN sub-interface. That doesn't work with OpenStack, because the VLAN on the sub-interface then doesn't match some of the VM traffic. So I set the trunk port on the switch to use the same native VLAN as CentOS. That let CentOS communicate, as well as the VMs on the other VLANs, but not the VMs on the management VLAN: since that traffic matches the native VLAN, the switch strips the VLAN tag on egress, so it comes back to CentOS untagged and then gets dropped after br-ex.

I've tried adding a flow to br-ex to tag the untagged traffic, but for some reason that breaks all communication. I also tried setting the port in br-ex (using ovs-vsctl) to a vlan_mode of native_untagged (and native_tagged), as suggested in the guide below, and again all communication breaks; it only works when I set it back to trunk. So I'm trying to figure out: how do I work around this, and/or how should the design differ so that I don't have to? I'm currently looking through the guide suggested below, but welcome any additional comments, as I haven't found clarity yet.
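
One design change I've been considering, in case it helps frame answers: moving the host's management IP off the NIC entirely and onto an OVS internal port tagged with VLAN 40, so the switch port can stay a plain trunk with no native VLAN and nothing ever gets stripped. A sketch of what I mean (the port name mgmt0 and the 172.16.1.5 address are my own placeholders; I haven't confirmed this is the recommended pattern):

```shell
# Give the host an internal port on br-ex, tagged with the management
# VLAN, and put the management IP there instead of on a sub-interface:
ovs-vsctl add-port br-ex mgmt0 tag=40 -- set interface mgmt0 type=internal
ip addr add 172.16.1.5/24 dev mgmt0
ip link set mgmt0 up
```

As I understand it, OVS would then tag the host's traffic on its way out of br-ex just like the VM traffic, but I'd welcome confirmation that this is sane before I rebuild around it.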