
VMware + Neutron bridging problem

asked 2014-07-24 22:54:15 -0500

AlexShumilov

Hello! I have an ESXi cluster with 5 VMs: Active Directory, vCenter, an OpenStack compute node, an OpenStack controller node and an OpenStack network node (the last three were set up as in the tutorial; OS: Ubuntu 14.04 LTS, OpenStack version: Icehouse). After setup there was a problem: instances could not be launched because of a "Host not found" error. The logs said this:

 ... >in get_neutron_network\n    raise exception.NetworkNotFoundForBridge(bridge=bridge)\n', u'NetworkNotFoundForBridge: Network could not be found for bridge br-int\n']
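(For anyone hitting the same error: a quick way to see which OVS bridges actually exist on a node and which networks Neutron knows about is the standard Icehouse tooling below; the credentials file path is only an example.)

    # On the network/compute node: list the OVS bridges the agents see
    sudo ovs-vsctl show
    sudo ovs-vsctl list-br

    # With admin credentials loaded: list the networks Neutron knows about
    source ~/admin-openrc.sh   # example path - use your own credentials file
    neutron net-list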

So, I created three new networks in vCenter (br-tun, br-ext and br-int), attached them to the corresponding VMs and put the interfaces in promiscuous mode. The error went away and I could successfully create instances. So, the first question:

1. Is it correct that I've created these networks manually? When I was using FlatDHCPManager, it created br100 by itself. Also, for example, the network node has the following ip a output (six Ethernet interfaces plus the bridges!) - a note on how these bridges are normally created follows the output.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:98:57:14 brd ff:ff:ff:ff:ff:ff
    inet 192.168.46.15/24 brd 192.168.46.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe98:5714/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:98:76:75 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.15/24 brd 10.0.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe98:7675/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 00:50:56:98:71:d5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::250:56ff:fe98:71d5/64 scope link
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:48:2b:d2 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:48:2b:dc brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:48:2b:e6 brd ff:ff:ff:ff:ff:ff
8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
    link/ether 1e:57:d8:14:5f:53 brd ff:ff:ff:ff:ff:ff
9: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link ...
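For reference, these are regular OVS bridges inside the guest OS rather than vCenter networks; depending on the guide they are either created by neutron-openvswitch-agent itself or added by hand with ovs-vsctl (br-ex and eth2 below are example names):

    # Roughly what creates the bridges on the network node (names are examples):
    sudo ovs-vsctl add-br br-int              # integration bridge
    sudo ovs-vsctl add-br br-ex               # external bridge
    sudo ovs-vsctl add-port br-ex eth2        # attach the external NIC
    # br-tun is created automatically by the OVS agent when tunneling is enabled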

Comments

I've just checked a mailing list thread which said that I cannot set up Neutron without VMware NSX installed - is that true? Can I use the FlatDHCP manager without NSX?

AlexShumilov ( 2014-07-25 00:35:05 -0500 )

2 answers


answered 2014-07-25 02:59:23 -0500

AFAIK it is possible to use a Neutron plugin that is compatible with your compute virtualization technology. You can find a Neutron plugin compatibility matrix here (Table 7.4, "Plug-in compatibility with Compute drivers").

If you do not have NSX (which works with libvirt (KVM/QEMU), XenServer and VMware ESX), you can use the nova-network approach to get a fully working OpenStack + VMware installation (although it has many limitations, so check whether it fits your needs). For example, I used this technique for a multi-hypervisor KVM + ESX OpenStack installation.
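If you go that route, the relevant nova.conf pieces look roughly like this (a minimal sketch with example values; the bridge and interface names must match your own port groups):

    # /etc/nova/nova.conf, [DEFAULT] section - hedged example for FlatDHCP nova-network
    network_api_class = nova.network.api.API           # use nova-network instead of Neutron
    network_manager = nova.network.manager.FlatDHCPManager
    flat_network_bridge = br100                         # bridge / port group carrying VM traffic
    flat_interface = eth1                               # NIC attached to that port group (example)
    public_interface = eth0                             # NIC with external connectivity (example)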


Comments

Thanks a lot for the reply! I guess that any plugin requires NSX to be installed, so I am not able to use Neutron. However, I am thinking about your proposal, because a flat network with externally accessible IP addresses suits me. So you did not have NSX and successfully used nova-network?

AlexShumilov ( 2014-07-25 04:37:14 -0500 )

Yes, I successfully used the nova-network approach in order to use ESX hosts without NSX.

Here is some additional info, explained in a previous answer. In that case I used the VMwareESXdriver and not the VCDriver.
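For concreteness, that choice is just the compute_driver setting on the nova-compute node (a hedged example; the [vmware] values are placeholders):

    # /etc/nova/nova.conf on the compute node - example only
    [DEFAULT]
    compute_driver = vmwareapi.VMwareESXDriver    # talks to a single ESX host
    # compute_driver = vmwareapi.VMwareVCDriver   # alternative: talk to vCenter

    [vmware]
    host_ip = 192.168.46.20        # ESX host (or vCenter) IP - placeholder
    host_username = root           # placeholder
    host_password = secret         # placeholder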

Although the nova-network approach works with the Havana release (and maybe with Icehouse, I didn't test it), I read that nova-network support will be removed. Try to read carefully about the deprecation path of nova-network (this is just a hint) and maybe think about a future migration to Neutron (at the moment not an easy-to-do process).

Antonio G. ( 2014-07-25 05:04:00 -0500 )

Thanks again for the help! I was about to try the VCDriver, but I figured out what the problem was :)

AlexShumilov ( 2014-08-01 02:34:37 -0500 )

answered 2014-08-01 02:36:02 -0500

AlexShumilov

The problem was that I did not specify the same VLAN ID for br100. I was bridging the EXTERNAL VMware network, which was on VLAN 45, to the br100 network, which had no VLAN defined. Setting VLAN ID 45 for br100 in vCenter solved the issue.
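(For anyone checking the same thing from the ESXi side, the port group VLAN can also be inspected and set with esxcli; a standard vSwitch and the br100 port group name from this thread are assumed here.)

    # On the ESXi host: list port groups with their VLAN IDs
    esxcli network vswitch standard portgroup list

    # Tag the br100 port group with VLAN 45 (standard vSwitch assumed)
    esxcli network vswitch standard portgroup set --portgroup-name=br100 --vlan-id=45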


Comments

I didn't understand what you did and how you solved it. Which network solution do you use with that technique - nova-network or Neutron?

enis454545 ( 2016-04-02 14:02:47 -0500 )
