Problem with Open vSwitch and VLAN

asked 2016-05-04 22:57:35 -0500 by heimdull

I'm following the provider networks with Open vSwitch scenario and I'm stuck at booting an instance: http://docs.openstack.org/mitaka/networking-guide/scenario-provider-ovs.html

From what I can tell, DHCP traffic is not making it from the controller node to the compute nodes. I have this network setup:

controller
 - eth0: 10.1.0.11
 - eth1 tagged for vlan 200 (10.2.0.0/23)
compute1
 - eth0: 10.1.0.31
 - eth1 tagged for vlan 200 (10.2.0.0/23)
compute2
 - eth0: 10.1.0.32
 - eth1 tagged for vlan 200 (10.2.0.0/23)
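One thing worth double-checking with a layout like the one above (this is a suggestion, not something confirmed by the logs): in the Mitaka OVS provider scenario, eth1 is attached to the provider bridge untagged and Open vSwitch applies the VLAN 200 tag itself, so if the host already presents eth1 as a tagged eth1.200 subinterface the traffic ends up double-tagged. A minimal sanity check, assuming the bridge is named br-provider as in the guide:

```shell
# eth1 should appear as a plain (untagged) port on the provider bridge:
ovs-vsctl show

# The ML2/OVS agent config should map the physical network name to that
# bridge and declare the VLAN range (names here follow the guide's example):
grep -r 'bridge_mappings\|network_vlan_ranges' /etc/neutron/plugins/ml2/
# Expected something like:
#   bridge_mappings = provider:br-provider
#   network_vlan_ranges = provider
```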


root@controller:~# lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.4 LTS
Release:    14.04
Codename:   trusty

This is from the console of my instance (I tried multiple images and compute nodes; same error):

[ 1676.128104] dhclient[279]: No working leases in persistent database - sleeping.
[ 2073.296531] dhclient[279]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7
[ 2080.745989] dhclient[279]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
[ 2090.782171] dhclient[279]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
[ 2100.403171] dhclient[279]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 13
[ 2113.749431] dhclient[279]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7
[ 2121.037653] dhclient[279]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7
[ 2127.301099] dhclient[279]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7
[ 2134.368263] dhclient[279]: No DHCPOFFERS received.

By the way, this is my first time setting up OpenStack, so I'm sure I'm doing something wrong.

I collected some information. I assumed I would be able to ping the gateway at 10.2.0.4 from the provider-200 network side, but I can't. Is that why the DHCP requests are not reaching the compute nodes?
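One way to narrow down where the DHCPDISCOVERs die is to watch for them at each hop with tcpdump. This is a generic troubleshooting sketch; the namespace and tap names come from the ifconfig output below, and the interface names match the setup above, so adjust as needed:

```shell
# On the controller, inside the DHCP namespace: does dnsmasq see the request?
ip netns exec qdhcp-03882529-d73d-4ad6-a96e-f83a9147f91f \
    tcpdump -ni tapb8b07cf8-ba 'port 67 or port 68'

# On the compute node, on the physical NIC: does the request leave the host,
# and is it tagged with VLAN 200 as expected?
tcpdump -eni eth1 'vlan 200 and (port 67 or port 68)'
```

If the request shows up on the compute node's eth1 correctly tagged but never appears in the DHCP namespace, the problem is on the physical path (or hypervisor vSwitch) between the nodes rather than in Neutron.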

I also tried removing Open vSwitch and using the Linux bridge agent with a flat network, and that works fine. But for a small private cloud I assume a provider network, where each compute node talks directly to our switch, is the best setup.

We are looking into replacing an all-ESXi setup with something similar. (In ESXi I can have two interfaces and send both management traffic and the prod/dev/qa/staging/other VLAN tags down the same links, with iSCSI down the other links...)

root@controller:~# ip netns exec qdhcp-03882529-d73d-4ad6-a96e-f83a9147f91f ifconfig
tapb8b07cf8-ba Link encap:Ethernet  HWaddr fa:16:3e:13:4d:02
          inet addr:10.2.0.50  Bcast:10.2.1.255  Mask:255.255.254.0
          inet6 addr: fe80::f816:3eff:fe13:4d02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 ...

1 answer


answered 2016-05-06 15:10:12 -0500 by heimdull

I'm not sure what the issue is, but I installed the same setup with the same network configuration on two hosts in a different datacenter, and the same config files work on the new hosts.

Currently this setup runs inside an ESXi host, so both the controller and the compute nodes are VMs on the same ESXi host. It could be that the ESXi vSwitch is not passing the DHCP broadcasts; this is just a guess.
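If that guess is right, one thing worth checking (an assumption on my part, not something verified against this exact setup) is the security policy on the ESXi vSwitch backing the nested nodes' eth1: OpenStack instances send frames from MAC addresses the vSwitch did not assign, so Promiscuous Mode and Forged Transmits typically need to be allowed on that vSwitch or port group. A sketch, where vSwitch1 is a placeholder name:

```shell
# On the ESXi host: show the current security policy for the vSwitch
# carrying the provider VLANs (vSwitch1 is hypothetical).
esxcli network vswitch standard policy security get -v vSwitch1

# Allow traffic to/from MACs the vSwitch did not assign itself:
esxcli network vswitch standard policy security set -v vSwitch1 \
    --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true
```

The port group trunking the provider VLANs into the nested hosts also needs to pass tagged frames (e.g. VLAN ID 4095, "VGT" mode, on a standard vSwitch); otherwise the VLAN 200 tags are stripped or dropped before they reach the other VM.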

