shimi's profile - activity

2016-07-18 09:47:00 -0500 asked a question Running a router inside an OpenStack VM

I'm trying to run a router VM (actually an IPsec VPN server) inside OpenStack. The deployment is Mirantis 7 (why not newer? 8 for some reason causes a kernel oops with the same image that works fine on 7, and 9, which was just released, hasn't been tested yet).

We're using a Ceph-based installation and "Neutron with tunnelling segmentation".

Looking at the output of 'arp -n' inside a VM shows that it sees the MAC addresses of other VMs on its network, so I imagined that a simple 'route add 10.0.0.0/24 gw [other VM IP]' in Linux, followed by pinging 10.0.0.1, would make those packets show up in tcpdump running on [other VM IP]. That does not seem to be the case.
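Concretely, the test I ran looked roughly like this (addresses are examples; 10.0.0.5 stands in for the other VM's IP, and eth0 for whatever the guest's interface is named):

```shell
# On the "client" VM: route the subnet via the would-be router VM
# (iproute2 equivalent of the 'route add' above):
sudo ip route add 10.0.0.0/24 via 10.0.0.5

# Generate traffic that should now go through 10.0.0.5:
ping -c 3 10.0.0.1

# Meanwhile, on the router VM (10.0.0.5), watch for the forwarded packets:
sudo tcpdump -ni eth0 icmp
```

The tcpdump on the router VM stays silent, which is what prompted the question.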

Googling a bit, I found this document: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver - which sounds relevant (I see similar behaviour in GCE, where it is solved simply by creating VMs with the "IP Forwarding" flag enabled). So I tried that - or at least I think I did; the instructions assume I already know the precise steps, while I am merely guessing - by editing /etc/neutron/plugins/ml2/ml2_conf.ini on each and every one of the controllers in my Mirantis cluster and adding the line:

extension_drivers = port_security

and then executing:

service neutron-server restart
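For clarity, the edit I made on each controller looked like this (putting the setting in the [ml2] section is my assumption, based on where the wiki's examples place it):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
extension_drivers = port_security
```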

which, by the way, did not have any effect at first. I assume that maybe only one of the 3 controllers is primary (one active and two standbys), and as long as one with the old config was still running, it kept things working.

When I did the same on the last controller, all hell broke loose: API calls to get the network status (openstack ... network show [netID]) failed with an error.

Then I started examining the logs and found that the exception below was thrown every time I tried the API call:

<166>Jul 11 16:49:35 node-25 neutron-metadata-agent 2016-07-11 16:49:35.993 17687 INFO eventlet.wsgi.server [-] (17687) accepted ''
<163>Jul 11 16:49:36 node-25 neutron-metadata-agent 2016-07-11 16:49:36.074 17687 ERROR neutron.agent.metadata.agent [-] Unexpected error.
2016-07-11 16:49:36.074 17687 TRACE neutron.agent.metadata.agent Traceback (most recent call last):
2016-07-11 16:49:36.074 17687 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 109, in __call__
2016-07-11 16:49:36.074 17687 TRACE neutron.agent.metadata.agent     instance_id, tenant_id = self._get_instance_and_tenant_id(req)
2016-07-11 16:49:36.074 17687 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 216, in _get_instance_and_tenant_id
2016-07-11 16:49:36.074 17687 TRACE neutron.agent.metadata.agent     ports = self._get_ports(remote_address, network_id, router_id)
2016-07-11 16:49:36.074 17687 TRACE neutron.agent.metadata.agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/metadata/agent.py", line 204, in _get_ports
2016-07-11 16:49:36 ...