
Kernel_Panic's profile - activity

2017-06-07 17:04:04 -0500 received badge  Popular Question (source)
2017-06-07 17:04:04 -0500 received badge  Famous Question (source)
2017-06-07 17:04:04 -0500 received badge  Notable Question (source)
2017-01-16 02:42:15 -0500 received badge  Famous Question (source)
2016-12-24 18:38:46 -0500 received badge  Popular Question (source)
2016-12-24 18:38:46 -0500 received badge  Famous Question (source)
2016-12-24 18:38:46 -0500 received badge  Notable Question (source)
2016-04-18 14:23:26 -0500 received badge  Famous Question (source)
2016-03-12 12:50:54 -0500 answered a question Openstack Manual Installation (CentOS/RHEL) Network Questions

I agree with Mohit. On each node with access to the flat network, you need to create an OVS bridge. This must be done outside of OpenStack with the ovs-vsctl command:

ovs-vsctl add-br br252

Then add the PHYSICAL NIC as a port on the bridge. Make sure you turn off system configuration on the interface, because OVS needs complete control of the hardware on a bridge. Bonding (LACP) and VLAN tagging (802.1q) WILL NOT WORK on an OVS bridge; it needs a physical port:

ovs-vsctl add-port br252 eno2
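
To keep the OS from putting an address on the NIC and to make the bridge survive reboots, one option (just a sketch, assuming the ifup-ovs network-scripts helpers shipped with the CentOS openvswitch package are installed) is to manage both devices through ifcfg files:

# /etc/sysconfig/network-scripts/ifcfg-br252 (hypothetical example)
DEVICE=br252
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eno2 (hypothetical example)
DEVICE=eno2
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br252
ONBOOT=yes
BOOTPROTO=none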

Then configure your ML2 openvswitch_agent.ini for the VLANs. This worked for me on CentOS 7 with DVR.

cat /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.0.14
enable_tunneling = True
bridge_mappings = ext252:br252
[agent]
l2_population = True
tunnel_types = vxlan,gre
enable_distributed_routing = True
arp_responder = True
[ml2_type_flat]
flat_networks = ext252
[ml2_type_vlan]
network_vlan_ranges = ext250,ext251,ext252
[ml2_type_gre]
tunnel_id_ranges = 1000:2000
[ml2_type_vxlan]
vni_ranges = 1000:2000
vxlan_group = 239.1.1.1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True

Then restart the Neutron and Nova agents.
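
On CentOS 7 with RDO packaging that usually means something like the following (the service names are an assumption; adjust them to how your deployment is laid out):

# on every node running the OVS agent
systemctl restart neutron-openvswitch-agent
# on nodes running the L3/DHCP/metadata agents
systemctl restart neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
# on compute nodes
systemctl restart openstack-nova-compute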

You will see your new bridge and ports in OVS.

ovs-vsctl show
.....
Bridge "br252"
        Port "phy-br252"
            Interface "phy-br252"
                type: patch
                options: {peer="int-br252"}
        Port "eno2"
            Interface "eno2"
        Port "br252"
            Interface "br252"
                type: internal
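
Once the agent is back up, the ext252 mapping can be consumed as a flat provider/external network. A rough example with the Liberty-era neutron CLI (the network name and CIDR below are just placeholders):

neutron net-create ext-net --router:external --provider:network_type flat --provider:physical_network ext252
neutron subnet-create ext-net 198.18.252.0/24 --name ext-subnet --disable-dhcp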
2016-02-29 15:09:40 -0500 received badge  Notable Question (source)
2016-02-24 20:57:21 -0500 received badge  Popular Question (source)
2016-02-23 22:58:37 -0500 received badge  Notable Question (source)
2016-02-22 15:05:04 -0500 received badge  Self-Learner (source)
2016-02-22 15:05:04 -0500 received badge  Necromancer (source)
2016-02-22 15:05:04 -0500 received badge  Teacher (source)
2016-02-22 11:22:28 -0500 received badge  Associate Editor (source)
2016-02-22 11:20:55 -0500 answered a question docker openstack python setup.py install roll back?!?!

I have reinstalled the impacted services, recreated their databases, and debugged for 12 hours. The services will not start, my Horizon dashboard splash screen has changed, and I can no longer log in as the demo user. I now believe the docker setup.py script has done irreparable damage to my controller and compute nodes, and they will never be the same again. Time to salvage my .conf files, then wipe and rebuild the nodes. :-(

Lesson learned: I'll test my next container service on a standalone RDO deployment.

2016-02-22 10:18:31 -0500 edited question Why does the controller have a linux bridge interface on the public network?

Following the RHEL 7 Liberty install docs, which are EXCELLENT by the way. Does the controller have to be bridged on the public interface? For security reasons we would not want the controller exposed at all to the public-facing or customer-facing networks. Is this really necessary? THANKS!! From the guide's section "Configure the Linux bridge agent":

The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances including VXLAN tunnels for private networks and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

    In the [linux_bridge] section, map the public virtual network to the public physical network interface:

    [linux_bridge]
    physical_interface_mappings = public:PUBLIC_INTERFACE_NAME

    Replace PUBLIC_INTERFACE_NAME with the name of the underlying physical public network interface.

Props again for the EXCELLENT DOCUMENTATION!!

2016-02-22 01:26:16 -0500 asked a question docker openstack python setup.py install roll back?!?!

I followed the guide at https://wiki.openstack.org/wiki/Docker . The docker hello-world never completed to validate the install, and afterwards I could not start nova-compute on the compute node, or nova-api, glance, and heat on the controller. It doesn't appear there is a way to uninstall what the setup.py script from the doc installed, so now my lab cloud is an unhappy camper. Any ideas how to roll this back?

cd src/novadocker/
python setup.py install

Update: I have the nova and glance APIs started on the controller, but heat will not start. It seems the openstack docker setup.py touched a lot of Python code having to do with nova, glance, and heat, and now they will not start.

I ran setup with "python setup.py install --record installed-files.txt"; the contents are below.

[root@controller novadocker]# cat installed-files.txt 
/usr/lib/python2.7/site-packages/novadocker/tests/__init__.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_firewall.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_driver.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/mock_client.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_vifs.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_opencontrail.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_hostinfo.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_network.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/__init__.py
/usr/lib/python2.7/site-packages/novadocker/virt/docker/hostinfo.py
/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py
/usr/lib/python2.7/site-packages/novadocker/virt/docker/network.py
/usr/lib/python2.7/site-packages/novadocker/virt/docker/vifs.py
/usr/lib/python2.7/site-packages/novadocker/virt/docker/opencontrail.py
/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py
/usr/lib/python2.7/site-packages/novadocker/virt/docker/__init__.py
/usr/lib/python2.7/site-packages/novadocker/i18n.py
/usr/lib/python2.7/site-packages/novadocker/__init__.py
/usr/lib/python2.7/site-packages/novadocker/virt/hostutils.py
/usr/lib/python2.7/site-packages/novadocker/virt/__init__.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/test_hostutils.py
/usr/lib/python2.7/site-packages/novadocker/tests/virt/__init__.py
/usr/lib/python2.7/site-packages/novadocker/locale/nova-docker.pot
/usr/lib/python2.7/site-packages/novadocker/tests/__init__.pyc
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_firewall.pyc
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_driver.pyc
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/mock_client.pyc
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_vifs.pyc
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_opencontrail.pyc
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_hostinfo.pyc
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/test_network.pyc
/usr/lib/python2.7/site-packages/novadocker/tests/virt/docker/__init__.pyc
/usr/lib/python2.7/site-packages/novadocker/virt/docker/hostinfo.pyc
/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.pyc
/usr/lib/python2.7/site-packages/novadocker/virt/docker/network.pyc
/usr ...
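
Since the install was recorded, one partial cleanup I may try is to delete exactly the files in that list (a sketch only; it removes just the novadocker files themselves and will not undo any Python dependencies that setup.py pulled in or upgraded):

cd src/novadocker/
xargs -d '\n' rm -vf < installed-files.txt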
2016-02-22 01:19:43 -0500 answered a question Why does the controller have a linux bridge interface on the public network?

I worked around this by using the DVR configuration with a network node. Then the controller only needs the management interface for the control plane, and the network and compute nodes have data-plane access to the external flat network. I will have to proxy the tenants' dashboard from the public side of the network node back to the controller, where admins can access their dashboard and APIs from the management or another private network.

DVR improves security, scalability, and performance, because it lets tenants create routers on the compute nodes with external access without routing through a controller. This works with VXLAN or GRE tunnels, and it's a great sandbox to learn OpenFlow and SDN.

from this scenario: http://docs.openstack.org/liberty/net...

in this configuration: (see the network layout diagram in the linked scenario)

Hats off to the technical writers on this project. There is one caveat with this guide: on the compute and network nodes, the OVS configuration needs to go in "/etc/neutron/plugins/ml2/openvswitch_agent.ini"; the guide puts it in ml2_conf.ini, which will not work. It took me a long time to figure this out; I hope it saves you some time. OVS configs below.

[root@network ~]# grep -v '^$' /etc/neutron/plugins/ml2/openvswitch_agent.ini  |grep -v '^ *#'
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.101.11
enable_tunneling = True
bridge_mappings = ext252:br252
[agent]
l2_population = True
tunnel_types = vxlan,gre
enable_distributed_routing = True
arp_responder = True
[ml2_type_flat]
flat_networks = ext252
[ml2_type_vlan]
network_vlan_ranges = ext250,ext251,ext252
[ml2_type_gre]
tunnel_id_ranges = 1000:2000
[ml2_type_vxlan]
vni_ranges = 1000:2000
vxlan_group = 239.1.1.1
[securitygroup]
[root@network ~]#

Also not in the guide: you will need to map the bridge to your physical interface with ovs-vsctl. (Without Open vSwitch, this is handled by the linuxbridge agent's .ini file instead.)

ovs-vsctl add-br br252
ovs-vsctl add-port br252 em1
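
A quick sanity check after that (assuming RDO/CentOS 7 service names): restart the OVS agent so it creates the int-br252/phy-br252 patch ports, then confirm the agents report alive:

systemctl restart neutron-openvswitch-agent
neutron agent-list
ovs-vsctl show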
2016-02-22 01:16:27 -0500 received badge  Famous Question (source)
2016-02-22 01:16:03 -0500 answered a question High Availability using Distributed Virtual Routing (DVR) with bond vlan tagged interfaces.

Testing reveals it is not possible to connect an OVS bridge to a bond, tagged, or IP-configured interface, because OVS needs full control of the physical interface without system interference. I had to take a NIC out of the bond and dedicate it to the external floating network.

2016-02-22 01:13:07 -0500 answered a question ml2 plugin ovs dvr external net binding br-ex to bond0.xxx tagged

In testing on CentOS 7, I found that it is not possible to connect an OVS bridge to a bond, tagged, or IP-configured interface, because OVS needs sole control of the physical interface without OS interference.

2016-02-16 09:54:43 -0500 received badge  Student (source)
2016-02-15 14:17:04 -0500 received badge  Popular Question (source)
2016-02-14 12:02:43 -0500 asked a question ml2 plugin ovs dvr external net binding br-ex to bond0.xxx tagged

My question is a simple one: how do I bind my external net(s) bridge, br-ex, to a tagged bond interface?

Quite a bit of confusion about this in this forum. The only way I have found is with ovs-vsctl:

ovs-vsctl add-port br-ex bond0.104

However, I think this should be an OpenStack ML2 plugin configuration option; it used to be handled with the Linux bridge agent:

physical_interface_mappings = external:bond0.104,external1:bond0.251,external2:bond0.252,external4:bond0.254

Network Node ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan,gre
mechanism_drivers = openvswitch,l2population
#extension_drivers = port_security

[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.101.11
enable_tunneling = True
bridge_mappings = external:br-ex,external1:br-ex1,external2:br-ex2,external4:br-ex4

[agent]
l2_population = True
tunnel_types = vxlan,gre
enable_distributed_routing = True
arp_responder = True

## Interfaces ##

My externals are in 198.18.x.x and my tenant VXLANs are in 10.0.101.x. This is the same on each of my controller, network, and compute nodes. I understand this can't be kept on the controller's neutron-server API, because physical interfaces are unique to each node in the cloud, but there should be an ml2_conf.ini [ovs] option for making these br-ex-to-physical bindings.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 18:03:73:1c:3b:4d brd ff:ff:ff:ff:ff:ff
3: p1p1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 18:03:73:1c:3b:4d brd ff:ff:ff:ff:ff:ff
11: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 62:8b:b1:fe:5e:48 brd ff:ff:ff:ff:ff:ff
14: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 3e:82:1f:26:ec:49 brd ff:ff:ff:ff:ff:ff
31: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP 
    link/ether 18:03:73:1c:3b:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::40d:e7ff:feb4:c1d3/64 scope link 
       valid_lft forever preferred_lft forever
32: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP 
    link/ether 18:03:73:1c:3b:4d brd ff:ff:ff:ff:ff:ff
    inet 10.0.100.11/24 brd 10.0.100.255 scope global bond0.100
       valid_lft forever preferred_lft forever
    inet6 fe80::1a03:73ff:fe1c:3b4d/64 scope link 
       valid_lft forever preferred_lft forever
33: bond0.101@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP 
    link/ether 18:03:73:1c:3b:4d brd ff:ff:ff:ff:ff:ff
    inet 10.0.101.11/24 brd 10 ...
2016-02-14 09:10:12 -0500 received badge  Editor (source)
2016-02-14 01:23:28 -0500 asked a question High Availability using Distributed Virtual Routing (DVR) with bond vlan tagged interfaces.

Hi, I'm following this guide: High Availability using Distributed Virtual Routing (DVR), http://docs.openstack.org/liberty/networking-guide/scenario-dvr-ovs.html

Most OpenStack guides reference physical interfaces; however, all of my OpenStack nodes use LACP 802.3ad bond interfaces with 802.1q tagged sub-interfaces for bandwidth aggregation and high availability on interface failures. I have configured all of my OpenStack external and tunnel interfaces as VLAN interfaces on bond0.xxx, and I have selected vxlan,gre for my tenant networks. I can see this would be an issue for tenant VLAN networks because they are already tagged, and possibly for flat networks. All of the agents come up on the controller, and I can ping the qrouter internal interface from inside the namespace, but not the qrouter gateway. I cannot launch an instance because there is no access to the gateway from the tenant network.

The guide also says the Linux bridge agent is needed for tenant security groups, but it looks like those configs are handled in ml2_conf.ini under [ovs].

Question: My bridge mappings (veths) are not being created on the network and compute nodes. Is there an option in ml2_conf.ini [ovs] for physical_interface_mappings? Could someone give me guidance on how to configure OpenStack ML2/OVS to bridge external networks to the bond0.252 VLAN interface?

Many thanks,

network and compute - ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan,gre
mechanism_drivers = openvswitch,l2population

[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.101.11
enable_tunneling = True
bridge_mappings = external:br-ex,external1:br-ex1,external2:br-ex2,external4:br-ex4
physical_interface_mappings = external:bond0.104,external1:bond0.251,external2:bond0.252,external4:bond0.254

[agent]
l2_population = True
tunnel_types = vxlan,gre
enable_distributed_routing = True

compute node - ip address show

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
    2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
        link/ether ac:16:2d:7b:91:20 brd ff:ff:ff:ff:ff:ff
    3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP qlen 1000
        link/ether ac:16:2d:7b:91:21 brd ff:ff:ff:ff:ff:ff
    4: eno3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP qlen 1000
        link/ether ac:16:2d:7b:91:21 brd ff:ff:ff:ff:ff:ff
    5: eno4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP qlen 1000
        link/ether ac:16:2d:7b:91:21 brd ff:ff:ff:ff:ff:ff
    14: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
        link/ether ca:41:33:d6:80:36 brd ff:ff:ff:ff:ff:ff
    15: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
        link/ether 92:54:3d:fe:75:48 brd ff:ff:ff:ff:ff:ff
    18 ...
2016-02-14 01:20:39 -0500 received badge  Enthusiast