
OpenStack deployment on two nodes with one NIC

asked 2015-07-10 10:18:31 -0500 by Francesco, updated 2015-07-21 11:30:19 -0500

Hi all, although I have some knowledge of the OpenStack architecture, I am quite new to its actual deployment.

I am trying to install its basic components (Nova, Glance and Neutron) on a small cluster of two PCs, each with a single NIC available.

I am following this installation guide: http://docs.openstack.org/icehouse/install-guide/install/yum/content/ch_overview.html

as I have Fedora 21 installed on both machines.

I am trying to adapt the process, since that document assumes three nodes, each with more than one NIC. I would like to deploy the networking services on the controller node.

I thought creating virtual NICs could be a workaround, but that did not work.

Could anybody provide a short guide on how to modify the network configuration (e.g. creating virtual NICs, Linux bridges, etc.) while still using that document as a reference?

Thanks in advance, Francesco.

UPDATE 13/07/2015

I have switched to CentOS 7.1 and I still have some doubts regarding the network configuration.

Both machines (Controller/Network and Compute) are connected to the same private network, 10.20.151.0/24, which is routed to the Internet through the gateway 10.20.151.1. On both machines the NIC connected to this network is em1. DHCP is disabled on this network.

I would like to have the VM instances connected to a different network (e.g. 192.168.1.0/24), with IP addresses assigned through DHCP. I would also like each VM instance to be connected to the Internet and to be accessible via SSH from the Controller and/or Compute node.

Could you please help me to figure out how to change the settings in the answer file to achieve that?

Thanks very much, Francesco.

UPDATE 14/07/2015

After setting up the networks, subnets and router as advised, I cannot ping or SSH into any VM instance. Furthermore, from the VMs I cannot ping any external IP other than the virtual router (10.20.151.20) and the Network/Controller node.

On the Controller/Network:

ovs-vsctl show
91296ee0-4e36-420e-ba86-06995f4dba69
    Bridge br-int
        fail_mode: secure
        Port "qr-c15092f4-bb"
            tag: 1
            Interface "qr-c15092f4-bb"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tapf3d9ab30-3e"
            tag: 1
            Interface "tapf3d9ab30-3e"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "em1"
            Interface "em1"
        Port "qg-b9d315ce-54"
            Interface "qg-b9d315ce-54"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a14972f"
            Interface "vxlan-0a14972f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.20.151.48", out_key=flow, remote_ip="10.20.151.47"}
    ovs_version: "2.3.1"



cat /etc/sysconfig/network-scripts/ifcfg-em1 
#HWADDR=74:E6:E2:DA:3A:61
DEVICE="em1"
ONBOOT=yes
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE="br-ex"
#BOOTPROTO=none
#IPADDR=10 ...
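
For reference, on a single-NIC node the usual layout is that em1 carries no IP address and is enslaved to br-ex, while the host IP moves onto the bridge. Below is a minimal sketch of the pair of ifcfg files, assuming static addressing; the 10.20.151.48 address is taken from the VXLAN local_ip shown above, the other values are placeholders and not the settings elided from this post:

# /etc/sysconfig/network-scripts/ifcfg-br-ex (sketch)
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
# the host IP that used to sit on em1 now lives on the bridge
IPADDR=10.20.151.48
NETMASK=255.255.255.0
GATEWAY=10.20.151.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1 (sketch)
DEVICE=em1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
# no IPADDR here: the address is on br-ex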

Comments

Please run (and post in your next UPDATE):

# neutron subnet-list
# neutron subnet-show subnet-id   (for each entry of the output above)

dbaxps ( 2015-07-14 10:29:21 -0500 )

Hi, please see the 15/07/2015 update. They seem to be as you advised. Thanks again, Francesco.

Francesco ( 2015-07-15 03:00:42 -0500 )

Please source the credentials of the tenant that is running the VMs and run:

nova secgroup-list-rules default

dbaxps ( 2015-07-15 04:58:20 -0500 )
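
If that listing shows no ICMP or SSH rules, a likely first fix is to open them in the default security group. A minimal sketch using the nova client of that era, with the tenant's credentials sourced:

# allow ping from any source, then SSH on port 22
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0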

Please see the end of the latest update. Thank you!

Francesco ( 2015-07-15 05:08:27 -0500 )

The system is not functional. You should be able to ping the gateway from both the qdhcp and qrouter namespaces.

dbaxps ( 2015-07-15 08:27:13 -0500 )

3 answers


answered 2015-07-21 07:58:43 -0500 by Francesco, updated 2015-08-25 04:19:21 -0500

UPDATE 25/08/2015

The problem has now been solved. It wasn't related to any OpenStack configuration issue but to a security feature, called port security, enforced by the physical switch in my lab. Disabling it allowed my OpenStack deployment to work as expected.

END UPDATE

Hi, thanks everyone for the replies. I now have a clearer view of how OpenStack and Neutron work.

After a week of troubleshooting I have a "partial" answer to the networking issues reported above. It seems that ARP requests generated from within the qrouter namespace and forwarded out of the NIC of the controller/network node are somehow dropped by the physical switch interconnecting the PCs in my lab.
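
One way to observe this kind of drop (a sketch; em1 and the 10.20.151.1 gateway are taken from the configuration above, while the router ID is a placeholder) is to watch ARP on the physical NIC while pinging from the router namespace:

# on the controller/network node, watch ARP frames on em1
tcpdump -i em1 -n -e arp

# in another shell, generate traffic from the qrouter namespace
ip netns exec qrouter-<router-id> ping -c 3 10.20.151.1

If ARP requests leave em1 but no replies ever come back, the filtering is happening upstream of the node, i.e. on the switch.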

When I connect the two OpenStack nodes directly with a cable I can ping both the controller/network node and the compute node (otherwise I can ping only the former).

Does anybody have any idea of what kind of filtering is applied by the physical switch?

Thanks very much, Francesco


Comments

What is your setup, ML2&OVS&VXLAN or ML2&OVS&VLAN? Please post ml2_conf.ini from the Controller and /etc/neutron/plugins/../../ovs_neutron_plugin.ini.

dbaxps ( 2015-07-21 08:36:35 -0500 )

"When I connect the two OpenStack nodes directly using a cable I can ping both the controller/network node and the compute node"

dbaxps ( 2015-07-21 08:39:13 -0500 )

Do you use a normal network cable for that?

dbaxps ( 2015-07-21 08:40:15 -0500 )

Hi, please find my latest update in the original question.

Yes, I use a normal cable when connecting the two PCs. I should also mention that I am having the same issue with a single-node all-in-one RDO deployment.

Thanks very much, Francesco

Francesco ( 2015-07-21 11:32:51 -0500 )

"Yes, I use a normal cable when connecting the two PCs" - that is evidence your network background requires some improvement. Usual cables require switch. Otherwise, you have to press out a cable in a special way.

dbaxps gravatar imagedbaxps ( 2015-07-21 12:42:27 -0500 )edit

answered 2015-07-20 05:54:45 -0500 by itlinux

I sent you an email; you need to use the right router info.

Example: ip netns lists the namespaces, e.g.

qdhcp-3f1137c9-cac5-47cc-af27-54624386d3fd
qrouter-eeb632dc-ff88-43bb-8845-e0da17aa8da7
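
As a sketch of how those names are typically used for diagnostics (the IDs are the ones from the example above and will differ on another deployment):

# check the router namespace's interfaces and its reachability to the external gateway
ip netns exec qrouter-eeb632dc-ff88-43bb-8845-e0da17aa8da7 ip addr
ip netns exec qrouter-eeb632dc-ff88-43bb-8845-e0da17aa8da7 ping -c 3 10.20.151.1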


answered 2015-07-10 14:48:33 -0500 by dbaxps, updated 2015-07-15 07:12:42 -0500

UPDATE 07/15/2015

Please run on the Network Node:
ip netns | grep private-network-id 
ip netns | grep router-id 
ip netns exec qdhcp-private-network-id route -n 
ip netns exec qrouter-router-id ifconfig 
ip netns exec qdhcp-private-network-id ping 10.20.151.1 
ip netns exec qrouter-router-id ping 10.20.151.1

UPDATE 07/14/2015

10.20.151.0/24 is supposed to serve as your management, external and VTEP LAN all at once.
192.168.1.0/24 is supposed to be the private network (say, for tenant demo).
Create a Neutron router with its gateway on 10.20.151.0/24 and an interface on 192.168.1.0/24:
# . keystonerc_admin
# neutron net-create ext --router:external  ( Kilo syntax )
# neutron subnet-create   ext  10.20.151.0/24 --enable_dhcp=False --allocation-pool  \
  start=10.20.151.10,end=10.20.151.200 --gateway 10.20.151.1
#  . keystonerc_demo
# neutron router-create router1
# neutron router-gateway-set router1 ext
# neutron net-create int
# neutron subnet-create int 192.168.1.0/24  --dns_nameservers list=true 8.8.8.8
Created a new subnet:
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr             | 192.168.1.0/24                                   |
| dns_nameservers  | 8.8.8.8                                          |
| enable_dhcp      | True                                             |
| gateway_ip       | 192.168.1.1                                      |
| host_routes      |                                                  |
| id               | 04b95f13-cc77-4f9a-8206-7f6ed183461d             |
| ip_version       | 4                                                |
| name             |                                                  |
| network_id       | 1a4157a6-5cf2-46e3-bdea-1533c8f54cdf             |
| tenant_id        | 2c845a6ad20e45ccb0b045cee27a9661                 |
+------------------+--------------------------------------------------+
  # neutron router-interface-add router1 04b95f13-cc77-4f9a-8206-7f6ed183461d
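
As a follow-up sketch, an instance on the int network can then be reached over SSH from outside by attaching a floating IP from ext (the instance name myvm and the floating address are placeholders):

# . keystonerc_demo
# neutron floatingip-create ext
# nova floating-ip-associate myvm <floating-ip-from-previous-output>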

END UPDATE

The first issue is F21: it is not an Ubuntu flavor, and RDO support on F21 seems to be broken.
You had better install CentOS 7.1 (1503) on both boxes.
Then run on the Controller&&Network Node:

# yum update -y
# yum install -y https://rdoproject.org/repos/rdo-release.rpm
# yum install -y openstack-packstack

On Compute Node

# yum update -y
# yum install -y https://rdoproject.org/repos/rdo-release.rpm

Pick up only the answer file from http://bderzhavets.blogspot.com/2014/... and make sure it has:

CONFIG_NOVA_NETWORK_PUBIF=em1
CONFIG_NOVA_NETWORK_PRIVIF=em1
CONFIG_NEUTRON_OVS_TUNNEL_IF=em1
CONFIG_KEYSTONE_SERVICE_NAME=httpd
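
If the linked answer file is not to hand, one can typically be generated and those directives set in place. A sketch; the CONFIG_COMPUTE_HOSTS line is an extra assumption, taking 10.20.151.47 from the VXLAN remote_ip shown in the question as the Compute node's address:

# packstack --gen-answer-file=TwoNodeVXLAN.txt
# sed -i 's/^CONFIG_NOVA_NETWORK_PUBIF=.*/CONFIG_NOVA_NETWORK_PUBIF=em1/' TwoNodeVXLAN.txt
# sed -i 's/^CONFIG_NOVA_NETWORK_PRIVIF=.*/CONFIG_NOVA_NETWORK_PRIVIF=em1/' TwoNodeVXLAN.txt
# sed -i 's/^CONFIG_NEUTRON_OVS_TUNNEL_IF=.*/CONFIG_NEUTRON_OVS_TUNNEL_IF=em1/' TwoNodeVXLAN.txt
# sed -i 's/^CONFIG_KEYSTONE_SERVICE_NAME=.*/CONFIG_KEYSTONE_SERVICE_NAME=httpd/' TwoNodeVXLAN.txt
# sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=10.20.151.47/' TwoNodeVXLAN.txt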

Then run:

packstack --answer-file=TwoNodeVXLAN.txt

This will result in an RDO Kilo two-node install - Controller&&Network + Compute (ML2&OVS&VXLAN).


Comments

Hi, thanks for your answer. Could you please provide some details on how the network will be set up? I see I need to change the answer file to use one interface only. Thanks again!

Francesco ( 2015-07-11 02:52:36 -0500 )

The directives above will make the external, management and VXLAN tunnel networks use the same Ethernet interface, eth0 (or em1, or enp3s0, or whatever). The name should be the same on both boxes.

dbaxps ( 2015-07-11 04:51:08 -0500 )

As a POC it's acceptable; for a production environment you need to plug an eth1 into the boxes for the VXLAN endpoints.

dbaxps ( 2015-07-11 04:53:20 -0500 )

Thanks for the answer. Do you know if that has actually been tested on two single-NIC nodes? It does not seem to work for me... Is there anything else I have to change in the answer file? What changes to the network interface files are needed? Thanks and regards, Francesco.

Francesco ( 2015-07-13 10:36:32 -0500 )

I tested it on two single-NIC nodes myself. Switch to CentOS 7.1; F21 won't work for you.

dbaxps ( 2015-07-13 10:53:15 -0500 )
