laboshinl's profile - activity

2019-05-13 07:33:56 -0500 received badge  Necromancer (source)
2019-05-13 07:33:56 -0500 received badge  Self-Learner (source)
2018-07-18 01:47:19 -0500 received badge  Great Answer (source)
2016-04-08 19:34:04 -0500 received badge  Nice Answer (source)
2015-12-09 05:17:40 -0500 received badge  Taxonomist
2015-06-02 20:45:35 -0500 received badge  Good Question (source)
2014-12-01 04:01:47 -0500 received badge  Famous Question (source)
2014-10-20 12:10:12 -0500 received badge  Nice Answer (source)
2014-10-07 02:46:59 -0500 received badge  Notable Question (source)
2014-10-06 09:44:44 -0500 edited answer How to use cinder as glance default_store

Hello, I ran into the same problem when creating an image using the Cinder storage backend for Glance. Did you find a solution to your problem? My glance-api configuration is:

# Info to match when looking for cinder in the service catalog
# Format is : separated values of the form:
# <service_type>:<service_name>:<endpoint_type> (string value)
cinder_catalog_info = volume:cinder:internalurl

# Override service catalog lookup with template for cinder endpoint
# e.g. http://localhost:8776/v1/%(project_id)s (string value)
cinder_endpoint_template = http://bt1shlf7:8776/v1/%(tenant_id)s
# Region name of this node (string value)
os_region_name = RegionBie

# Location of ca certificates file to use for cinder client requests
# (string value)
#cinder_ca_certificates_file = <None>

# Number of cinderclient retries on failed http calls (integer value)
cinder_http_retries = 3

# Allow to perform insecure SSL requests to cinder (boolean value)
cinder_api_insecure = True
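
For completeness, the Cinder backend also has to be selected as the default store. A minimal sketch, assuming an Icehouse-era glance-api.conf where the store options still live in [DEFAULT] (option names moved in later releases):

[DEFAULT]
# Store new images on Cinder volumes by default
default_store = cinder
# The cinder store must also be in the list of enabled stores
known_stores = glance.store.cinder.Store,glance.store.filesystem.Store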
2014-09-11 03:16:15 -0500 received badge  Popular Question (source)
2014-09-09 01:33:28 -0500 answered a question What are the steps to create a second floating ip pool?

You need to configure a second L3 agent.

"but I'm not sure where to execute this (network node and/or computes?)"

You can install neutron-l3-agent on any node that is connected to the required 'public' network.

However, configuring the second agent on the same node as the first may be tricky, because you will need to create the corresponding init script manually.

So the simplest way is to install it on another node and configure it as shown in the guide:

For the first agent, invoke it with the following l3_agent.ini where handle_internal_only_routers is True.

handle_internal_only_routers = True
gateway_external_network_id = 2118b11c-011e-4fa5-a6f1-2ca34d372c35
external_network_bridge = br-ex

For the second (or later) agent, invoke it with the following l3_agent.ini where handle_internal_only_routers is False.

handle_internal_only_routers = False
gateway_external_network_id = e828e54c-850a-4e74-80a8-8b79c6a285d8
external_network_bridge = br-ex-2
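
If you do try to run both agents on one node, a minimal sketch of starting the second agent by hand (the file name is an assumption; the hand-written init script would wrap the same command):

# second agent reads its own ini file
neutron-l3-agent --config-file /etc/neutron/neutron.conf \
                 --config-file /etc/neutron/l3_agent-2.ini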
2014-09-08 17:27:54 -0500 commented question Setting up External network while installing openstack on ubuntu 12.04 server.

External network usually means a bridge br-ex with a port on a physical interface, e.g. eth0, where eth0 is your NIC connected to the 'public' network. E.g.

NETWORK="192.168.250.0/23"   - your public subnet (eth0 is connected to this subnet)
POOL_BEGIN="192.168.250.211" - unused addresses in this subnet
POOL_END="192.168.250.230"
GATEWAY="192.168.251.254"    - your pubic subnet gate

Just create a new net with the 'external' flag:

EXT_NET=$(neutron net-create --tenant-id $ADMIN_ID floating-pool --router:external=True | awk '/ id /{print $4}')

And add the corresponding subnet, a.k.a. the floating pool:

neutron subnet-create --tenant-id $ADMIN_ID --allocation-pool start=$POOL_BEGIN,end=$POOL_END --gateway $GATEWAY floating-pool $NETWORK --enable_dhcp=False
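
After that, a router gateway can be attached to the new pool, e.g. (the router name is hypothetical):

neutron router-gateway-set demo-router floating-pool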
2014-09-08 17:16:07 -0500 edited question Setting up External network while installing openstack on ubuntu 12.04 server.

Hi Everyone,

I'm an OpenStack newbie, following the installation of the OpenStack three-node architecture (Controller, Network and Compute) on Ubuntu 12.04 Server (all are VMware VMs).

But I'm stuck on the 7th module, "Creating initial networks", of the installation guide. Do I need a separate router for setting up the external network? Or how can I achieve this?

2014-09-08 16:54:58 -0500 edited question How to add physical port e.g. eth1 to br-ex with opendaylight?

Hi All,

I'm trying to configure Neutron with ODL. All seems to be OK except L3. Adding port eth1 to br-ex causes the connection to be lost. I can see incoming packets on eth1 but not on br-ex. I want an IP configured on br-ex, and without ODL this configuration was working. What am I doing wrong? Does ODL require some additional flow configuration? Should I use provider mappings or something instead of br-ex?
Any help would be appreciated.

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="none"
DNS1="8.8.8.8"
GATEWAY=192.168.251.254
IPADDR=192.168.250.210
NETMASK=255.255.254.0
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@centos7 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
ONBOOT="yes"
IPV6INIT=no
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
PROMISC=yes

[root@centos7 ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether a6:dc:0f:e9:92:e1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 4a:59:82:b7:65:00 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether a6:9c:09:ea:bf:5c brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether ea:e5:c0:68:bf:43 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8087:50ff:fe35:f254/64 scope link 
       valid_lft forever preferred_lft forever
7: br-tun: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 46:f2:a9:cd:93:43 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2c1c:8ff:fe06:77ee/64 scope link 
       valid_lft forever preferred_lft forever
8: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether e6:4d:4c:3f:58:41 brd ff:ff:ff:ff:ff:ff
    inet 192.168.250.210/23 brd 192.168.251.255 scope global br-ex
       valid_lft forever preferred_lft forever

[root@centos7 ~]# ovs-vsctl show

a9be1cad-a85f-48ce-be12-c0f09c719994
    Manager "tcp:10.0.0.1:6640"
        is_connected: true
    Bridge br-ex
        Controller "tcp:10.0.0.1:6633"
            is_connected: true
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
                type: external
    Bridge br-int
        Controller "tcp:10.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Controller "tcp:10.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
    ovs_version: "2.0.0"
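
A first diagnostic worth trying (a sketch using standard Open vSwitch tooling, not taken from the post): check whether ODL has installed any flows on br-ex, since an OpenFlow bridge with a connected controller and an empty flow table drops all traffic.

ovs-ofctl dump-flows br-ex
# temporary test only: make br-ex behave like a plain learning switch again
ovs-ofctl add-flow br-ex "priority=0,actions=normal"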
2014-08-23 01:53:37 -0500 received badge  Favorite Question (source)
2014-08-21 15:52:02 -0500 received badge  Good Answer (source)
2014-07-25 10:11:01 -0500 received badge  Nice Answer (source)
2014-04-01 08:42:21 -0500 received badge  Good Question (source)
2014-04-01 02:40:18 -0500 edited question neutron and external network access

Hello,

I’m setting up a test OpenStack cloud environment for our testing purposes. The cloud is set up with 3 VMs (Controller node, Network node and a Compute node) on a VMware server. In the cloud,

Network node has 3 NICs: eth0 (guest data), eth1 (management) and eth2 (public network).
Compute node has 2 NICs: eth0 (guest data) and eth1 (management).
Controller node has 2 NICs: eth0 (for external access) and eth1 (management).

And networking is set up with Neutron OVS with VLANs.

The physical network associated with the public network is labelled physnet1, and the physical network associated with the data network is labelled physnet2. 'br-ex' is the network device that provides external access, and br-ex is bridged to eth2 on the Network node.

-> Network node :

[root@neutron ~]# ovs-vsctl show 
4cdcb1a4-1f98-4687-b03e-bfd063fb507a
    Bridge br-ex
        Port "qg-0e7e9672-08"
            Interface "qg-0e7e9672-08"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
        Port phy-br-ex
            Interface phy-br-ex
    Bridge "br-eth0"
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
        Port "int-br-eth0"
            Interface "int-br-eth0"
    ovs_version: "1.11.0"

-> Compute node:

[root@compute1 ~]# ovs-vsctl show 
c0861526-be5b-4bee-a071-888864360942
    Bridge br-int
        Port "int-br-eth0"
            Interface "int-br-eth0"
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
    ovs_version: "1.11.0"

And I have the following configuration in /etc/neutron/plugin.ini on the Network and Compute nodes:

-> Network node:

[OVS]
vxlan_udp_port=4789
network_vlan_ranges=physnet2:101:120
tenant_network_type=vlan
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet1:br-ex,physnet2:br-eth0

-> Compute node:

[OVS]
vxlan_udp_port=4789
network_vlan_ranges=physnet2:101:120
tenant_network_type=vlan
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet2:br-eth0

I’m able to define the public network with physical_network physnet1, but when I create a private network for a tenant with physical_network physnet2, I get the following error message:


[root@controller ~(ks_admin)]# tenant=$(keystone tenant-list|awk '/BOSH/ {print $2}')
[root@controller ~(ks_admin)]# neutron net-create --tenant-id $tenant net01           --provider:network_type vlan           --provider:physical_network physnet2           --provider:segmentation_id 101

400-{u'NeutronError': {u'message': u'Invalid input for operation: Unknown provider:physical_network physnet2.', u'type': u'InvalidInput', u'detail': u''}}

On checking the openvswitch-agent.log on the network node and the compute node, I could see the mapping has been added for ‘physnet2’.

-> Network node:


2014-03-31 12:15:42.125 14093 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network physnet2 to bridge br-eth0
2014-03-31 12:15:43.667 14093 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network physnet1 to bridge br-ex
2014-03-31 12:15:46.243 14093 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Agent initialized successfully, now running...

-> Compute node:


2014-03-31 12:20:29.551 15387 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Mapping physical network physnet2 to bridge br-eth0
2014-03-31 12:20:33.153 15387 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Agent initialized successfully, now running...

Do I need to define a separate 'network_vlan_range' for physnet2 to get this working?

Also, I have the external network for floating IPs (public access) for the ...

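One thing worth double-checking (a hypothetical sketch, not quoted from the post): neutron-server validates provider attributes against its own plugin configuration on the controller, so physnet2 must also appear in network_vlan_ranges in the controller's plugin.ini, not only in the agents' copies:

[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1,physnet2:101:120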
2014-03-07 09:23:44 -0500 received badge  Nice Question (source)
2014-02-19 12:10:25 -0500 received badge  Famous Question (source)
2014-01-23 18:45:03 -0500 marked best answer Why can anybody access the VNC (SPICE) console on my compute nodes?

Executing vncviewer compute-node:5906 gives me a VM console. Is this normal? How can I avoid it?
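
A common mitigation (a sketch, assuming Grizzly-era nova.conf options) is to bind each compute node's VNC server to its management address rather than 0.0.0.0, and to firewall ports 5900-5999 from everything except the proxy:

# nova.conf on each compute node; 10.0.0.31 is a hypothetical management IP
vncserver_listen = 10.0.0.31
vncserver_proxyclient_address = 10.0.0.31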

2014-01-23 18:44:14 -0500 marked best answer Is it possible to configure neutron with VmWare ESXi as a hypervisor?

I want to use Neutron instead of nova-network with ESXi. Is it possible? If it is, could someone please explain how I should prepare ESXi? And where can documentation on this be found?

I've only found this http://docs.openstack.org/trunk/openstack-network/admin/content/nvp_plugin.html

2014-01-21 07:21:04 -0500 marked best answer Quantum (neutron) gre slow performance

I have a two-node Grizzly installation on CentOS 6.4: tn0 (192.168.250.20) runs all OpenStack services; tn1 (192.168.250.21) runs quantum-openvswitch-agent and compute.

My problem is that iperf between tn1 VMs and tn0 VMs, or any machines on my subnet (192.168.250.0/23), shows 140 Kb/s.

iperf between tn0 VMs and 192.168.250.0/23 shows 900 Mb/s.

Can someone please explain why performance is so slow? Is it an MTU problem or something? Here are my configs:

[root@tn0 ~]# quantum agent-list
+--------------------------------------+--------------------+-----------------+-------+----------------+
| id                                   | agent_type         | host            | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------+-------+----------------+
| 55225703-f6d8-4a15-8632-83c0dd45ff2c | DHCP agent         | tn1.stu.neva.ru | xxx   | True           |
| 9ffbe892-42cd-409a-a6b2-199946c4def0 | DHCP agent         | tn0.stu.neva.ru | :-)   | True           |
| b261ed57-441e-4489-9fb7-a66c7e410126 | Open vSwitch agent | tn0.stu.neva.ru | :-)   | True           |
| c0d20a83-a5b9-4b39-b063-752a664c3511 | Open vSwitch agent | tn1.stu.neva.ru | :-)   | True           |
| d5dd7b83-10d1-422a-a30a-231368d26e96 | L3 agent           | tn0.stu.neva.ru | :-)   | True           |
+--------------------------------------+--------------------+-----------------+-------+----------------+

[root@tn0 ~]# ifconfig
br-ex     Link encap:Ethernet  HWaddr 00:1E:67:04:28:0B  
          inet addr:192.168.250.20  Bcast:192.168.251.255  Mask:255.255.254.0
          inet6 addr: fe80::5c64:77ff:fe27:3260/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:1383023 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1177310 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:23644682530 (22.0 GiB)  TX bytes:313492896 (298.9 MiB)

br-int    Link encap:Ethernet  HWaddr C2:28:DC:EE:AB:47  
          inet6 addr: fe80::641b:4dff:fe87:aec3/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:18442 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:804564 (785.7 KiB)  TX bytes:468 (468.0 b)

br-tun    Link encap:Ethernet  HWaddr F6:5D:52:AA:3D:41  
          inet6 addr: fe80::a458:cfff:fec3:1dee/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:8180 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:353164 (344.8 KiB)  TX bytes:468 (468.0 b)

eth0      Link encap:Ethernet  HWaddr 00:1E:67:04:28:0B  
          inet6 addr: fe80::21e:67ff:fe04:280b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1546  Metric:1
          RX packets:1055723 errors:49 dropped:0 overruns:0 frame:49
          TX packets:880976 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:329252619 (313.9 MiB)  TX bytes:294162404 (280.5 MiB)
          Interrupt:16 Memory:b1a00000-b1a20000 

eth1      Link encap:Ethernet  HWaddr 00:1E:67:04:28:0A  
          inet addr:192.168.250.22  Bcast:192.168.251.255  Mask:255.255.254.0
          inet6 addr: fe80::21e:67ff:fe04:280a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1546  Metric:1
          RX packets:12007 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes ...
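
The classic culprit for this exact symptom is the MTU: GRE encapsulation adds header overhead, so full-size guest frames no longer fit on the wire. A sketch of the common workaround, advertising a smaller MTU to guests via DHCP (Grizzly-era quantum paths are assumptions):

# /etc/quantum/dhcp_agent.ini
dnsmasq_config_file = /etc/quantum/dnsmasq-quantum.conf

# /etc/quantum/dnsmasq-quantum.conf
# DHCP option 26 is the interface MTU; 1454 leaves room for the GRE header
dhcp-option-force=26,1454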
2014-01-02 05:26:16 -0500 received badge  Famous Question (source)
2013-12-27 14:31:31 -0500 received badge  Popular Question (source)
2013-12-27 14:31:31 -0500 received badge  Notable Question (source)
2013-12-26 11:25:17 -0500 received badge  Famous Question (source)
2013-12-10 05:28:42 -0500 edited question Local_ip for GRE tunnels

Hi, I am installing Neutron (which in my version, Grizzly, is still named Quantum).

On the Network node I have two NICs: one for the "Management network" and one for external internet access. On the compute node I have only one, for the "Management network". I got to the point in one guide where I need to provide a local_ip for both of the nodes:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP

My question is: do I need to create another interface on both of the machines and assign them private addresses?
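
For illustration only (a hypothetical sketch, not from the post): with a single NIC per node, local_ip can simply be each node's existing management address, so no extra interface is strictly required as long as the nodes can reach each other over that network:

# [ovs] section on the network node (hypothetical management IP)
local_ip = 10.10.10.1
# [ovs] section on the compute node
local_ip = 10.10.10.2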

2013-12-07 17:20:34 -0500 received badge  Notable Question (source)
2013-12-06 22:36:09 -0500 edited answer Is it possible to add routes to the floating pool?

I have only found this solution:

[root@localhost ~]# ip netns exec qrouter-22366760-ae5e-4588-ba71-e8244f5f8fd8 route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-70545ca6-4b
192.168.250.0   0.0.0.0         255.255.254.0   U     0      0        0 qg-cad8eab5-ef
0.0.0.0         192.168.250.1   0.0.0.0         UG    0      0        0 qg-cad8eab5-ef

[root@localhost ~]# ip netns exec qrouter-22366760-ae5e-4588-ba71-e8244f5f8fd8 ip route add 172.16.0.0/24 via 192.168.251.1

[root@localhost ~]# ip netns exec qrouter-22366760-ae5e-4588-ba71-e8244f5f8fd8 route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-70545ca6-4b
172.16.0.0      192.168.251.1   255.255.255.0   UG    0      0        0 qg-cad8eab5-ef
192.168.250.0   0.0.0.0         255.255.254.0   U     0      0        0 qg-cad8eab5-ef
0.0.0.0         192.168.250.1   0.0.0.0         UG    0      0        0 qg-cad8eab5-ef

In vm:

$ traceroute 172.16.0.1

traceroute to 172.16.0.1 (172.16.0.1), 30 hops max, 46 byte packets
 1  10.0.0.1 (10.0.0.1)  5.608 ms  1.185 ms  1.236 ms
 2  192.168.251.1 (192.168.251.1)  1.553 ms  3.173 ms  3.191 ms
 3  192.168.250.1 (192.168.250.1)  3.073 ms  1.732 ms  2.388 ms
 4  195.208.117.190 (195.208.117.190)  74.664 ms  26.545 ms  93.563 ms
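
In later releases the extraroute extension can persist such a route through the API instead of editing the namespace by hand (a sketch; the router name is hypothetical):

neutron router-update router1 --routes type=dict list=true \
    destination=172.16.0.0/24,nexthop=192.168.251.1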