
Priya's profile - activity

2015-04-24 04:39:10 -0600 received badge  Famous Question (source)
2015-04-23 07:06:00 -0600 received badge  Supporter (source)
2015-02-05 14:27:25 -0600 received badge  Notable Question (source)
2014-12-22 07:10:58 -0600 received badge  Popular Question (source)
2014-12-04 09:05:42 -0600 received badge  Notable Question (source)
2014-12-04 09:05:42 -0600 received badge  Famous Question (source)
2014-12-02 01:01:43 -0600 commented answer Can we create only 4094 networks using OVS?

I have not worked with two or more network (neutron) nodes. Based on the OpenStack design, a tunnel will be created on each network creation. If I have two neutron nodes and create a network, won't a tunnel be created from both network nodes to the associated compute nodes with the same tun_id?

2014-12-02 00:58:27 -0600 commented answer Can we create only 4094 networks using OVS?

Yes vthapar, you understood my question correctly. I saw your answers, and I have a doubt based on your answer.

2014-11-26 06:09:25 -0600 commented answer Can we create only 4094 networks using OVS?

There are only 4096 VLAN IDs. What will the mod_vlan_vid (the value we asked about in the previous question) be for the 4097th isolated network created?

2014-11-26 05:56:12 -0600 commented answer Can we create only 4094 networks using OVS?

If we have more than 4093 networks on one neutron node, how does Neutron work then?

2014-11-26 05:15:01 -0600 commented answer Can we create only 4094 networks using OVS?

I think you are aware that there is a Neutron node in an OpenStack setup. If one is present in your setup, I would like you to execute the command "ovs-ofctl dump-flows br-tun". You would see flows with "mod_vlan_vid=<value>". What does this value mean? What is the maximum value of mod_vlan_vid?

2014-11-26 03:59:36 -0600 answered a question Can we create only 4094 networks using OVS?

Hi,

I still have not got my point clarified.

In the neutron node, the table-2 flows of br-tun are as follows:

cookie=0x0, duration=41203.999s, table=2, n_packets=96352, n_bytes=12771283, idle_age=0, priority=1,tun_id=0x1 actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=41204.321s, table=2, n_packets=16933, n_bytes=3090137, idle_age=112, priority=1,tun_id=0x2 actions=mod_vlan_vid:3,resubmit(,10)
cookie=0x0, duration=41205.045s, table=2, n_packets=4, n_bytes=468, idle_age=10406, priority=1,tun_id=0x3 actions=mod_vlan_vid:4,resubmit(,10)

Please just let me know: what will actions=<value> be if tun_id=0x1002?

Even the site you referenced (http://assafmuller.com/2013/10/14/gre...) states that there is a one-to-one mapping of tunnel ID to VLAN ID. The quote from the Segregation section is as follows:

"Across compute nodes we use the GRE tunnel ID. As discussed previously, each tenant network is provisioned both a GRE tunnel ID and a locally significant VLAN tag. That means that incoming traffic with a GRE tunnel ID is converted to the correct local VLAN tag as can be seen in table 2. The message is then forwarded to br-int already VLAN tagged and the appropriate check can be made."

2014-11-18 23:46:51 -0600 asked a question Can ovs-dpdk process 80 byte packets at 1G (~1250000 packets/sec) speed?

Hi

I have the development version of ovs-dpdk installed on my system. I am trying to integrate it with OpenStack.

But before integrating, when I try to send 80-byte packets at 1G speed (~1250000 packets/sec), I find that there is nearly 20% to 50% packet loss at the reception (ovs-dpdk) end.
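For reference, the target rate comes from simple arithmetic (assuming the 80 bytes include the Ethernet FCS and that each frame also carries 20 bytes of preamble and inter-frame gap on the wire):

# Line-rate arithmetic for 80-byte frames on a 1 Gbit/s link.
# Assumes 20 bytes of preamble/SFD + inter-frame gap per frame on the wire.
link_bps = 1000000000
frame_bytes = 80
wire_overhead_bytes = 20
bits_per_frame = (frame_bytes + wire_overhead_bytes) * 8  # 800 bits
print(link_bps // bits_per_frame)                         # 1250000 packets/sec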

I am getting the error message "APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded". It looks like the packets in the buffer are not being transmitted completely. I tried changing the BURST_TX_RETRIES value from 16 to 64, but there is still some packet loss.

When I send erroneous packets of the same size at the same rate, the statistics show exactly 1250000 rx_errors packets.

Could you please let me know what else I am missing? What should I do to fix this issue?

The complete error message is listed below.

Error Message:

NIC Statistics

Port 0: 'a0:36:9f:40:4b:d4'
    rx_packets: 2298      rx_errors: 0          rx_bytes: 406006
    tx_packets: 67        tx_errors: 0          tx_bytes: 9569
Port 1: 'a0:36:9f:40:4b:d6'
    rx_packets: 5722490   rx_errors: 1380319    rx_bytes: 613600720
    tx_packets: 19        tx_errors: 0          tx_bytes: 2794

Vport Statistics

============= ============ ============ ============ ============
Interface      rx_packets   rx_dropped   tx_packets   tx_dropped
------------- ------------ ------------ ------------ ------------
vswitchd data         1768            0         1770            0
vswitchd ctrl        16268            0        16268            0
port1                 2298            0           67            0
port2              5722490            0           19            0
port4                   13            0      5722496            0
port3                   67            0         2296            0
============= ============ ============ ============ ============

Additional Switch Statistics

queue_overruns 0
pkt_mempool count 118818
ctl_mempool count 61900
APP: ovdk_datapath_flow_get(549): 0x7f8015ef88c8
APP: free_mbufs: freeing 1 mbufs from free ring OVDK01_Control_Free_Ring
APP: ovdk_datapath_flow_get(549): 0x7f8015ef8f08
APP: free_mbufs: freeing 1 mbufs from free ring OVDK01_Control_Free_Ring
APP: ovdk_datapath_flow_del(512): 0x7f8015ef9548
APP: Deleted flow
APP: free_mbufs: freeing 1 mbufs from free ring OVDK01_Control_Free_Ring
APP: ovdk_datapath_flow_get(549): 0x7f8015ef9b88
APP: free_mbufs: freeing 1 mbufs from free ring OVDK01_Control_Free_Ring
APP: ovdk_datapath_flow_new(439): 0x7f8016003bc8
APP: Added flow, flow handle '0x7F8040573B80'
PORT: rte_port_ivshm_reader_rx: Freeing 4 packets from IVSHM free_ring OVDK02_Packet_Free_Ring
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 10 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: free_mbufs: freeing 1 mbufs from free ring OVDK02_Control_Free_Ring
APP: send_burst: Attempted to enqueue 4 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port port4, but only 0 succeeded
APP: send_burst: Attempted to enqueue 32 mbufs to vhost port ... (more)

2014-11-12 05:35:51 -0600 commented answer IP is not getting assigned to OpenStack VM instance interface

Hi Hari, thanks for the reply. The problem was with NTP synchronization. Since the compute and neutron nodes' time was far ahead of the controller node's, the issue occurred.

Once NTP was properly synchronized, the issue was solved.
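For anyone hitting the same symptom: a quick way to spot the skew is to compare each node's clock against an NTP server before digging into Neutron itself. A minimal sketch using the third-party ntplib package (the server name is only an example):

# Minimal clock-skew check; assumes "pip install ntplib" on each node.
import ntplib

client = ntplib.NTPClient()
response = client.request('pool.ntp.org', version=3)  # example NTP server
print("offset from NTP time: %.3f seconds" % response.offset)
# A large offset on the compute/neutron nodes relative to the controller is
# what made the agents appear dead in "neutron agent-list" in our case.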

2014-09-24 04:10:17 -0600 received badge  Popular Question (source)
2014-09-19 10:04:59 -0600 asked a question Could anyone please help me debug the threads created in ceilometer?

I tried debugging ceilometer by setting breakpoints at its initial stage and launching the service in Eclipse, but I got an error message like "error: cannot switch to a different thread". On issuing the ps -aef command, there were no other processes forked for the service, and Eclipse shows that up to 6 dummy threads have been created. The errored output is listed below.

OUTPUT

pydev debugger: starting (pid: 31555)
('Executing file ', '/usr/local/bin/ceilometer-alarm-evaluator')
('arguments:', "['/usr/local/bin/ceilometer-alarm-evaluator']")
('Connecting to ', '127.0.0.1', ':', '42178')
('Connected.',)
receive cmd <-- CMD_VERSION 501 1 1.1
sending cmd --> CMD_VERSION 501 1 1.1
sending cmd --> CMD_THREAD_CREATE 103 2 <xml><thread name="pydevd.reader" id="-1"/></xml>
sending cmd --> CMD_THREAD_CREATE 103 4 <xml><thread name="pydevd.writer" id="-1"/></xml>
receive cmd <-- CMD_SET_BREAK 111 3 27 /opt/stack/ceilometer/ceilometer/cli.py 70 FUNCalarm_evaluator None
Added breakpoint:/opt/stack/ceilometer/ceilometer/cli.py - line:70 - func_name:alarm_evaluator (id: 27)
receive cmd <-- CMD_SET_PY_EXCEPTION 122 5 false;false;false;true;
Exceptions to hook : []
receive cmd <-- CMD_SET_PROPERTY_TRACE 124 7
receive cmd <-- CMD_IGNORE_THROWN_EXCEPTION_AT 132 9 REPLACE:
receive cmd <-- CMD_ENABLE_DONT_TRACE 133 11 true
receive cmd <-- CMD_RUN 101 13
sending cmd --> CMD_THREAD_CREATE 103 6 <xml><thread name="MainThread" id="pid31555_seq2"/></xml>
sending cmd --> CMD_THREAD_CREATE 103 8 <xml><thread name="Dummy-5" id="pid31555_seq1"/></xml>
sending cmd --> CMD_THREAD_SUSPEND 105 10 <xml><thread id="pid31555_seq1" stop_reason="111"><frame id="140264088344072" name="alarm_evaluator" file="/opt/stack/ceilometer/ceilometer/cli.py" line="70">"</frame><frame id="140264389151776" name="&lt;module&gt;" file="/usr/local/bin/ceilometer-alarm-evaluator" line="13">"</frame><frame id="22264912" name="run" file="/home/eclipse/plugins/org.python.pydev_3.6.0.201406232321/pysrc/pydevd.py" line="1373">"</frame><frame id="140264472976320" name="&lt;module&gt;" file="/home/eclipse/plugins/org.python.pydev_3.6.0.201406232321/pysrc/pydevd.py" line="1845">"</frame></thread></xml>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 346, in fire_timers
    timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
    cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/semaphore.py", line 121, in _do_acquire
    waiter.switch()
error: cannot switch to a different thread

Debugging Steps Used:

I have also tried several debugging approaches, listed as follows:

  • using pydevd
  • using pdb
  • using gdb (when I attach to the process and issue "info threads", only one thread is listed)
  • setting the following values in the Eclipse file "/home/eclipse/plugins/org.python.pydev_3.6.0.201406232321/pysrc/pydevd_constants.py": DEBUG_TRACE_LEVEL = 3 and DEBUG_TRACE_BREAKPOINTS = 3

Whatever I tried, I was unsuccessful in debugging the threads created in Python or switching to them. Is there any other way to do it? Please guide me.
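For what it is worth, ceilometer services run on eventlet greenthreads rather than OS threads, which is why gdb's "info threads" shows only a single thread and why pydevd cannot switch to them. A rough sketch of how the live greenthreads and their stacks can be dumped from inside the process (this relies on the greenlet package that eventlet itself uses; it is an illustration, not a ceilometer feature):

# Rough sketch: dump all live greenthreads and their stacks from inside
# the process (for example, pasted in at a breakpoint). greenlet is a
# dependency of eventlet, so it is already installed alongside ceilometer.
import gc
import traceback

import greenlet

def dump_greenthreads():
    for obj in gc.get_objects():
        if isinstance(obj, greenlet.greenlet) and obj.gr_frame is not None:
            print('--- greenlet %r ---' % obj)
            traceback.print_stack(obj.gr_frame)

dump_greenthreads()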

2014-08-19 01:04:12 -0600 received badge  Editor (source)
2014-08-19 01:04:12 -0600 edited question Can we create only 4094 networks using OVS?

On creating a network, each tap interface on br-int is associated with a VLAN on OVS. This indicates that there is a one-to-one mapping between VLAN IDs and GRE/VXLAN tunnel IDs.

In detail: when I create a network, a tap interface is created on br-int with a VLAN associated with it in the neutron node. So if I create 100 networks, then 100 tap interfaces will be created, each associated with its own VLAN ID. If this is the way it works, can we create only 4096 networks, since on creating the 4097th network the VLAN IDs may be exhausted?

Please reply with what will happen specifically on the neutron node, because this won't be an issue on the compute node.

2014-08-18 08:15:57 -0600 commented answer Can we create only 4094 networks using OVS?

Thanks again. Let's wait for the answer while researching.

2014-08-18 08:07:03 -0600 received badge  Good Question (source)
2014-08-18 07:50:58 -0600 commented answer Can we create only 4094 networks using OVS?

T u l, thanks for your answer. I found that your answer was specific to the compute node, but I need an answer specific to the neutron node. With a single controller (single database), multiple neutron nodes and multiple compute nodes, the available VLANs could easily be exhausted, since for each network created the TAP interface attached to br-int is associated with a VLAN. Hence we could have only ~4095 networks in a particular region. If there are 5000 tenants in this region, the networks will be exhausted and ~905 tenants won't have networks created for them.

2014-08-18 03:57:40 -0600 commented answer Can we create only 4094 networks using OVS?

I am not sure my question has been answered. Let me make the question quite simple.

When I create a network, a tap interface is created on br-int with a VLAN associated with it in the neutron node. So if I create 100 networks, then 100 tap interfaces will be created, each associated with its own VLAN ID. If this is the way it works, can we create only 4096 networks, since on creating the 4097th network the VLAN IDs may be exhausted?

2014-08-18 03:55:59 -0600 answered a question Can we create only 4094 networks using OVS?

I am not sure my question has been answered. Let me make the question quite simple.

When I create a network, a tap interface is created on br-int with a VLAN associated with it in the neutron node. So if I create 100 networks, then 100 tap interfaces will be created, each associated with its own VLAN ID. If this is the way it works, can we create only 4096 networks, since on creating the 4097th network the VLAN IDs may be exhausted?

2014-08-16 13:55:43 -0600 received badge  Famous Question (source)
2014-08-16 07:12:35 -0600 received badge  Nice Question (source)
2014-08-16 02:47:41 -0600 received badge  Notable Question (source)
2014-08-15 22:53:04 -0600 received badge  Popular Question (source)
2014-08-15 16:52:21 -0600 received badge  Student (source)
2014-07-04 04:09:48 -0600 received badge  Taxonomist
2014-07-01 08:34:55 -0600 received badge  Famous Question (source)
2014-06-25 01:18:22 -0600 received badge  Enthusiast
2014-06-18 09:08:31 -0600 answered a question IP is not getting assigned to OpenStack VM instance interface

Hi,

Thanks for your reply. :) From your reply, I understand that you saw DHCP discover messages when executing "ip netns exec qdhcp-xxx tcpdump".

But our issue is that we saw the discover messages on the br-tun bridge and not on the br-int bridge at the neutron node, so we did not receive any packets when executing "ip netns exec qdhcp-xxx tcpdump". Also, the agent-list status keeps toggling.

Today we re-installed OVS and restarted all the services on the controller, neutron and compute nodes. Now the IP is getting assigned to the VM and we can ping from the VM's console to its gateway. Toggling of the agent list still happens, but only rarely.

2014-06-18 06:21:51 -0600 received badge  Notable Question (source)
2014-06-18 00:07:12 -0600 received badge  Popular Question (source)
2014-06-17 05:17:09 -0600 answered a question Unexpected vif_type=binding_failed

If you are using the Icehouse release, please try out the following on the compute node.

Update the following in /etc/nova/nova.conf

[DEFAULT]

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver

Restart all of the node's services, including libvirt, and then try launching the VM.

2014-06-17 04:14:39 -0600 asked a question IP is not getting assigned to OpenStack VM instance interface

Issue : IP is not getting assigned to OpenStack VM instance interface

OpenStack deployment details: multi-node setup (OpenStack Networking) with a controller, a neutron node and a compute node, using OVS and a GRE overlay.

System Configuration: (Controller Node, Neutron Node and Compute Node)

OS : Ubuntu
Kernel Release : 14.04
Processor : x86_64
OVS Version : 2.0.1

Openstack Version details:

Controller node - icehouse Release, installed using apt-get
Neutron node - icehouse Release, installed using devstack
Compute node - icehouse Release, installed using apt-get

Issue Description:

We were able to launch the VM instance successfully via OpenStack Horizon. From Horizon, we could see that an IP is allocated for the VM, but the IP is not getting assigned to the VM's interface.

We noticed that the GRE tunnel is established successfully and that the VM instance is sending DHCP discover messages over the established tunnel. The following are some of the debugging steps we carried out.

Debug Step - 1 :

After launching the VMs, the "virsh list" command was executed on the compute node.

root@compute11:~# virsh list

Id Name State

2 instance-0000001b running

3 instance-0000001a running

4 instance-0000001c running

Debug Step - 2 :

In the neutron node, we noticed that the namespace has been created for the instance network.

root@neutron:~# ip netns
qdhcp-3a40b173-1c9b-438a-bf02-bdb00c09c724

We also verified that the TAP interface for the namespace has been created.

root@neutron:~# ip netns exec qdhcp-3a40b173-1c9b-438a-bf02-bdb00c09c724 ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

tap33bc68c4-18 Link encap:Ethernet  HWaddr fa:16:3e:9e:7a:26
          inet addr:182.0.0.2  Bcast:182.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe9e:7a26/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:74 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6212 (6.2 KB)  TX bytes:906 (906.0 B)

Note: We got the IP for the VM's interface one time and could not reproduce this behaviour subsequently.
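A quick way to confirm whether the DHCP discover actually reaches this tap device is to sniff inside the namespace. A rough sketch using scapy (the use of scapy and the script name are my own assumptions here; plain tcpdump does the same job), meant to be run with "ip netns exec qdhcp-3a40b173-1c9b-438a-bf02-bdb00c09c724 python dhcp_sniff.py":

# Rough sketch: watch for DHCP traffic on the dnsmasq tap inside the
# qdhcp namespace. BOOTP/DHCP uses UDP ports 67 (server) and 68 (client).
from scapy.all import sniff

def show(pkt):
    print(pkt.summary())

sniff(iface="tap33bc68c4-18", filter="udp and (port 67 or port 68)",
      prn=show, count=5)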

Debug Step- 3 :

While issuing the command "neutron agent-list", we found that the agents' "alive" status toggles between ":-)" and "xxx".

root@neutron:~# neutron agent-list
Starting new HTTP connection (1): 192.168.66.61
Starting new HTTP connection (1): controller
+--------------------------------------+--------------------+-----------+-------+----------------+
| id                                   | agent_type         | host      | alive | admin_state_up |
+--------------------------------------+--------------------+-----------+-------+----------------+
| 01745df9-f0d2-43df-a8e3-ff55bdf88b37 | Open vSwitch agent | neutron   | :-)   | True           |
| 30c10a47-9237-4f7a-9111-f78285ab9ff9 | Metadata agent     | neutron   | :-)   | True           |
| 878206c1-011c-4501-893d-447db9b683cb | Metering agent     | neutron   | xxx   | True           |
| c8f12c3c-8cc6-4fbe-acf3-62caed95e834 | L3 agent           | neutron   | :-)   | True           |
| e1712b4d-2ca6-49a5-8dab-072441b4af80 | DHCP agent         | neutron   | xxx   | True           |
| e6da27ae-f866-4653-ada6-04a89b801291 | Open vSwitch agent | compute11 | xxx   | True           |
+--------------------------------------+--------------------+-----------+-------+----------------+

As a workaround, we have updated the following values in the /etc/neutron/neutron.conf file and restarted the services on all nodes.

[DEFAULT]

agent_down_time = 90

[agent]

report_interval ... (more)