Kiruthiga's profile - activity

2017-08-11 03:31:29 -0600 received badge  Famous Question (source)
2017-08-03 12:27:12 -0600 received badge  Famous Question (source)
2017-07-18 00:19:36 -0600 received badge  Famous Question (source)
2017-07-18 00:19:36 -0600 received badge  Notable Question (source)
2017-06-09 15:35:05 -0600 received badge  Notable Question (source)
2017-04-12 03:55:22 -0600 received badge  Enthusiast
2017-04-11 05:47:09 -0600 received badge  Popular Question (source)
2017-04-10 05:35:12 -0600 asked a question Failed to bind port for packstack instance!

Hello All,

I have integrated my all-in-one Packstack (Liberty) deployment with OpenDaylight (Boron SR2).

After the integration with ODL, spawning an instance fails with the error "No valid host found". When I checked the log files, I found that port binding fails for the instance during creation.

Error log for reference:

Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers Traceback (most recent call last):
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 720, in _bind_port_level
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers     driver.obj.bind_port(context)
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/opendaylight/driver.py", line 92, in bind_port
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers     self.odl_drv.bind_port(context)
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers AttributeError: 'OpenDaylightDriver' object has no attribute 'bind_port'
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers [req-26bdd1fb-5ab2-4c7c-8d0b-ca7dd6de9e09 4e6bda67dadd4eeca566be36a38ebbf4 73127b504d9543fdb6660480ab159aad - - -] Mechanism driver opendaylight failed in bind_port
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers Traceback (most recent call last):
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 720, in _bind_port_level
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers     driver.obj.bind_port(context)
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/opendaylight/driver.py", line 92, in bind_port
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers     self.odl_drv.bind_port(context)
Apr 10 14:22:39 packstack neutron-server: 2017-04-10 14:22:39.522 10418 ERROR neutron.plugins.ml2.managers AttributeError: 'OpenDaylightDriver' object has no attribute 'bind_port'
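
In case it helps, both the driver file named in the traceback and the ML2 configuration can be inspected directly. These are generic read-only checks (paths taken from the traceback above), not a confirmed fix:

grep -n "bind_port" /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/opendaylight/driver.py   # where bind_port is defined and called in the ODL driver
grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini   # which mechanism drivers neutron is configured to load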

Please help!

Regards, Kiruthiga

2016-11-30 08:45:43 -0600 received badge  Notable Question (source)
2016-11-18 01:55:41 -0600 received badge  Popular Question (source)
2016-11-14 08:06:48 -0600 received badge  Editor (source)
2016-11-14 07:55:08 -0600 asked a question ICMP packet flooding on OpenStack

Hello everyone,

I have a three-node OpenStack setup. OpenStack version: Kilo

OpenStack is integrated with the OpenDaylight controller. ODL version: Lithium SR4

After the integration, connectivity was fine and there were no issues. But once multiple networks, routers, and instances are created, quite a few issues pop up:

  1. From the router namespace, I am unable to ping the external network.

  2. While accessing floating IPs, there is a flood of duplicate packets.

  3. Internal connectivity between instances is lost.

  4. Instances are not getting IP addresses; DHCP requests from the compute node's tap devices are not reaching the neutron server.

  5. Multiple warning messages in the OVS logs: |WARN|system@ovs-system: lost packet on port channel 24 of handler 4

  6. No flows on the br-int bridge (checked with the commands shown below). However, the OVS logs show flows being added/modified once the services are restarted: |connmgr|INFO|br-int<->tcp:<ip>:6653: 580 flow_mods in the 2 s starting 10 s ago (580 modifications)
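
For reference, the flow and controller checks mentioned in point 6 were along these lines (a sketch; the OpenFlow version flag may vary with the setup):

ovs-ofctl -O OpenFlow13 dump-flows br-int   # flows ODL should have pushed to br-int
ovs-vsctl get-controller br-int             # which controller br-int points at
ovs-vsctl show | grep is_connected          # is the connection to ODL up?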

Can anyone please help in resolving these issues?

Thanks, Kirthi

2016-11-08 08:03:52 -0600 received badge  Nice Question (source)
2016-08-18 02:44:19 -0600 received badge  Favorite Question (source)
2016-07-15 07:44:17 -0600 received badge  Popular Question (source)
2016-07-15 07:41:02 -0600 received badge  Famous Question (source)
2016-07-15 07:06:04 -0600 received badge  Notable Question (source)
2016-07-15 07:06:04 -0600 received badge  Popular Question (source)
2016-07-06 20:07:32 -0600 received badge  Famous Question (source)
2016-06-10 04:14:45 -0600 received badge  Notable Question (source)
2016-06-09 14:30:10 -0600 received badge  Student (source)
2016-06-09 09:59:39 -0600 received badge  Popular Question (source)
2016-06-09 06:37:38 -0600 asked a question VM doesn't get IP with VXLAN and Open vSwitch

Hi everyone,

I have a three-node OpenStack Kilo setup. It was working fine with VXLAN tunnels and OVS version 2.3.1. We have now changed OVS to v2.4.90 to support Service Function Chaining via NSH.

The only changes made were in the neutron plugin.ini file, as described in http://www.qlogic.com/solutions/Documents/UsersGuide_OpenStack_VXLAN.pdf

OVS 2.4.90 was pulled from the repo https://github.com/pritesh/ovs/tree/nsh-v8

After switching to OVS 2.4.90, newly created instances are not getting IP addresses from DHCP.

By checking tcpdump on the tap device, we could see that DHCP requests from the VM reach both the neutron and compute nodes. We can also see the response from the DHCP agent (on the neutron node), and the same response is visible on the physical NIC of the compute host. However, the responses do not reach br-tun on the compute node. We verified this by mirroring the traffic on interface 'vxlan-0a00180c' of the br-tun bridge: we could see the VM's DHCP request but not the response.
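
The traces above were captured with commands roughly like the following (interface and network names are placeholders):

tcpdump -ni <tunnel-nic> udp port 4789   # VXLAN-encapsulated traffic on the compute node's tunnel NIC
ip netns exec qdhcp-<network-id> tcpdump -ni <tap-device> port 67 or port 68   # DHCP traffic at the agent on the neutron node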

ovs-vsctl show on neutron:

d9bbdebf-4fdf-4c8f-b23c-d2145b3dedc5
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00180c"
            Interface "vxlan-0a00180c"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.24.10", out_key=flow, remote_ip="10.0.24.12"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp4s0f0"
            Interface "enp4s0f0"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap9cbefa2c-a7"
            tag: 1
            Interface "tap9cbefa2c-a7"
                type: internal
        Port "qg-b1bccf97-22"
            tag: 2
            Interface "qg-b1bccf97-22"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "qr-a709cc96-12"
            tag: 1
            Interface "qr-a709cc96-12"
                type: internal
    ovs_version: "2.4.90"

ovs-vsctl show on compute:

299560f0-1f08-4900-a2fd-ae6ec25d3d1a
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00180a"
            Interface "vxlan-0a00180a"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.24.12", out_key=flow, remote_ip="10.0.24.10"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "qvobba9bbf8-8b"
            tag: 1
            Interface "qvobba9bbf8-8b"
        Port "qvo030dae38-70"
            tag: 1
            Interface "qvo030dae38-70"
        Port "qvobfd22730-65"
            tag: 1
            Interface "qvobfd22730-65"
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.4.90"

Any information regarding a VXLAN-with-OVS setup would be of great help.

Thanks in advance.

Regards, Kiruthiga

2016-06-02 04:38:06 -0600 asked a question Unable to connect to external network after ODL and OpenStack integration

Hi everyone,

I have a three-node OpenStack (Kilo) setup, and it has been integrated with OpenDaylight (Lithium).

The network node has three interfaces, connected to the management, instance tunnel, and external networks respectively. The external network interface was used to create the OVS br-ex bridge.

Before the integration with OpenDaylight, the external network was working fine and I was able to access the instances using their assigned floating IPs. But the connectivity fails after the integration.

For the integration I followed this blog: http://www.hellovinoth.com/openstack-...

The router's gateway interface on the external network is always DOWN, and the assigned floating IPs are not reachable. However, the instances do get internal IP addresses and can reach each other.
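
A few read-only checks on the network node, in case the output helps narrow this down (router ID and gateway IP are placeholders):

ip netns exec qrouter-<router-id> ip addr show   # is the qg- interface UP and carrying the gateway IP?
ip netns exec qrouter-<router-id> ping -c 3 <external-gateway-ip>
ovs-vsctl list-ports br-ex   # is the external NIC still attached to br-ex?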

Please help me in resolving the external network connectivity issue.

Thanks in advance!

Regards, Kirthi

2016-06-02 04:32:49 -0600 asked a question Packet not utilizing SFC flows as expected in ODL SFC

Hi everyone,

I have a simple service function chain set up in my environment. But after deploying my Rendered Service Path (RSP), the flows created on the SFF are not used as expected.

I have a three-node OpenStack architecture, integrated with OpenDaylight.

OpenStack version: Kilo
OpenDaylight version: Lithium

SFC setup in OpenStack:

  Node A (192.168.100.5)
  Node B (192.168.100.6)
  SF1 (192.168.100.10)
  SF2 (192.168.100.11)
  SFF (bridge br-sff on OVS)

All the instances are connected to same OVS via br-int bridge.

I have created a Service Function Forwarder(SFF) in OVS switch- br-sff.

The Service Function Path (SFP) is created such that ICMP packets from Node A to Node B should flow via SF1 and SF2. The SFP was deployed and the RSP was created.

ACL & Classifier are created with appropriate source and destination IP address. Classifier is mapped to the SFF(br-sff).

But when pinging from Node A to Node B, the ping works, yet the packets do not use the flows created on bridge br-sff (the SFF).
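
One way to confirm that the flows are never hit is to watch their packet counters while the ping runs (a sketch; OpenFlow 1.3 is assumed since ODL programs the bridge):

ovs-ofctl -O OpenFlow13 dump-flows br-sff   # inspect the RSP flows and their n_packets counters
watch -n 1 'ovs-ofctl -O OpenFlow13 dump-flows br-sff | grep -v n_packets=0'   # show only flows actually matching traffic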

Any help would be appreciated. Thanks in advance.

Regards, Kirthi

2015-10-09 12:58:09 -0600 received badge  Notable Question (source)
2015-10-09 12:58:09 -0600 received badge  Popular Question (source)
2015-10-09 12:58:09 -0600 received badge  Famous Question (source)
2015-08-21 01:52:37 -0600 asked a question OpenStack Kilo: Instance is not assigned an IP address!

Hi,

I am installing OpenStack Kilo in a CentOS environment, using the neutron architecture with one controller, one compute, and one network node. I have completed the installation up to the dashboard. This is the document I was following for the installation: http://docs.openstack.org/kilo/instal...

When I create instances, they are created without any error, but no IP address is actually allocated to the VM. It looks fine in the dashboard, but when I check with the ifconfig command inside the instance, no IP address is found, and the IP address shown on the dashboard is not pingable.

The log I see in the dashboard for the VM:

Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
Usage: /sbin/cirros-dhcpc <up|down>
No lease, failing
WARN: /etc/rc3.d/S40-network failed
cirros-ds 'net' up at 181.21
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 181.23. request failed
failed 2/20: up 183.28. request failed
failed 3/20: up 185.29. request failed
failed 4/20: up 187.31. request failed
failed 5/20: up 189.33. request failed
failed 6/20: up 191.35. request failed
failed 7/20: up 193.36. request failed
failed 8/20: up 195.38. request failed
failed 9/20: up 197.40. request failed
failed 10/20: up 199.42. request failed
failed 11/20: up 201.44. request failed
failed 12/20: up 203.45. request failed
failed 13/20: up 205.47. request failed
failed 14/20: up 207.49. request failed
failed 15/20: up 209.51. request failed
failed 16/20: up 211.52. request failed
failed 17/20: up 213.54. request failed
failed 18/20: up 215.56. request failed
failed 19/20: up 217.58. request failed
failed 20/20: up 219.59. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 221.61. searched: nocloud configdrive ec2
failed to get instance-id of datasource
Starting dropbear sshd: OK
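
In case it helps answerers, these are the checks I can run on the network node (network ID and device names are placeholders):

neutron agent-list      # are the DHCP/L3/OVS agents reported alive?
ip netns list           # a qdhcp-<network-id> namespace should exist for the tenant network
ps aux | grep dnsmasq   # is a dnsmasq process serving the tenant subnet?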

I need help resolving this issue. Kindly help!