
anna_g's profile - activity

2015-07-22 04:37:43 -0500 received badge  Notable Question (source)
2015-07-22 04:37:43 -0500 received badge  Famous Question (source)
2015-01-13 05:23:37 -0500 received badge  Famous Question (source)
2014-12-22 01:52:53 -0500 received badge  Famous Question (source)
2014-12-17 03:20:01 -0500 received badge  Famous Question (source)
2014-12-16 03:55:14 -0500 received badge  Notable Question (source)
2014-12-16 03:31:57 -0500 commented question devstack installation problem

Were you able to find any solution to that?

2014-12-16 00:36:12 -0500 received badge  Popular Question (source)
2014-12-15 09:52:04 -0500 received badge  Student (source)
2014-12-15 07:10:49 -0500 asked a question devstack installation problem

Hello, I am trying to run stack.sh and it is failing with an error:

ERROR: openstackclient.shell Exception raised: python-neutronclient 2.3.5.7.g8a2349a is installed but python-neutronclient<3,>=2.3.5 is required by []

Does anybody know how to solve this error? It occurred today; I did a full installation on Friday and the script completed successfully.
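
One way to double-check the conflict (a sketch, assuming pip was the installer stack.sh used):

# Show the version of python-neutronclient that pip actually installed
pip show python-neutronclient

# The requirement is python-neutronclient<3,>=2.3.5; a dev/post-release
# version string like 2.3.5.7.g8a2349a can trip that comparison, so one
# thing to try is forcing a released version back in:
pip install --upgrade "python-neutronclient>=2.3.5,<3"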

Thank you in advance

2014-12-10 04:16:37 -0500 commented question cannot ping vm from outside network

I tried the curl but the VM can't connect to the host. I edited the routing table inside the VM, adding the default via 10.24.100.1, but still no result.

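For reference, the route change described in that comment corresponds to something like this (a sketch; the interface name eth0 is an assumption):

# inside the guest: add a default route via the external gateway
ip route add default via 10.24.100.1 dev eth0

# verify the routing table afterwards
ip route
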
2014-12-10 04:12:58 -0500 received badge  Enthusiast
2014-12-10 02:59:51 -0500 received badge  Notable Question (source)
2014-12-09 22:27:13 -0500 received badge  Popular Question (source)
2014-12-09 15:06:19 -0500 commented question cannot ping vm from outside network

Yes, I have added both rules, for ICMP and SSH.

2014-12-09 10:47:51 -0500 asked a question cannot ping vm from outside network

Hello, I have a multinode setup with OpenStack Juno and I am having trouble connecting my VM to the outside world. I am able to ping the VM from inside the qdhcp namespace but not from the qrouter. It is also unreachable from the host machine. I have a public network 10.24.100.0/24 and a private one 10.0.0.0/24. Some configuration details follow. The qrouter namespace configuration:

ip netns exec qrouter-cb225d1b-266e-4df9-8ba2-29e735eea83c ip route
default via 10.24.100.1 dev qg-19c3af60-3a
10.0.0.0/24 dev qr-38afb10a-35  proto kernel  scope link  src 10.0.0.1
10.24.100.0/24 dev qg-19c3af60-3a  proto kernel  scope link  src 10.24.100.2
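
(The namespace-level ping tests mentioned above look roughly like this; <vm-fixed-ip> and <network-id> are placeholders for values not shown here:)

# works: ping the instance from the DHCP namespace of the private network
ip netns exec qdhcp-<network-id> ping -c 3 <vm-fixed-ip>

# fails: the same ping from the router namespace
ip netns exec qrouter-cb225d1b-266e-4df9-8ba2-29e735eea83c ping -c 3 <vm-fixed-ip>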



ip netns exec qrouter-cb225d1b-266e-4df9-8ba2-29e735eea83c ip a
13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
14: qr-38afb10a-35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:88:06:8c brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-38afb10a-35
    inet6 fe80::f816:3eff:fe88:68c/64 scope link
       valid_lft forever preferred_lft forever
15: qg-19c3af60-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:46:00:25 brd ff:ff:ff:ff:ff:ff
    inet 10.24.100.2/24 brd 10.24.100.255 scope global qg-19c3af60-3a
    inet6 fe80::f816:3eff:fe46:25/64 scope link
       valid_lft forever preferred_lft forever
ip netns exec qrouter-cb225d1b-266e-4df9-8ba2-29e735eea83c route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.24.100.1     0.0.0.0         UG    0      0        0 qg-19c3af60-3a
10.0.0.0        *               255.255.255.0   U     0      0        0 qr-38afb10a-35
10.24.100.0     *               255.255.255.0   U     0      0        0 qg-19c3af60-3a

And finally the iptables NAT rules:

ip netns exec qrouter-cb225d1b-266e-4df9-8ba2-29e735eea83c iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-POSTROUTING ! -i qg-19c3af60-3a ! -o qg-19c3af60-3a -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -s 10.0.0.0/24 -j SNAT --to-source 10.24.100.2
-A neutron-postrouting-bottom -j neutron-l3-agent-snat
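
A couple of external-path checks that go with the NAT rules above (a sketch, using the interface and gateway names from the dumps):

# can the router namespace reach the external gateway over qg-19c3af60-3a?
ip netns exec qrouter-cb225d1b-266e-4df9-8ba2-29e735eea83c ping -c 3 10.24.100.1

# watch the external interface while pinging 10.24.100.2 from the host,
# to see whether the ICMP requests reach the router at all
ip netns exec qrouter-cb225d1b-266e-4df9-8ba2-29e735eea83c tcpdump -n -i qg-19c3af60-3a icmp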

And the ovs-vsctl show output:

c2e3f581-5694-4204-b5fe-1445f3c10290
    Bridge br-ex
        Port "qg-19c3af60-3a"
            Interface "qg-19c3af60-3a"
                type: internal
        Port "eth1"
            Interface "eth1"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "tap8c02fa6a-14"
            tag: 1
            Interface "tap8c02fa6a-14"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-38afb10a-35"
            tag: 1
            Interface "qr-38afb10a-35"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch ...
2014-12-05 12:10:56 -0500 received badge  Popular Question (source)
2014-11-17 08:03:44 -0500 asked a question migration issue - vm stuck in resize-migrating

Hello,

I have a setup with three compute nodes and I am trying to migrate a VM from one to the other. The whole process gets stuck in the resize step. In detail, the nova list result is:

+--------------------------------------+-------+--------+------------------+-------------+--------------------+
| ID                                   | Name  | Status | Task State       | Power State | Networks           |
+--------------------------------------+-------+--------+------------------+-------------+--------------------+
| ed7fb028-01f7-40bb-b5ab-0a65795b65be | test4 | RESIZE | resize_migrating | Running     | public=10.0.100.7  |
+--------------------------------------+-------+--------+------------------+-------------+--------------------+

I see no error in the logs, either in the scheduler or in nova. In particular, the message in n-sch is:

DEBUG nova.scheduler.filter_scheduler [req-10509121-84bc-4131-9e9d-bdc057a7d196 admin admin] Weighed [WeighedHost [host: parapluie-35-kavlan-6.rennes.grid5000.fr, weight: 1.0], WeighedHost [host: parapluie-8-kavlan-6.rennes.grid5000.fr, weight: 0.989286685778]] from (pid=14977) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:274

Any idea how to solve this?
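
For reference, a couple of commands that show where the migration is sitting (a sketch; nova migration-list normally needs admin rights):

# list migrations and their current status
nova migration-list

# full details of the stuck instance, including its task_state
nova show ed7fb028-01f7-40bb-b5ab-0a65795b65be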

2014-10-31 10:03:06 -0500 received badge  Self-Learner (source)
2014-10-31 10:03:06 -0500 received badge  Teacher (source)
2014-10-31 10:00:11 -0500 received badge  Notable Question (source)
2014-10-27 13:54:35 -0500 received badge  Popular Question (source)
2014-10-27 10:00:38 -0500 answered a question issue regarding ovs br-int

Thank you for the answer. The reason I am asking this is that I want to mirror the traffic on br-int and send it to another host. I have tried to mirror all traffic to a GRE tunnel (using the ovs command for mirroring to a GRE tunnel), but again I only see the ARP requests. However, if I set up a mirror port (snooper) on br-int, it logs everything (including the ICMP requests). My issue is how to redirect this traffic from snooper into a GRE tunnel to another host (which will run Snort). My setup is like this:

Bridge br-int
    fail_mode: secure
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "qvoe8c77d3e-de"
        tag: 1
        Interface "qvoe8c77d3e-de"
    Port br-int
        Interface br-int
            type: internal
    Port "qvo9370efb0-8f"
        tag: 1
        Interface "qvo9370efb0-8f"
    Port snooper
        Interface snooper
            type: internal
Bridge br-tun
    Port "gre-0a188503"
        Interface "gre-0a188503"
            type: gre
            options: {df_default="true", in_key=flow, local_ip="10.x.x.x", out_key=flow, remote_ip="10.x.x.x"}
    Port gremirror
        Interface gremirror
            type: gre
            options: {remote_ip="10.x.x.x"}
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port "gre-0a188501"
        Interface "gre-0a188501"
            type: gre
            options: {df_default="true", in_key=flow, local_ip="10.x.x.x", out_key=flow, remote_ip="10.x.x.x"}
    Port br-tun
        Interface br-tun
            type: internal

What I want to do is send the traffic from snooper to gremirror (I suspect it has something to do with adding a flow rule to br-tun, but I can't work out how).
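
For reference, the kind of ovs-vsctl mirror command referred to above, i.e. the one that produces the snooper behaviour described, looks roughly like this (a sketch; the mirror name is arbitrary):

# mirror every packet on br-int to the existing internal port "snooper"
ovs-vsctl -- --id=@snooper get Port snooper \
    -- --id=@m create Mirror name=int-mirror select-all=true output-port=@snooper \
    -- set Bridge br-int mirrors=@m

As far as I know a mirror's output port has to live on the same bridge as the mirror itself, which is part of why pointing it straight at gremirror on br-tun is not straightforward.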

2014-10-27 08:46:33 -0500 asked a question issue regarding ovs br-int

Hello,

I have a multinode set up with openstack and ovs. I am using gre tunneling for communication between vms in different hosts. When I ping vmA in host A from vmB in host B (vmA and vmB are in the same vlan) I get a normal reply. However if i do a tcpdump -n -i br-int in host A i only see the ARP requests but not the icmp ones. If i do tcpdump -n -i qvoda209351-8a (the interface that the vmA connects to) i see everything normally (both arp and icmp). Can someone tell me why this is happening and how can i make the icmp requests to appear also in br-int?

Thank you, Anna