
colo90's profile - activity

2017-03-16 22:34:23 -0500 received badge  Notable Question (source)
2017-03-16 22:34:23 -0500 received badge  Popular Question (source)
2015-11-25 09:54:11 -0500 received badge  Famous Question (source)
2015-04-20 14:27:18 -0500 received badge  Notable Question (source)
2015-02-06 08:50:05 -0500 received badge  Popular Question (source)
2014-12-11 06:36:58 -0500 received badge  Notable Question (source)
2014-12-11 06:36:58 -0500 received badge  Famous Question (source)
2014-08-04 15:01:31 -0500 received badge  Famous Question (source)
2014-06-26 20:52:16 -0500 received badge  Popular Question (source)
2014-03-07 15:10:27 -0500 received badge  Student (source)
2014-03-05 10:10:43 -0500 received badge  Notable Question (source)
2014-03-05 10:10:43 -0500 received badge  Popular Question (source)
2014-03-05 01:31:58 -0500 received badge  Teacher (source)
2014-03-05 01:31:58 -0500 received badge  Self-Learner (source)
2013-11-15 17:54:17 -0500 answered a question Neutron Vpnaas

Hi guys, sorry for the delay. I've checked and, funnily enough, I don't have the vpnaas.filters file. I'm coming from an older version of Havana. I've created the file and now everything works.
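For anyone hitting the same "Unauthorized command" rootwrap error: a minimal vpnaas.filters in the usual rootwrap filter format looks roughly like this (a sketch only; the exact filter entries shipped with Havana may differ):

# /etc/neutron/rootwrap.d/vpnaas.filters -- minimal sketch, entries are illustrative
[Filters]
ip: IpFilter, ip, root
ip_exec: IpNetnsExecFilter, ip, root
ipsec: CommandFilter, ipsec, root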

Thanks!

2013-11-15 17:51:17 -0500 answered a question Neutron Vpnaas

Thanks yong sheng gong, that solved my question.

2013-11-04 13:58:25 -0500 asked a question Neutron Vpnaas

Hi all! I'm trying to configure a VPN in Neutron. I'm using Ubuntu 12.04 and the Cloud Archive packages. I configured the VPN following this guide:

https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall

root@:~# neutron vpn-service-list
+--------------------------------------+-------+--------------------------------------+--------+
| id                                   | name  | router_id                            | status |
+--------------------------------------+-------+--------------------------------------+--------+
| b47bdcbe-5339-41ac-b53b-329872586377 | MyVPN | 676e62ec-c58a-4d58-84e3-6a5c6a2aa732 | DOWN   |
+--------------------------------------+-------+--------------------------------------+--------+

The VPN stays DOWN, and in the log I can find this trace:

2013-11-04 13:07:41.720 8115 TRACE neutron.services.vpn.device_drivers.ipsec
2013-11-04 13:07:41.887 8115 WARNING neutron.openstack.common.loopingcall [-] task run outlasted interval by 1.875024 sec
2013-11-04 13:10:13.830 10188 ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver
2013-11-04 13:10:19.347 10188 ERROR neutron.services.vpn.device_drivers.ipsec [-] Failed to enable vpn process on router 676e62ec-c58a-4d58-84e3-6a5c6a2aa732
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec Traceback (most recent call last):
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/dist-packages/neutron/services/vpn/device_drivers/ipsec.py", line 241, in enable
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec     self.start()
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/dist-packages/neutron/services/vpn/device_drivers/ipsec.py", line 382, in start
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec     '--virtual_private', virtual_private
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/dist-packages/neutron/services/vpn/device_drivers/ipsec.py", line 311, in _execute
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec     check_exit_code=check_exit_code)
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 458, in execute
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec     check_exit_code=check_exit_code)
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 62, in execute
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec     raise RuntimeError(m)
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec RuntimeError:
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-676e62ec-c58a-4d58-84e3-6a5c6a2aa732', 'ipsec', 'pluto', '--ctlbase', '/var/lib/neutron/ipsec/676e62ec-c58a-4d58-84e3-6a5c6a2aa732/var/run/pluto', '--ipsecdir', '/var/lib/neutron/ipsec/676e62ec-c58a-4d58-84e3-6a5c6a2aa732/etc', '--use-netkey', '--uniqueids', '--nat_traversal', '--secretsfile', '/var/lib/neutron/ipsec/676e62ec-c58a-4d58-84e3-6a5c6a2aa732/etc/ipsec.secrets', '--virtual_private', '%v4:192.168.1.0/24,%v4:10.0.0.0/24']
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec Exit code: 99
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec Stdout: ''
2013-11-04 13:10:19.347 10188 TRACE neutron.services.vpn.device_drivers.ipsec Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec qrouter-676e62ec-c58a-4d58-84e3-6a5c6a2aa732 ipsec pluto --ctlbase /var/lib/neutron/ipsec/676e62ec-c58a-4d58-84e3-6a5c6a2aa732/var/run/pluto --ipsecdir ... (more)

2013-08-14 14:44:21 -0500 answered a question noVNC Proxy does not work

Hi Tiantian, I have 3 controller nodes configured with Pacemaker; their IPs are 172.16.20.2, 172.16.20.3 and 172.16.20.4. 172.16.20.1 is the VIP of the HAProxy installation, configured in front of Pacemaker. The compute node IPs are 172.16.20.21, 172.16.20.22, etc.

2013-08-14 13:38:50 -0500 asked a question noVNC Proxy does not work

Hi all! I have some problems with the noVNC proxy: I can't connect to a VM, and I get the error "Failed to connect to server (code: 1006)". This is my configuration:

From nova.conf on the controller nodes:

# VNC configuration
novnc_enabled=true
novncproxy_base_url=https://cm.DOMAIN.com:6080/vnc_auto.html
novncproxy_port=6081
vncserver_proxyclient_address=172.16.20.1
vncserver_listen=172.16.20.1

From nova.conf on the compute nodes:

# VNC configuration
novnc_enabled=true
novncproxy_base_url=https://cm.DOMAIN.com:6080/vnc_auto.html
novncproxy_port=6081
vncserver_proxyclient_address=172.16.20.1
vncserver_listen=172.16.20.1
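For comparison, the layout I've seen documented for Grizzly points vncserver_proxyclient_address at each compute node's own address rather than the VIP; here is a rough sketch with illustrative values (an assumption on my part, not a confirmed fix):

# nova.conf on a compute node -- sketch, addresses are illustrative
novnc_enabled=true
novncproxy_base_url=https://cm.DOMAIN.com:6080/vnc_auto.html
vncserver_proxyclient_address=172.16.20.21   # this compute node's own management IP (assumption)
vncserver_listen=0.0.0.0                     # listen on all interfaces so the proxy can reach the VNC server

Should I be setting it that way on the compute nodes instead?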

You can find some logs here:

http://paste.openstack.org/show/44117/

I'm using OpenStack 2013.1.2 from the Canonical Cloud Archive, and the OS is Ubuntu 12.04.2. I have already verified all the configuration and I can't find any misconfiguration. Can anybody help me?

Thanks

2013-05-16 10:19:28 -0500 answered a question Instance failed network setup

Thanks for the support, I found the problem. It seems that nova ignores part of my nova.conf, specifically everything after the "[keystone_authtoken]" line. My quantum config options were placed after "[keystone_authtoken]"; I moved all the quantum configuration before this line and now everything works fine.
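For reference, the ordering that works is roughly like this; the option names and values below are only illustrative examples of DEFAULT-section quantum settings, not my exact config:

# nova.conf -- sketch of the section ordering, values are illustrative
[DEFAULT]
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://controller:9696
quantum_auth_strategy=keystone
# ... all other quantum_* options must go here, before the next section header ...

[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http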

2013-05-16 10:18:16 -0500 answered a question Instance failed network setup

Thanks yong sheng gong, that solved my question.

2013-05-15 15:28:33 -0500 asked a question Instance failed network setup

Hi all, I have an OpenStack Grizzly installation on Ubuntu 12.04 with the cloud archive packages updated to the latest version. Recently I get an error when I try to launch new instances: the instance stops in the "networking" state. This is the nova-compute log:

2013-05-15 16:26:17.711 5588 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 939e148bea1644498babdfcd55759034 multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586
2013-05-15 16:26:17.711 5588 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is ce3b99d23c0f4155a710765bfad2dd5a. _add_unique_id /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337
2013-05-15 16:26:22.921 DEBUG nova.openstack.common.rpc.common [req-cab25ef2-a3c6-465b-8c0d-d7f8519bf82f 20b301de7d514d808403a3c1c597eecb b7929874d1fb4ebfacb14a16aa141cd2] Timed out waiting for RPC response: timed out _error_callback /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628
2013-05-15 16:26:22.927 ERROR nova.compute.manager [req-cab25ef2-a3c6-465b-8c0d-d7f8519bf82f 20b301de7d514d808403a3c1c597eecb b7929874d1fb4ebfacb14a16aa141cd2] [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298] Instance failed network setup
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298] Traceback (most recent call last):
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1057, in _allocate_network
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]     security_groups=security_groups)
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 89, in wrapped
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]     return func(self, context, *args, **kwargs)
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 46, in wrapper
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]     res = f(self, context, *args, **kwargs)
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 270, in allocate_for_instance
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]     nw_info = self.network_rpcapi.allocate_for_instance(context, **args)
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]   File "/usr/lib/python2.7/dist-packages/nova/network/rpcapi.py", line 171, in allocate_for_instance
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]     topic=topic, version='1.9')
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py", line 80, in call
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]     return rpc.call(context, self._get_topic(topic), msg, timeout)
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py", line 140, in call
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]     return _get_impl().call(CONF, context, topic, msg, timeout)
2013-05-15 16:26:22.927 5588 TRACE nova.compute.manager [instance: aa8a9b0b-b2da-4c00-a9c5-bb4e96d87298]   File "/usr/lib/python2 ... (more)

2013-05-09 08:28:13 -0500 answered a question Possible delay in Quantum GRE and flows

Hi Darragh, thanks for the support. I replaced my quantum server with one with a better CPU and the problem has been solved. The load average on the new server is now lower than on the old one.

2013-05-06 10:34:13 -0500 answered a question Possible delay in Quantum GRE and flows

Yes, I'm using namespaces:

ip netns exec qdhcp-7d4651b5-3030-4a54-a1e6-84145e77c4c4 ip link
49: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1137: tap569e45aa-1c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:d7:9e:1a brd ff:ff:ff:ff:ff:ff

Thanks

2013-05-06 10:06:10 -0500 answered a question Possible delay in Quantum GRE and flows

Hi Darragh, the interface does not exist in Linux, but it does exist in OVS.

Thanks

2013-05-06 08:06:54 -0500 answered a question Possible delay in Quantum GRE and flows

Hi Darragh, at the moment there are 16 instances. The load average is 4 (is that too high?) and the free memory is about 1.5 GB. This is the ovs-vswitchd.log:

May 06 09:51:13|82858|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on tap569e45aa-1c device failed: No such device
May 06 09:51:20|82859|netdev|WARN|Dropped 423 log messages in last 12 seconds (most recently, 1 seconds ago) due to excessive rate
May 06 09:51:20|82860|netdev|WARN|failed to get flags for network device tap4540e0ad-99: No such device
May 06 09:51:22|82861|netdev_linux|WARN|Dropped 119 log messages in last 9 seconds (most recently, 5 seconds ago) due to excessive rate
May 06 09:51:22|82862|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on tap569e45aa-1c device failed: No such device
May 06 09:51:32|82863|netdev|WARN|Dropped 633 log messages in last 12 seconds (most recently, 1 seconds ago) due to excessive rate
May 06 09:51:32|82864|netdev|WARN|failed to get flags for network device tap6364e554-7f: No such device
May 06 09:51:36|82865|netdev_linux|WARN|Dropped 239 log messages in last 15 seconds (most recently, 6 seconds ago) due to excessive rate
May 06 09:51:36|82866|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on tap569e45aa-1c device failed: No such device
May 06 09:51:44|82867|netdev|WARN|Dropped 416 log messages in last 12 seconds (most recently, 1 seconds ago) due to excessive rate
May 06 09:51:44|82868|netdev|WARN|failed to get flags for network device tapa625aa89-fd: No such device
May 06 09:51:46|82869|netdev_linux|WARN|Dropped 119 log messages in last 9 seconds (most recently, 6 seconds ago) due to excessive rate
May 06 09:51:46|82870|netdev_linux|WARN|ioctl(SIOCGIFINDEX) on tap569e45aa-1c device failed: No such device

and this is the output of the command ovs-ofctl show br-int:

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000ba56c69ab54d
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff
 1(patch-tun): addr:ba:9d:bd:68:35:92
     config:     0
     state:      0
 2(qr-21d03c13-85): addr:3c:11:ff:7f:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 3(qr-2e9d596d-3b): addr:3c:11:ff:7f:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 4(qr-c7c253a7-02): addr:3c:11:ff:7f:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 5(qr-7e531375-39): addr:00:87:00:00:fa:16
     config:     PORT_DOWN
     state:      LINK_DOWN
 6(qr-83b0e978-a7): addr:3c:11:ff:7f:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 7(qr-32b06b50-91): addr:fa:16:3e:2f:6a:91
     config:     PORT_DOWN
     state:      LINK_DOWN
 8(qr-da54554e-20): addr:fa:16:3e:6a:cd:80
     config:     PORT_DOWN
     state:      LINK_DOWN
 125(tap569e45aa-1c): addr:3c:11:ff:7f:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN

All my ports are down. I already tried ovs-ofctl mod-port br-int <port> up, but it does not work. Could that be the problem?
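If it helps, I can also cross-check the port state with the usual OVS commands (a sketch; the interface name is just one of mine as an example):

ovs-vsctl show                            # bridge/port layout as OVS sees it
ovs-ofctl dump-ports br-int               # per-port packet and error counters
ovs-vsctl list interface tap569e45aa-1c   # admin/link state fields for one port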

Thanks

2013-05-03 12:45:36 -0500 answered a question Possible delay in Quantum GRE and flows

Hi Darragh, thanks for the reply. No, the quantum router gateway is reached quickly, see this log. The tcpdump was taken on the private interface of the quantum router.

09:35:59.143451 IP 212.29.130.12 > 192.168.178.2: ICMP echo request, id 15800, seq 1, length 64
09:35:59.836781 IP 212.29.130.12 > 192.168.178.2: ICMP echo request, id 15800, seq 2, length 64
09:36:00.844828 IP 212.29.130.12 > 192.168.178.2: ICMP echo request, id 15800, seq 3, length 64
09:36:01.852905 IP 212.29.130.12 > 192.168.178.2: ICMP echo request, id 15800, seq 4, length 64
09:36:02.355638 IP 192.168.178.2 > 212.29.130.12: ICMP echo reply, id 15800, seq 1, length 64
09:36:02.355710 IP 192.168.178.2 > 212.29.130.12: ICMP echo reply, id 15800, seq 2, length 64
09:36:02.355731 IP 192.168.178.2 > 212.29.130.12: ICMP echo reply, id 15800, seq 3, length 64
09:36:02.355760 IP 192.168.178.2 > 212.29.130.12: ICMP echo reply, id 15800, seq 4, length 64
09:36:02.854375 IP 212.29.130.12 > 192.168.178.2: ICMP echo request, id 15800, seq 5, length 64
09:36:02.855173 IP 192.168.178.2 > 212.29.130.12: ICMP echo reply, id 15800, seq 5, length 64
09:36:03.856016 IP 212.29.130.12 > 192.168.178.2: ICMP echo request, id 15800, seq 6, length 64
09:36:03.856978 IP 192.168.178.2 > 212.29.130.12: ICMP echo reply, id 15800, seq 6, length 64

and this is the ping output:

64 bytes from 185.21.172.18: icmp_req=1 ttl=58 time=3527 ms
64 bytes from 185.21.172.18: icmp_req=2 ttl=58 time=2520 ms
64 bytes from 185.21.172.18: icmp_req=3 ttl=58 time=1512 ms
64 bytes from 185.21.172.18: icmp_req=4 ttl=58 time=504 ms
64 bytes from 185.21.172.18: icmp_req=5 ttl=58 time=2.56 ms
64 bytes from 185.21.172.18: icmp_req=6 ttl=58 time=2.69 ms
64 bytes from 185.21.172.18: icmp_req=7 ttl=58 time=3.27 ms
64 bytes from 185.21.172.18: icmp_req=8 ttl=58 time=2.47 ms
64 bytes from 185.21.172.18: icmp_req=9 ttl=58 time=2.51 ms

The reply packet flows (on the quantum server) are created 1-2 seconds (or more) after I start pinging the VM, so the first ping replies are delayed. I think this is my problem, but I don't know how to solve it.

Thanks

2013-05-02 09:14:53 -0500 asked a question Possible delay in Quantum GRE and flows

Hi all, I have a brand new install of Grizzly on Ubuntu 12.04 with network_type gre. I get a high RTT for the first packets when I try to ping a VM. My VM has the private IP 192.168.178.2, NATted to the public IP 185.21.172.18. When I try to reach the public IP, I get this output:

PING 185.21.172.18 (185.21.172.18) 56(84) bytes of data.
64 bytes from 185.21.172.18: icmp_req=1 ttl=58 time=6546 ms
64 bytes from 185.21.172.18: icmp_req=2 ttl=58 time=5546 ms
64 bytes from 185.21.172.18: icmp_req=3 ttl=58 time=4546 ms
64 bytes from 185.21.172.18: icmp_req=4 ttl=58 time=3546 ms
64 bytes from 185.21.172.18: icmp_req=5 ttl=58 time=2546 ms
64 bytes from 185.21.172.18: icmp_req=6 ttl=58 time=1546 ms
64 bytes from 185.21.172.18: icmp_req=7 ttl=58 time=546 ms
64 bytes from 185.21.172.18: icmp_req=8 ttl=58 time=2.52 ms
64 bytes from 185.21.172.18: icmp_req=9 ttl=58 time=2.50 ms
64 bytes from 185.21.172.18: icmp_req=10 ttl=58 time=2.73 ms
64 bytes from 185.21.172.18: icmp_req=11 ttl=58 time=2.41 ms
64 bytes from 185.21.172.18: icmp_req=12 ttl=58 time=2.67 ms

It seems that the flows are created with some delay. As soon as I start the ping, the output of the command ovs-dpctl dump-flows br-int | grep 212.29.130 is blank; once the ping starts working, the flows are created successfully:

ovs-dpctl dump-flows br-int | grep 212.29.130

in_port(80),eth(src=fa:16:3e:99:1b:4c,dst=fa:16:3e:da:15:4c),eth_type(0x0800),ipv4(src=212.29.130.12,dst=192.168.178.2,proto=1,tos=0,ttl=58,frag=no),icmp(type=8,code=0), packets:7, bytes:686, used:0.276s, actions:push_vlan(vid=11,pcp=0),65

in_port(65),eth(src=fa:16:3e:da:15:4c,dst=fa:16:3e:99:1b:4c),eth_type(0x8100),vlan(vid=11,pcp=0),encap(eth_type(0x0800),ipv4(src=192.168.178.2,dst=212.29.130.12,proto=1,tos=0,ttl=64,frag=no),icmp(type=0,code=0)), packets:1, bytes:98, used:0.276s, actions:pop_vlan,80

If I ping the VM again while the flow still exists, ping works properly; otherwise the latency problem occurs again.
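One rough way I check the delay (just a sketch of the check, not a fix) is to start the ping and watch for the datapath flow to appear:

# watch the flow table while pinging from 212.29.130.12 (the grep pattern is the test source IP)
watch -n 0.5 "ovs-dpctl dump-flows br-int | grep 212.29.130"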

Am I doing something wrong? Can anyone help with this? If you need any other information, don't hesitate to ask. Thanks