sgoud's profile - activity

2014-01-16 02:08:09 -0600 received badge  Nice Question (source)
2013-12-03 09:20:18 -0600 received badge  Nice Question (source)
2013-10-13 01:31:01 -0600 received badge  Famous Question (source)
2013-10-01 00:13:55 -0600 received badge  Famous Question (source)
2013-09-23 04:16:38 -0600 received badge  Notable Question (source)
2013-09-23 00:48:44 -0600 received badge  Popular Question (source)
2013-09-19 07:21:34 -0600 asked a question quantum port-create without fixed-ip

All,

Is there any option for the port-create CLI command to tell it not to assign an IP address when the network has an associated subnet?

When I use port-create without the fixed-ip option, it picks an IP from one of the subnets associated with the network.

root@nvp:~# quantum port-create 94fa8146-97a8-4411-a0b5-c1f5fd507dfd
Created a new port:
+----------------+------------------------------------------------------------------------------------+
| Field          | Value                                                                              |
+----------------+------------------------------------------------------------------------------------+
| admin_state_up | True                                                                               |
| device_id      |                                                                                    |
| device_owner   |                                                                                    |
| fixed_ips      | {"subnet_id": "7f6ca55e-3827-4442-859e-467219d5f599", "ip_address": "172.16.30.5"} |
| id             | 2b401b5a-07ff-44cd-952d-79df4f29230b                                               |
| mac_address    | fa:16:3e:36:37:2b                                                                  |
| name           |                                                                                    |
| network_id     | 94fa8146-97a8-4411-a0b5-c1f5fd507dfd                                               |
| status         | ACTIVE                                                                             |
| tenant_id      | b5cbf5e6b42943afb785ed6c576d0350                                                   |
+----------------+------------------------------------------------------------------------------------+
root@nvp:~#
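
One thing I may try next (an untested sketch): calling the REST API directly with an empty fixed_ips list, which, if I read the v2 API correctly, should skip IP allocation even when the network has a subnet. Here <quantum-server> and $TOKEN are placeholders for the API host and a valid keystone token.

# untested sketch: an empty fixed_ips list asks the API not to allocate an IP
curl -s -X POST http://<quantum-server>:9696/v2.0/ports \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"port": {"network_id": "94fa8146-97a8-4411-a0b5-c1f5fd507dfd", "fixed_ips": []}}'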

Thanks,

2013-09-13 19:00:27 -0600 received badge  Notable Question (source)
2013-09-02 05:03:58 -0600 received badge  Notable Question (source)
2013-09-02 05:03:58 -0600 received badge  Famous Question (source)
2013-08-28 16:43:53 -0600 received badge  Popular Question (source)
2013-08-26 06:23:38 -0600 asked a question where is the metadata stored

I configured my metadata service and it works fine. I am able to inject SSH public keys into the guest VM.

From my guest VM, when I execute the metadata curl command, I get the output below.

The 169.254.169.254 HTTP port is mapped to the metadata service on the controller node. But I want to know where the data is stored. Is it kept on a per-instance basis? Also, the curl command below doesn't use any instance-id, so how is the request mapped to a particular instance?

root@testnewrelic:~# curl http://169.254.169.254/latest/meta-data/

ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
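
From what I can tell reading the Grizzly code (please correct me if I am wrong), nova builds the metadata response on the fly from its database rather than storing per-instance files, and the proxy identifies the caller by the source IP of the request: it looks up the port with that fixed IP, resolves the instance ID, and forwards it to nova-api in signed headers, which is why no instance-id is needed in the URL. A sketch of what the agent effectively sends to nova's metadata API on port 8775; <controller>, <instance-uuid>, and <vm-fixed-ip> are placeholders, and the secret is the quantum_metadata_proxy_shared_secret value:

SECRET=helloOpenStack
INSTANCE_ID=<instance-uuid>
SIG=$(echo -n $INSTANCE_ID | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')
curl http://<controller>:8775/latest/meta-data/ \
     -H "X-Instance-ID: $INSTANCE_ID" \
     -H "X-Instance-ID-Signature: $SIG" \
     -H "X-Forwarded-For: <vm-fixed-ip>"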

Thanks,

2013-08-21 08:41:48 -0600 received badge  Nice Question (source)
2013-08-08 03:57:26 -0600 received badge  Notable Question (source)
2013-08-08 03:57:26 -0600 received badge  Famous Question (source)
2013-07-19 16:52:23 -0600 received badge  Popular Question (source)
2013-07-16 08:30:59 -0600 asked a question Access to metadata fails: couldn't connect to host

This is the second time I am seeing this issue in Grizzly: accessing metadata fails. Last time I did a fresh installation. It worked properly until yesterday; no changes were made explicitly, but now it fails.

Nothing useful in the logs.

All services are running properly.

root@os1controller:/etc/init.d# cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i status; done

quantum-dhcp-agent start/running, process 2178
quantum-l3-agent start/running, process 2182
quantum-metadata-agent start/running, process 2175
quantum-plugin-openvswitch-agent start/running, process 2154
quantum-server start/running, process 2151

root@os1controller:/etc/init.d# cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i status; done

nova-api start/running, process 2164
nova-cert start/running, process 2153
nova-compute start/running, process 2171
nova-conductor start/running, process 2177
nova-consoleauth start/running, process 2168
nova-novncproxy start/running, process 2181
nova-scheduler start/running, process 2179

root@os1controller:/etc/init.d# ps -ef | grep quantum-ns-metadata-proxy

root      4788     1  0 05:40 ?        00:00:00 /usr/bin/python /usr/local/bin/quantum-ns-metadata-proxy --pid_file=/var/lib/quantum/external/pids/041e054a-b85b-41bc-b699-78512e9d98b9.pid --router_id=041e054a-b85b-41bc-b699-78512e9d98b9 --state_path=/var/lib/quantum --metadata_port=9697 --log-file=quantum-ns-metadata-proxy041e054a-b85b-41bc-b699-78512e9d98b9.log --log-dir=/var/log/quantum
root     10603  8919  0 06:15 pts/0    00:00:00 grep --color=auto quantum-ns-metadata-proxy

root@os1controller:/etc/init.d# ip netns exec qdhcp-97815430-6da4-415f-9c60-ee0240b2fb9a iptables -L -t nat

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         


Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         

root@os1controller:/etc/init.d# ip netns exec qrouter-041e054a-b85b-41bc-b699-78512e9d98b9 iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
quantum-l3-agent-PREROUTING  all  --  anywhere             anywhere            

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
quantum-l3-agent-OUTPUT  all  --  anywhere             anywhere            

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
quantum-l3-agent-POSTROUTING  all  --  anywhere             anywhere            
quantum-postrouting-bottom  all  --  anywhere             anywhere            

Chain quantum-l3-agent-OUTPUT (1 references)
target     prot opt source               destination         
DNAT       all  --  anywhere             10.2.113.70          to:172.16.0.11
DNAT       all  --  anywhere             10.2.113.76          to:172.16.0.16
DNAT       all  --  anywhere             10.2.113.77          to:172.16.0.15
DNAT       all  --  anywhere             10.2.113.74          to:172.16.0.10
DNAT       all  --  anywhere             10.2.113.83          to:172.16.0.21
DNAT       all  --  anywhere             10.2.113.73          to:172.16.0.20

Chain quantum-l3-agent-POSTROUTING (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             ! ctstate DNAT

Chain quantum-l3-agent-PREROUTING (1 references)
target     prot opt source               destination         
REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports 9697
DNAT       all  --  anywhere             10.2.113.70          to:172.16.0.11
DNAT       all  --  anywhere             10.2.113.76          to:172.16.0.16
DNAT       all  --  anywhere             10.2.113.77          to:172.16.0.15
DNAT       all  --  anywhere             10.2.113.74          to:172.16.0.10
DNAT       all  --  anywhere             10.2.113.83          to:172.16.0.21
DNAT       all ...
(more)
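
One check I still plan to run (a sketch): querying the proxy port directly inside the router namespace, to separate the iptables REDIRECT from the proxy itself. Even a 404 here would show the proxy is reachable, while "couldn't connect" would point at the REDIRECT rule or the route to 169.254.169.254.

# sketch: bypass the REDIRECT and talk to quantum-ns-metadata-proxy directly
ip netns exec qrouter-041e054a-b85b-41bc-b699-78512e9d98b9 \
    curl -sv http://127.0.0.1:9697/latest/meta-data/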
2013-07-08 05:29:30 -0600 received badge  Popular Question (source)
2013-06-30 18:36:44 -0600 received badge  Famous Question (source)
2013-06-27 07:15:54 -0600 asked a question feature to attach USB on Guest VM

I am using Grizzly. I have a requirement to attach a USB device to a guest VM. Is this feature committed yet?

I tried the following link, but did not succeed. Is this the correct procedure?

https://bugs.launchpad.net/openstack-manuals/+bug/1106421

root@nvp:/home/nvp# glance image-update 59ce4586-0c57-426c-a181-819ec8b3ee91 --property disk_bus=usb
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| Property 'disk_bus' | usb                                  |
| checksum            | d97251713cbf11ff1f86b6bf203defcd     |
| container_format    | bare                                 |
| created_at          | 2013-06-15T14:10:02                  |
| deleted             | False                                |
| deleted_at          | None                                 |
| disk_format         | qcow2                                |
| id                  | 59ce4586-0c57-426c-a181-819ec8b3ee91 |
| is_public           | True                                 |
| min_disk            | 0                                    |
| min_ram             | 0                                    |
| name                | CentosImage                          |
| owner               | 7cf64e15cb1349d6a5e81ed0d0bbd6fb     |
| protected           | False                                |
| size                | 5368709120                           |
| status              | active                               |
| updated_at          | 2013-06-27T10:05:21                  |
+---------------------+--------------------------------------+
root@nvp:/home/nvp#

Some more info: maybe I was following the wrong procedure. The actual requirement is: I attach a USB device to the host machine, and I want to access that USB device from the guest VM.
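
From what I understand, the disk_bus property only controls which bus the image's disk is attached on, not host USB passthrough. In the meantime I am considering going below OpenStack and attaching the device with libvirt directly on the compute node. A sketch; the vendor/product IDs are placeholders from lsusb, and instance-00000001 stands for nova's libvirt domain name (see virsh list):

cat > usb.xml <<'EOF'
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x0951'/>
    <product id='0x1665'/>
  </source>
</hostdev>
EOF
virsh attach-device instance-00000001 usb.xml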

Appreciate your help,

2013-06-19 13:24:49 -0600 received badge  Notable Question (source)
2013-06-16 20:57:37 -0600 received badge  Popular Question (source)
2013-06-14 15:19:20 -0600 received badge  Editor (source)
2013-06-14 15:16:41 -0600 asked a question How to upgrade quantum package to 2013.1.2

It seems my Quantum packages are 2013.1.1-0ubuntu1~cloud0_all.deb. If I want to upgrade to 2013.1.2:

  1. Download the *.tar.gz from this link: https://launchpad.net/quantum/+milestone/2013.1.2

  2. How do I install these OpenStack packages? python setup.py install?

My actual Python packages are at /usr/lib/python2.7/dist-packages/.
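
What I am thinking of trying (a sketch; presumably the cleaner path is upgraded .deb packages from the Ubuntu Cloud Archive once 2013.1.2 lands there):

tar xzf quantum-2013.1.2.tar.gz
cd quantum-2013.1.2
python setup.py install --install-lib /usr/lib/python2.7/dist-packages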

2013-06-14 14:33:05 -0600 commented answer accessing vm metadata fails with grizzly

Thanks for your inputs. I want to upgrade the Quantum package to 2013.1.2 and I got the *.tar.gz. How do I install it: just setup.py install, or setup.py install <path>? My Python packages path is /usr/lib/python2.7/dist-packages/.

2013-06-14 14:01:48 -0600 commented answer accessing vm metadata fails with grizzly

It seems my Quantum packages are 2013.1.1-0ubuntu1~cloud0_all.deb. If I want to upgrade to 2013.1.2: 1. Download the *.tar.gz from this link: https://launchpad.net/quantum/+milestone/2013.1.2 2. How do I install these OpenStack packages? python setup.py install?

2013-06-14 05:15:43 -0600 received badge  Famous Question (source)
2013-06-13 04:05:54 -0600 received badge  Notable Question (source)
2013-06-12 06:12:35 -0600 received badge  Student (source)
2013-06-12 05:41:50 -0600 received badge  Popular Question (source)
2013-06-12 05:38:56 -0600 answered a question accessing vm metadata fails with grizzly

Yes, I added enabled_apis now, but I am still seeing the same issue:

root@server14:/var/log/quantum# cat /etc/nova/nova.conf | grep enabled_
enabled_apis=osapi_compute,metadata
root@server14:/var/log/quantum#

vm console logs:

cloud-init start running: Wed, 12 Jun 2013 10:19:01 +0000. up 4.55 seconds
2013-06-12 10:19:53,699 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [51/120s]: socket timeout [timed out]
2013-06-12 10:20:44,752 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [102/120s]: socket timeout [timed out]
2013-06-12 10:21:01,771 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: socket timeout [timed out]
2013-06-12 10:21:02,773 - DataSourceEc2.py[CRITICAL]: giving up on md after 120 seconds

nova-api logs:

2013-06-12 02:29:34.083 INFO nova.osapi_compute.wsgi.server [req-d74aed60-8bc0-48a6-82b3-7429d8d23f39 360105b9c9eb4897b94e5f2a5c7d027c 3e5e1e66158340e1913bc6f9bd0abf55] 10.2.113.12 "GET /v2/3e5e1e66158340e1913bc6f9bd0abf55/servers/2c7f0d85-0b88-43a0-988f-b837aa259feb/os-security-groups HTTP/1.1" status: 200 len: 855 time: 0.0681939

2013-06-12 02:30:28.581 17737 INFO nova.metadata.wsgi.server [-] (17737) accepted ('10.2.113.12', 45552)

2013-06-12 02:30:37.046 17737 INFO nova.api.ec2 [-] 72.494657s 10.2.113.12 GET /2009-04-04/meta-data/instance-id None:None 200 [Python-httplib2/0.7.2 (gzip)] text/plain text/html
2013-06-12 02:30:37.047 17737 INFO nova.metadata.wsgi.server [-] 172.16.0.22,10.2.113.12 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 126 time: 72.4957850

2013-06-12 02:31:30.140 17737 INFO nova.api.ec2 [-] 61.558082s 10.2.113.12 GET /2009-04-04/meta-data/instance-id None:None 200 [Python-httplib2/0.7.2 (gzip)] text/plain text/html
2013-06-12 02:31:30.141 17737 INFO nova.metadata.wsgi.server [-] 172.16.0.22,10.2.113.12 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 126 time: 61.5591779

metadata-agent.log:

root@server14:/var/log/quantum# cat metadata-agent.log
2013-06-12 03:02:55 ERROR [quantum.agent.metadata.agent] Unexpected error.
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/quantum/agent/metadata/agent.py", line 88, in __call__
    return self._proxy_request(instance_id, req)
  File "/usr/lib/python2.7/dist-packages/quantum/agent/metadata/agent.py", line 138, in _proxy_request
    body=req.body)
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1444, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1196, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1166, in _conn_request
    response = conn.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
    response.begin()
  File "/usr/lib/python2.7/httplib.py", line 407, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 371, in _read_status
    raise BadStatusLine(line)
BadStatusLine: ''
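
The BadStatusLine above comes from the agent's connection to nova's metadata API, so the next thing I will check (a sketch) is whether that endpoint answers at all from the network node; metadata_host is 10.2.113.12 in my nova.conf:

# sketch: query nova's metadata API directly, bypassing the quantum agent
curl -v http://10.2.113.12:8775/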

quantum services:

root@server14:/var/log/quantum# cd /etc/init.d/; for i ... (more)

2013-06-12 05:30:38 -0600 commented answer accessing vm metadata fails with grizzly

On the controller, netstat -an | grep 8775 gives: tcp 0 0 0.0.0.0:8775 0.0.0.0:* LISTEN. I added enabled_apis, and I am not running any overlapping IPs. In the DHCP namespace, netstat -an shows: tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN. What does that mean? How do I disable quantum-ns-metadata-proxy? The logs are empty.

2013-06-10 06:51:48 -0600 asked a question accessing vm metadata fails with grizzly

All,

After a recent standard Grizzly installation, I am seeing this issue: the VM is unable to get metadata info.

cloud-init start running: Mon, 10 Jun 2013 10:01:35 +0000. up 4.05 seconds
2013-06-10 10:01:35,726 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: socket timeout [timed out]
2013-06-10 10:02:26,782 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: socket timeout [timed out]
2013-06-10 10:02:44,803 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: socket timeout [timed out]
2013-06-10 10:02:45,806 - DataSourceEc2.py[CRITICAL]: giving up on md after 120 seconds
no instance data found in start

From the namespace, I am able to reach the controller, and all the rules seem to be fine.

root@server14:/etc/init.d# ip netns exec qrouter-71a89bc2-d2b5-45e4-b87f-1186e3665732 iptables-save | grep 169.254.169.254
-A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775
root@server14:/etc/init.d# 

root@server14:/etc/init.d# ip netns exec qrouter-71a89bc2-d2b5-45e4-b87f-1186e3665732 netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      5519/python     
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name    Path
root@server14:/etc/init.d#

root@server14:/etc/init.d# ip netns exec qrouter-71a89bc2-d2b5-45e4-b87f-1186e3665732 ping 10.2.113.12
PING 10.2.113.12 (10.2.113.12) 56(84) bytes of data.
64 bytes from 10.2.113.12: icmp_req=1 ttl=64 time=0.299 ms
64 bytes from 10.2.113.12: icmp_req=2 ttl=64 time=0.064 ms

nova.conf content:

root@server14:/etc/init.d# cat /etc/nova/nova.conf 
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
s3_host=10.2.113.12
ec2_host=10.2.113.12
ec2_dmz_host=10.2.113.12
rabbit_host=10.2.113.12
nova_url=http://10.2.113.12:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.2.113.12/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=10.2.113.12:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.2.113.12:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.2.113.12
vncserver_listen=0.0.0.0

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.2.113.12:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://10.2.113.12:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

#Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloOpenStack
metadata_host = 10.2.113.12
metadata_listen = 0.0.0.0
#metadata_listen_port = 8775

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder ...
(more)