arunuke's profile - activity

2015-04-07 14:39:50 -0600 received badge  Famous Question (source)
2015-01-23 00:33:26 -0600 received badge  Famous Question (source)
2014-12-13 22:41:59 -0600 received badge  Popular Question (source)
2014-10-02 03:47:52 -0600 received badge  Famous Question (source)
2014-09-24 11:15:31 -0600 received badge  Famous Question (source)
2014-08-02 14:17:27 -0600 received badge  Popular Question (source)
2014-08-02 14:17:27 -0600 received badge  Notable Question (source)
2014-07-15 20:38:45 -0600 received badge  Notable Question (source)
2014-06-23 12:22:23 -0600 received badge  Popular Question (source)
2014-06-22 08:39:40 -0600 received badge  Notable Question (source)
2014-06-05 07:53:32 -0600 received badge  Notable Question (source)
2014-05-04 23:01:34 -0600 received badge  Popular Question (source)
2014-05-03 00:43:18 -0600 asked a question linux bridges on compute node

I am trying to trace a packet from source to destination in a Havana environment. Pings between VMs created on two different compute hosts succeed, but I am not able to ping the instances from the network node, which has an interface on the same physical L2. I see the packets go up to int-br-eth1, but they do not make it to the tap device via br-int.

The neutron guide indicates that the tap devices of the VMs connect to a Linux bridge (qbr), which then uses a veth pair (qvo and qvb) to connect to the integration bridge (br-int). However, when I run ovs-ofctl against br-int, I actually see the veth peer of the physical bridge (phy-br-eth1) and all the tap devices themselves. Is there something that I am missing here? Any pointers would be very helpful.

[root@bilbo user]# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:000086f03c2fa845
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(int-br-eth1): addr:be:52:ac:3a:23:64
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 4(tapbd91351e-e4): addr:fe:16:3e:21:c8:75
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 5(tap922a2939-a2): addr:fe:16:3e:8c:0c:b6
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 6(tap9c6ca22e-19): addr:fe:16:3e:22:f3:f8
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:6a:0c:32:9d:9e:3f
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

Running brctl on the compute node also does not show anything.

[root@bilbo user]# brctl show
bridge name     bridge id               STP enabled     interfaces
[root@bilbo user]#
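
For reference, a sketch of how one might confirm which plumbing is in use (option names assumed from a Havana-era nova.conf; as far as I understand, the qbr/qvo/qvb layer only appears with the hybrid VIF/firewall drivers):

    grep -E 'firewall_driver|vif_driver' /etc/nova/nova.conf   # which drivers are configured?
    ovs-vsctl list-ports br-int    # taps listed directly => no qbr/veth hybrid layer
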
2014-04-16 13:11:24 -0600 received badge  Famous Question (source)
2014-04-15 17:08:18 -0600 answered a question Packets not leaving compute host

Issue resolved. The physical interface (eth1) on top of which the bridge (br-eth1) was built was DOWN on the compute and networking hosts.

[test@arwen ~]$ sudo ovs-ofctl show br-eth1
OFPT_FEATURES_REPLY (xid=0x2): dpid:00006ea765782249
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(eth1): addr:52:54:00:d9:b1:7e
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 2(phy-br-eth1): addr:d2:53:6e:7f:f9:2b
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-eth1): addr:6e:a7:65:78:22:49
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

After bringing the interfaces up on both hosts, things are looking good.

[test@thorin ~]$ sudo ip link set dev eth1 up
[test@thorin ~]$ sudo ovs-ofctl show br-eth1
OFPT_FEATURES_REPLY (xid=0x2): dpid:00008a33604d754d
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(eth1): addr:52:54:00:eb:72:6b
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(phy-br-eth1): addr:e2:d9:2b:a1:0d:61
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-eth1): addr:8a:33:60:4d:75:4d
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[test@thorin ~]$ sudo tcpdump -i br-eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
15:03:40.488784 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:f0:5a:31 (oui Unknown), length 280
15:03:40.509560 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:f0:5a:31 (oui Unknown), length 292

2 packets captured
2 packets received by filter
0 packets dropped by kernel

Thanks to all the folks who assisted.

2014-04-15 14:05:11 -0600 commented answer Packets not leaving compute host

Thanks gklyne. My environment does not use flat networks, but VLANs.

2014-04-15 14:03:17 -0600 commented question Packets not leaving compute host

Thanks Shankar Ganesh. My iptables rules seem to be fine, with no drops listed. I went ahead and disabled iptables and neutron security groups while switching all my services to use the Noop driver. Still no luck.

2014-04-15 01:01:20 -0600 received badge  Student (source)
2014-04-15 00:44:09 -0600 received badge  Notable Question (source)
2014-04-14 16:27:37 -0600 received badge  Popular Question (source)
2014-04-12 00:56:27 -0600 asked a question instance groups on nova

I am running Fedora 20/Icehouse/RDO. I have been using the GroupAntiAffinityFilter to implement affinity/anti-affinity rules: GroupAntiAffinityFilter is listed among the scheduler filters in nova.conf, and I start my instances with the name of the group as a hint.

nova boot --flavor 1 --image <image-id> --hint group=test tstvm1

For the last couple of days, I have been getting failures with the error "instance group test could not be found". My current version of the nova client does not seem to have an option to set up instance groups, nor could I find any documentation in the most recent guide. Has anyone hit this issue or found a workaround?
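
(For anyone else hitting this, a sketch assuming an Icehouse-era novaclient that exposes the server-group API; exact command names may differ by client version:)

    nova server-group-create grptest anti-affinity    # create the group; note the returned UUID
    nova boot --flavor 1 --image <image-id> --hint group=<group-uuid> tstvm1   # hint with the UUID, not the name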

2014-04-12 00:55:29 -0600 received badge  Enthusiast
2014-04-11 23:01:50 -0600 asked a question Packets not leaving compute host

I am having an unusual issue with networking in my environment (Fedora 20/Icehouse/RDO). My VMs boot, and nova list shows DHCP-assigned IPs for them, but none of the VMs actually receive an IP address. The issue appears to be that no packets are leaving the compute host.

Here are some of the steps I took to check.

  1. Log in to the VM console and manually run udhcpc (CirrOS image). I see BOOTP packets in the tcpdump trace of phy-br-eth1 on the compute node, but not in the tcpdump of phy-br-eth1 on the network node.

    On local compute node:

    [root@aragorn arunt]# tcpdump -i phy-br-eth1
    tcpdump: WARNING: phy-br-eth1: no IPv4 address assigned
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on phy-br-eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
    
    
    23:57:25.288979 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:26:fa:5d (oui Unknown), length 280
    23:57:28.301814 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:26:fa:5d (oui Unknown), length 280
    23:57:31.308918 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:26:fa:5d (oui Unknown), length 280
    
  2. Manually configure the IP address of the VM on the link. Ping another manually configured VM on the same compute host. Ping works.

  3. Ping another manually configured VM on a different compute node; this does not work. The ICMP packets are seen on phy-br-eth1 of the compute host in question, but not on phy-br-eth1 of the other compute node.

  4. Security groups and rules seem to be in place.

    [root@grey utils]# nova secgroup-list-rules default
    +-------------+-----------+---------+------------+--------------+
    | IP Protocol | From Port | To Port | IP Range   | Source Group |
    +-------------+-----------+---------+------------+--------------+
    |             |           |         |            | default      |
    | icmp        | -1        | -1      | 0.0.0.0/24 |              |
    |             |           |         |            | default      |
    | tcp         | 22        | 22      | 0.0.0.0/24 |              |
    +-------------+-----------+---------+------------+--------------+
    

I am using the openvswitch plugin, not ML2.

Any inputs are greatly appreciated.
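
(For reference, the link-level checks that eventually found the culprit, as recorded in the answer entry above; interface and bridge names taken from the outputs in this thread:)

    ip link show eth1                  # the uplink must be UP on compute and network nodes
    sudo ovs-ofctl show br-eth1        # look for PORT_DOWN/LINK_DOWN on the eth1 port
    sudo ovs-ofctl dump-flows br-eth1  # confirm the VLAN translation flows are installed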

2014-04-10 15:43:18 -0600 commented question VirtualInterfaceCreateException: Virtual Interface creation failed

Same problem here on RDO/Fedora 20/Icehouse. Started seeing this just yesterday; the same automated deployment ran without any issues two days back.

2014-04-07 09:35:15 -0600 received badge  Popular Question (source)
2014-04-02 16:56:54 -0600 commented question Use case for multiple subnets on a single neutron network

If a single OpenStack network (created using net-create) hosts two subnets (using subnet-create and the network id), how can we ensure a VM comes up with two ports with one on each subnet? Is the standard rule of thumb to use one network per subnet?
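
(A sketch of the per-subnet port approach, assuming Havana-era CLI syntax; all IDs are placeholders:)

    neutron port-create <net-id> --fixed-ip subnet_id=<subnet1-id>
    neutron port-create <net-id> --fixed-ip subnet_id=<subnet2-id>
    nova boot --flavor 1 --image <image-id> --nic port-id=<port1-id> --nic port-id=<port2-id> twoportvm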

2014-04-02 16:06:00 -0600 received badge  Supporter (source)
2014-04-02 13:48:24 -0600 commented answer nova : host ids and extended server attributes

Is it good deployment practice to have an "admin"-level user for each tenant? In my setup, that user has the same privileges as the overall cloud administrator. Should I be modifying the policy.json files so that the tenant administrator has restricted privileges, just enough to show this information?
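
(A hypothetical policy.json fragment along those lines; the rule name is assumed from the stock nova policy.json of this era:)

    "compute_extension:extended_server_attributes": "rule:admin_api",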

2014-04-02 13:43:49 -0600 commented answer nova : host ids and extended server attributes

Fixed :). Thanks larks. As you pointed out, I had created a "member" user who had spawned all the VMs. I created an "admin" user for that tenant and I am now able to see the hypervisor_hostname and the instance_name.

2014-04-02 07:46:24 -0600 asked a question nova : host ids and extended server attributes

Hello All,

I am trying to find out the hostname of the compute host which runs a given VM. I see that nova's extended server attributes usually list the hostname, but that is not happening in my setup (Fedora 20/Icehouse). I see the following extended attributes, but not the hostname.

 | OS-DCF:diskConfig           | MANUAL                     |
 | OS-EXT-AZ:availability_zone | nova                       |
 | OS-EXT-STS:power_state      | 1                          |
 | OS-EXT-STS:task_state       | -                          |
 | OS-EXT-STS:vm_state         | active                     |
 | OS-SRV-USG:launched_at      | 2014-03-28T22:42:43.000000 |
 | OS-SRV-USG:terminated_at    | -                          |

I do see a hostId in the output with a key, but I am unable to match it to any of the "nova host*" command outputs.

  • Is there a way to enable nova to print the hostname as part of the extended server attributes?
  • Is there a way to correlate the hostId in the "nova show <server-name>" output to identify the compute host?
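
(One possible way to correlate, assuming nova derives hostId as the SHA-224 hash of the tenant id concatenated with the compute host name; treat the exact formula as an assumption to verify against the nova source for your release:)

    python -c "import hashlib; print hashlib.sha224('<tenant-id>' + '<compute-hostname>').hexdigest()"
    # compare the result against the hostId from "nova show", iterating over the hosts in "nova host-list"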
2014-02-21 00:44:11 -0600 received badge  Famous Question (source)
2013-12-01 08:44:34 -0600 received badge  Notable Question (source)
2013-12-01 07:23:36 -0600 answered a question neutron client authentication failed

Issue resolved: I enabled debug mode and found that the tenant name was null in the outgoing requests. I went back through my nova.conf file and saw that I had used neutron_admin_tenant instead of neutron_admin_tenant_name, so compute did not have a valid tenant name and could not find the endpoint. I fixed this, restarted compute, and I am now able to create instances as expected.
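
For anyone hitting the same thing, the relevant nova.conf lines:

    # neutron_admin_tenant = service      <- wrong option name; silently ignored, tenant name stays unset
    neutron_admin_tenant_name = service   # correct Havana option name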

Thanks to Yaguang for all his assistance.

2013-12-01 07:16:30 -0600 answered a question neutron client authentication failed

Just a quick addendum. The ids in the service-list and the endpoint-list do not seem to match since I had re-created the setup after I posted the original message. I verified the values in my new setup and they are matching.

I am able to get information from the neutron server (running on the cloud controller) using admin/admin/admin credentials and also neutron/neutron/service credentials using the neutron client. It fails only when nova tries to talk to the cloud controller.

2013-12-01 01:21:02 -0600 commented question neutron client authentication failed

Issue seems to be resolved now :). I had used neutron_admin_tenant instead of neutron_admin_tenant_name in the nova.conf file (dang). I used the debug feature and found that the tenant name was not set, which made me go back and look into the file again. Thanks a ton Dheeru for all your assistance.

2013-12-01 01:01:03 -0600 commented question neutron client authentication failed

My bad. The mismatch in endpoints in the two outputs is because I had re-created the setup entirely. 1. I verified that the endpoint's service id and the actual service id of neutron match completely. 2. I am able to use the neutron client directly to get the network information from the compute host (using both admin/admin/admin and neutron/neutron/service credentials) 3. I will send you an email with additional details. Thanks again for your assistance.

2013-11-30 21:12:06 -0600 answered a question neutron client authentication failed

Hello Yaguang,

Please find the information listed below. The information was retrieved from the compute host.

[root@viserion utils]# cat keystone_remote.bash
source variables.env

export OS_USERNAME=admin
export OS_PASSWORD=$ADMINPASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://$OS_CLOUD_CONTROLLER:5000/v2.0

[root@viserion utils]# keystone user-list
+----------------------------------+---------+---------+-------+
|                id                |   name  | enabled | email |
+----------------------------------+---------+---------+-------+
| e53da56b118b47b7a2effcbb84e1fd64 |  admin  |   True  |       |
| 813e6a725f074cd98b2e70821efff038 |  cinder |   True  |       |
| e41024d64afe4ef5bfe0b23e4ec921cc |   demo  |   True  |       |
| f729ec74d25e421db22b7a8230d015f2 |  glance |   True  |       |
| 8502991328d54fa78e4fed5123cb3b35 | neutron |   True  |       |
| 774b59b1240b4a82844d08b002ccb3f9 |   nova  |   True  |       |
+----------------------------------+---------+---------+-------+
[root@gwr27 viserion]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 781670306e6b4c50bc332e511aca54c4 |  admin  |   True  |
| a280078745bd482081f7d567f0a6f24f |   demo  |   True  |
| ef7607ce7f4f4e5b8b1c9533ebf9c11b | service |   True  |
+----------------------------------+---------+---------+
[root@gwr27 viserion]# keystone service-list
+----------------------------------+----------+----------+-------------+
|                id                |   name   |   type   | description |
+----------------------------------+----------+----------+-------------+
| 4efbe45a80544d6595e99c4d679cd00b |  cinder  |  volume  |  Volume V2  |
| 090a3ba5f0c54558837de983aff299b6 | cinder2  | volumev2 |  Volume V2  |
| fc5a5af784764bf9bc2100103eba0121 |   ec2    |   ec2    |     EC2     |
| 97ddca39d0d94bb9841ca7fe7914db58 |  glance  |  image   |    Image    |
| 41a1f3a32bf74fafafe65dbb588a8085 | keystone | identity |   Identity  |
| 480efc9c07084aefba05f1b61461584e | neutron  | network  |   Network   |
| 7e902d2371b8441cb2d0683ee1dc2b99 |   nova   | compute  |   Compute   |
+----------------------------------+----------+----------+-------------+

2013-11-30 15:15:51 -0600 received badge  Editor (source)
2013-11-30 09:02:23 -0600 received badge  Popular Question (source)
2013-11-27 18:19:22 -0600 answered a question neutron client authentication failed

Hello Yaguang/All,

Thanks for taking the time to respond to my request. I had originally tried with neutron_region_name set to RegionOne on just the compute, but after your suggestion, I set it on both the compute and controller nova.conf files and restarted all the servers. Unfortunately, I am still seeing the same issue on the compute side.

My settings in the nova.conf file on the compute side are as follows:

[DEFAULT]
rpc_backend = nova.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.122.61
glance_host = 192.168.122.61
my_ip = 192.168.122.133
vncserver_listen = 192.168.122.133
vncserver_proxyclient_address = 192.168.122.133
auth_strategy = keystone
auth_host = 192.168.122.61
admin_user = nova
admin_tenant_name = service
admin_password = nova
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://192.168.122.61:9696
neutron_auth_strategy = keystone
neutron_admin_tenant = service
neutron_admin_username = neutron
neutron_admin_password = neutron
neutron_admin_auth_url = http://192.168.122.61:35357/v2.0
#firewall_driver = nova.virt.firewall.NooPFirewallDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
security_group_api = neutron
compute_driver = nova.virt.libvirt.LibvirtDriver
neutron_region_name = RegionOne
libvirt_type = qemu

I wasn't sure if the firewall driver was causing the issue, so I tried using the NooPFirewallDriver, but that failed too with "'module' object has no attribute 'NooPFirewallDriver'". I have the same settings on my controller and all services are running fine.
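
(In hindsight, that AttributeError is consistent with the class being spelled NoopFirewallDriver; the line I likely needed was:)

    firewall_driver = nova.virt.firewall.NoopFirewallDriver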

Incidentally, I noticed the following set of messages in keystone.log on the controller:

2013-11-27 09:39:51.865 2456 WARNING keystone.common.wsgi [-] Could not find project, service.
2013-11-27 09:39:51.910 2456 WARNING keystone.common.wsgi [-] Could not find user, neutron.
2013-11-27 09:54:15.982 2456 WARNING keystone.common.wsgi [-] Could not find project, demo.
2013-11-27 09:54:16.011 2456 WARNING keystone.common.wsgi [-] Could not find user, demo.
2013-11-27 10:10:09.862 2456 WARNING keystone.common.wsgi [-] Could not find service, neutron.

I am guessing there is something else that I need to pay attention to. Any inputs in this regard would be very helpful.

2013-11-27 13:20:47 -0600 asked a question neutron client authentication failed

[Edit1: Added new information below based on Dheeru's request]

[Edit 2: Clarification on Dheeru's second comment]

I am running Havana/Fedora 19, and nova-compute reports a neutron authentication failure indicating that the service or the endpoint is not available. I posted a request on Launchpad and got a suggestion to include neutron_region_name in the nova.conf file. I included it on both the controller and the compute nodes and restarted all my services, but I am still running into the same error.

Since I am using RDO packages, I thought I'd post it on the RDO forum to see if anyone can point me in the right direction. A copy-paste of my question from the Launchpad site is attached below.

=========BEGIN COPY/PASTE===========

I am running Havana on a Fedora 19 system and I am running into some issues in nova-compute when booting an instance. I am getting a "neutron client authentication failed: Could not find Service or Region in Service Catalog" error in the nova-compute log file (note 1). However, the endpoints do exist when queried from the controller node (note 2). I am also able to use the neutron client to print previously created networks and all other related information (note 3), so I am not quite sure it's the endpoint that is causing issues. nova.conf appears to have the right settings to indicate where neutron is running (note 4), and the api-paste.ini file also has the right information (note 5). To verify, I ran keystone commands from the compute host and I am able to see the endpoints (note 6).

It does not appear to be the issue described in https://bugs.launchpad.net/nova/+bug/1241275 since the stack traces are different. From the looks of it, in my case, there is an attempt to get the token and it fails to locate even the endpoint. Any thoughts or suggestions in this regard are greatly appreciated.

Notes:

nova-compute log:

2013-11-26 14:02:56.921 2493 ERROR nova.network.neutronv2 [req-662e2363-cdcb-495e-96c3-d9852fcfea59 None None] Neutron client authentication failed: Could not find Service or Region in Service Catalog.
2013-11-26 14:02:56.922 2493 ERROR nova.compute.manager [req-662e2363-cdcb-495e-96c3-d9852fcfea59 None None] Instance failed network setup after 1 attempt(s)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager Traceback (most recent call last):
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1228, in _allocate_network_async
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager dhcp_options=dhcp_options)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 49, in wrapper
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager res = f(self, context, *args, **kwargs)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 358, in allocate_for_instance
2013-11-26 14 ...
(more)
2013-11-26 22:54:43 -0600 asked a question neutron client authentication failed

Hello All,

I am running Havana on a Fedora 19 system and I am running into some issues in nova-compute when booting an instance. I am getting a "neutron client authentication failed: Could not find Service or Region in Service Catalog" error in the nova-compute log file (note 1). However, the endpoints do exist when queried from the controller node (note 2). I am also able to use the neutron client to print previously created networks and all other related information (note 3), so I am not quite sure it's the endpoint that is causing issues. nova.conf appears to have the right settings to indicate where neutron is running (note 4), and the api-paste.ini file also has the right information (note 5). To verify, I ran keystone commands from the compute host and I am able to see the endpoints (note 6).

It does not appear to be the issue described in https://bugs.launchpad.net/nova/+bug/1241275 since the stack traces are different. From the looks of it, in my case, there is an attempt to get the token and it fails to locate even the endpoint. Any thoughts or suggestions in this regard are greatly appreciated.

Notes:

  1. nova-compute log

2013-11-26 14:02:56.921 2493 ERROR nova.network.neutronv2 [req-662e2363-cdcb-495e-96c3-d9852fcfea59 None None] Neutron client authentication failed: Could not find Service or Region in Service Catalog.
2013-11-26 14:02:56.922 2493 ERROR nova.compute.manager [req-662e2363-cdcb-495e-96c3-d9852fcfea59 None None] Instance failed network setup after 1 attempt(s)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager Traceback (most recent call last):
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1228, in _allocate_network_async
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     dhcp_options=dhcp_options)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/api.py", line 49, in wrapper
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     res = f(self, context, *args, **kwargs)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 358, in allocate_for_instance
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     LOG.exception(msg, port_id)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 323, in allocate_for_instance
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     port_req_body)
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 392, in _populate_neutron_extension_values
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     self._refresh_neutron_extensions_cache()
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 375, in _refresh_neutron_extensions_cache
2013-11-26 14:02:56.922 2493 TRACE nova.compute.manager     neutron = neutronv2.get_client ... (more)

2013-07-22 21:42:19 -0600 answered a question OpenStack Compute & Glance Host Information

Problem solved.

The glance_api_servers option was not inside the [DEFAULT] section but underneath the [keystone_authtoken] section. I had my qpid_hostname setting within [DEFAULT] and that was picked up as expected. I moved glance_api_servers into [DEFAULT] and things are working fine now.
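
A minimal sketch of the fix (controller IP as in the outputs below):

    [DEFAULT]
    # glance_api_servers is only honored here; under [keystone_authtoken] it is
    # silently ignored and compute falls back to the local host
    glance_api_servers = 10.10.10.175:9292
    image_service = nova.image.glance.GlanceImageService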

2013-07-22 20:50:17 -0600 answered a question OpenStack Compute & Glance Host Information

Here is some more information illustrating the problem

Cloud Controller

--> Cloud Controller IP : 10.10.10.175

[root@holmes utils]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:35:98:4d brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.175/24 brd 10.10.10.255 scope global eth0
    inet6 fe80::5054:ff:fe35:984d/64 scope link
       valid_lft forever preferred_lft forever

--> Cloud Controller's nova.conf has the same information

[root@holmes utils]# cat /etc/nova/nova.conf | grep -i glance
#Glance
glance_api_servers = 10.10.10.175:9292
image_service = nova.image.glance.GlanceImageService

--> Cloud Controller's glance endpoints are looking good

[root@holmes utils]# keystone --os-username admin --os-password admin --os-tenant-name admin --os-auth-url http://10.10.10.175:35357/v2.0 endpoint-list | grep 9292
| d4496273c7fa48d0bdf1715799ce127d | RegionOne | http://10.10.10.175:9292 | http://10.10.10.175:9292 | http://10.10.10.175:9292 | 3582ac906e894484933f3ce9e914d79a |

--> Glance works fine when using appropriate endpoints on Cloud Controller

[root@holmes utils]# glance --os-username admin --os-password admin --os-tenant-name admin --os-auth-url http://10.10.10.175:35357/v2.0 image-list
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| c765f5dd-22a3-4300-b248-22963a4b808c | Cirros | qcow2       | bare             | 13147648  | active |
| 2985ea70-43f0-4159-ad3c-313800aa9470 | Ubuntu | qcow2       | bare             | 252510208 | active |
+--------------------------------------+--------+-------------+------------------+-----------+--------+
[root@holmes utils]#

Compute Host

--> Compute host IP : 10.10.10.152

[root@adler ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:c4:25:ca brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.152/24 brd 10.10.10.255 scope global eth0
    inet6 fe80::5054:ff:fec4:25ca/64 scope link
       valid_lft forever preferred_lft forever

--> Compute host's nova.conf file has the Cloud Controller's IP for glance_image_servers

[root@adler ~]# cat /etc/nova/nova.conf | grep -i glance
#Glance
glance_api_servers = 10.10.10.175:9292
image_service = nova.image.glance.GlanceImageService

--> Compute host is able to get image information by using keystone endpoints

[root@adler ~]# nova --os-username admin --os-password admin --os-tenant-name admin --os-auth-url http://10.10.10.175:35357/v2.0 image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| c765f5dd-22a3-4300-b248-22963a4b808c | Cirros | ACTIVE |        |
| 2985ea70-43f0-4159-ad3c-313800aa9470 | Ubuntu | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
[root@adler ~]# nova --os-username admin --os-password admin --os-tenant-name admin --os-auth-url http://10.10.10.175:5000/v2.0 image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| c765f5dd-22a3-4300-b248-22963a4b808c | Cirros | ACTIVE |        |
| 2985ea70-43f0-4159-ad3c-313800aa9470 | Ubuntu | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
[root@adler ~]#

--> Compute host's start up information contains localhost's IP address for glance (from compute.log with Debug and Verbose enabled)

DEBUG nova.service [-] glance_api_servers : ['10.10.10.152:9292'] wait /usr/lib/python2.7/site-packages/nova/service.py:205

--> Compute fails to create a new instance since it is not able to find a glance host at 10.10.10.152 (from compute.log)

| fault | {u'message': u'GlanceConnectionFailed', u'code': 500, u'details': u'Connection to glance host 10.10.10.152:9292 failed: Error communicating with http://10.10.10.152:9292 [Errno 111] ECONNREFUSED |

I believe glance picks up the value from glance.py

glance_opts ... (more)

2013-07-22 03:06:30 -0600 asked a question OpenStack Compute & Glance Host Information

Hello All,

I am bringing up Nova Compute as part of a three-node setup (Cloud Controller, Network Controller and Compute host) and I find that Nova Compute picks up the local host's IP as the Glance host despite glance_api_servers clearly specifying the Cloud Controller's IP in nova.conf.

The cloud controller has keystone, cinder, glance and quantum-server running. When I run nova image-list on the compute host with keystone credentials (username, tenant name, password and auth-url), I am able to get a list of all registered images as expected.

Where does openstack-nova-compute pick up the Glance host information from? I was under the assumption that it picks it up from the nova.conf file, but that does not appear to be the case given this behavior. I enabled 'debug' and I see that the glance host is being set to the IP address of the local host where compute runs. I did notice that glance_api_servers was set to 'localhost:9292' on the Cloud Controller. Thinking that this might be the issue (RPC sends 'localhost:9292' to compute and compute uses that), I modified it and restarted my services, but I still run into the same problem.
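
A quick way to compare the configured value against what the service actually loaded (log path assumed from the RDO layout):

    grep glance_api_servers /etc/nova/nova.conf
    grep glance_api_servers /var/log/nova/compute.log   # with debug and verbose enabled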

I did _not_ run into this problem in my earlier attempts and wanted to check if this was a known issue. I could not find a similar thread, but if there is one, it would be very helpful if you could point me to it.

Environment

Fedora 18 on all three nodes
Grizzly (sourced from RDO) - openstack-nova-compute-2013.1.2-4.fc19.noarch