
Napo Mokoetle's profile - activity

2015-06-30 05:17:58 -0500 answered a question ERROR: openstack The resource could not be found. (HTTP 404) openstack user list

I experienced the same error while trying to list hypervisors. The odd thing is that this configuration all worked fine before my controller crashed! It turned out I was authenticating against the Keystone v2.0 endpoint.

root@ts036953:/home/pssuser# openstack hypervisor list
ERROR: openstack The resource could not be found. (HTTP 404) (Request-ID: req-05f42a1d-d556-43fd-9de0-64f6dfa57342)

Then I changed the Keystone version in the admin_auth_url parameter in the [neutron] section of nova.conf on the controller node FROM:

[neutron]
url = http://1y6.1z.145.1x0:9696
auth_strategy = keystone
admin_auth_url = http://1y6.1z.145.1x0:35357/v2.0

TO:

[neutron]
url = http://1y6.1z.145.1x0:9696
auth_strategy = keystone
admin_auth_url = http://1y6.1z.145.1x0:35357/v3

All worked well thereafter: I could now list hypervisors. Hope that helps...
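
For completeness, the client-side analogue of the same v2-to-v3 move, sketched with the standard python-openstackclient environment variables (the URL follows my masked addressing; adjust to your controller):

export OS_AUTH_URL=http://1y6.1z.145.1x0:35357/v3
export OS_IDENTITY_API_VERSION=3
openstack hypervisor list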

2015-06-22 16:04:13 -0500 answered a question instack-install-undercloud fails with glance: error: unrecognized arguments: --public

Make sure you source the admin environment after adding the Image API version 2 setting ( export OS_IMAGE_API_VERSION=2 ) to your environment script, e.g.

root@ts036953:/home/pssuser# source admin-openrc.sh
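
For reference, a minimal sketch of what admin-openrc.sh might contain on Kilo (all values are placeholders; the last line is the addition in question):

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IMAGE_API_VERSION=2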

2015-06-22 12:59:19 -0500 answered a question Kilo MySQL mysqld_openstack.cnf on Ubuntu

The order in which configuration files are read is specified in /etc/mysql/my.cnf on Ubuntu 15.04:

root@ts03695:/home/pssuser# cat /etc/mysql/my.cnf

# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.

# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
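
Since conf.d/*.cnf is read after the global defaults, the OpenStack overrides from the Kilo install guide can simply live in a drop-in file there; a minimal sketch, with the bind-address as a placeholder for your management IP:

# /etc/mysql/conf.d/mysqld_openstack.cnf
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8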

2015-05-20 04:43:54 -0500 answered a question nova image-list:ERROR: Unauthorized (HTTP 401)

A restart of the nova services on the controller did the trick for me when I hit the error discussed above during my upgrade to Kilo.
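
On Ubuntu that amounts to something like the following (a sketch; the exact service names depend on which nova services your controller runs):

service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart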

2014-05-25 16:12:23 -0500 received badge  Famous Question (source)
2014-05-12 12:56:57 -0500 received badge  Notable Question (source)
2014-05-12 01:22:30 -0500 received badge  Popular Question (source)
2014-05-09 04:57:24 -0500 commented question ssh to a VM causes kernel panic on Icehouse Neutron host

No, the neutron-l3-agent is only running on the server hosting Neutron. Is it supposed to run on the controller too?
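
(For what it's worth, a quick way to check which agents are registered and on which hosts, assuming the Icehouse neutron CLI is installed on the controller:)

root@ts036945:/home/pssuser# neutron agent-list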

The server hosting Neutron has two NICs: eth0 is used for the Management and External networks, and eth1 for the internal/data network. Below is the ifconfig -a output from the Neutron host...

root@ts036945:/home/pssuser# ifconfig -a
br-ex     Link encap:Ethernet  HWaddr 00:22:64:9f:04:a2
          inet addr:196.13.145.184  Bcast:196.13.145.255  Mask:255.255.255.0
          inet6 addr: fe80::3827:54ff:fe48:c1f/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:12725 errors:0 dropped:1333 overruns:0 frame:0
          TX packets:963 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:7266031 (7.2 MB)  TX bytes:159325 (159.3 KB)

br-int    Link encap:Ethernet  HWaddr d6:ea:51:d4:dc:45
          inet6 addr ... (more)

2014-05-08 15:12:08 -0500 commented question ssh to a VM causes kernel panic on Icehouse Neutron host

Hi darragh-oreilly,

Thanks for your response. I set ovs_use_veth=True on the Controller, Neutron and the 3 Compute Nodes, and also renamed /usr/bin/neutron-netns-cleanup, but to no avail.
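
For reference, the setting went into the [DEFAULT] section of both agent configs (standard Ubuntu paths; adjust if yours differ):

# /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini
[DEFAULT]
ovs_use_veth = True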

The "Kernel Panic" still occurs when I SSH to a VM from the Controller, or from a VM to the Controller even after the proposed changes to ovs_use_veth and the renaming of /usr/bin/neutron-netms-cleanup. At least I have a work-around for that part.

Some further discoveries I made earlier: the "Kernel Panic" occurs only when I SSH to a VM from the Controller host, or from the VM to the Controller host. When I attempt to SSH to the VMs from my laptop, Neutron doesn't crash! I wonder what's going on there and how I can get to the bottom of it?

Moreover, I can successfully SSH into a cirros instance from my laptop using key files. When I ... (more)

2014-05-08 11:58:42 -0500 asked a question ssh to a VM causes kernel panic on Icehouse Neutron host

Hi Everyone,

I'm running OpenStack Icehouse on Ubuntu Trusty in a lab. The deployment consists of Keystone, Glance, Neutron and 3 Compute Nodes, all running on HP ProLiant hardware (DL360 G5, BIOS P58 08/03/2008). Icehouse components installed: Controller, Keystone, Glance, Neutron, Compute. Keystone and Glance share one host, Neutron is on its own host, and the 3 Compute Nodes are each on their own host.

Everything seems fine after the installation: I can instantiate VMs and they get an internal IP without a problem, and I'm able to associate external IPs to the VMs. I can ping or telnet successfully to and from the VMs, and SSH from one VM instance to another works just fine.

The problem starts when I try to SSH from an external machine to an OpenStack-hosted VM, or from an OpenStack VM to an external machine: the Neutron server gets a kernel panic. I've tried all sorts of things that are getting me nowhere fast, including upgrading the kernel. Has anyone dealt successfully with a similar problem, or does anyone have an idea I can try out to resolve it? I've pasted the syslog output from Neutron at the time of the kernel crash below.

Start of syslog trace ============================
ig-file=/etc/neutron/dhcp_agent.ini >/dev/null 2>&1; fi)
May  8 18:00:01 ts036945 CRON[3449]: (neutron) CMD (if [ -x /usr/bin/neutron-netns-cleanup ] ; then /usr/bin/neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini >/dev/null 2>&1; fi)
May  8 18:02:07 ts036945 kernel: [55501.391556] ------------[ cut here ]------------
May  8 18:02:07 ts036945 kernel: [55501.391643] kernel BUG at /build/buildd/linux-3.13.0/net/core/skbuff.c:2903!
May  8 18:02:07 ts036945 kernel: [55501.391755] invalid opcode: 0000 [#1] SMP
May  8 18:02:07 ts036945 kernel: [55501.391828] Modules linked in: xt_nat xt_conntrack xt_REDIRECT xt_tcpudp ip6table_filter ip6_tables iptable_filter iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack ip_tables x_tables openvswitch gre vxlan ip_tunnel libcrc32c radeon ttm drm_kms_helper drm gpio_ich serio_raw lpc_ich hpwdt i2c_algo_bit coretemp kvm_intel kvm hpilo i5000_edac edac_core i5k_amb ipmi_si shpchp mac_hid lp parport hpsa hid_generic usbhid hid bnx2 cciss
May  8 18:02:07 ts036945 kernel: [55501.393060] CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.13.0-24-generic #47-Ubuntu
May  8 18:02:07 ts036945 kernel: [55501.393175] Hardware name: HP ProLiant DL360 G5, BIOS P58 08/03/2008
May  8 18:02:07 ts036945 kernel: [55501.393277] task: ffff8802245cc7d0 ti: ffff8802245d4000 task.ti: ffff8802245d4000
May  8 18:02:07 ts036945 kernel: [55501.393389] RIP: 0010:[<ffffffff8160e9ba>]  [<ffffffff8160e9ba>] skb_segment+0x95a/0x980
May  8 18:02:07 ts036945 kernel: [55501.393531] RSP: 0018:ffff88022fac34f8  EFLAGS: 00010206
May  8 18:02:07 ts036945 kernel: [55501.393618] RAX: 0000000000000000 RBX: ffff880221bdaa00 RCX: ffff8800cae7b4f0
May  8 18:02:07 ts036945 kernel: [55501.393715] RDX: 0000000000000050 RSI: ffff8800cae7b400 RDI: ffff8800cae7ae00
May  8 18:02:07 ts036945 kernel: [55501.393814] RBP: ffff88022fac35c0 R08: 0000000000000042 R09: 0000000000000000
May  8 18:02 ...
(more)
2014-05-08 10:54:09 -0500 received badge  Enthusiast
2014-05-06 11:22:26 -0500 commented question Error! Bad return status for module build on kernel: 3.13.0-24-generic (x86_64)

I also have this problem. I've removed the package as suggested above.

apt-get remove openvswitch-datapath-dkms

But the errors still show when running "apt-get dist-upgrade", even with the openvswitch-datapath-dkms package removed. I'm able to instantiate VMs on my Icehouse deployment, but once they're running, SSH with keypairs hangs after the connection to the VM is established! I wonder if this behavior has anything to do with the error discussed in this thread.

Please help! Thanks.

2014-02-07 10:30:17 -0500 commented answer nova attach-volume - AttributeError: display_name

Thanks dtaylor, your advice worked. The Volumes link in the dashboard no longer throws an error when I click on it, because I now have only a v1 endpoint for the cinder service. But now my problem is that the volume-attach command I had executed when I started experiencing the problem is stuck in the "attaching" state and doesn't change state even when I bounce the OpenStack processes. When I try deleting the volume I was trying to attach, it tells me I have no permission to do so since the volume is not in the "available" state. Any idea what I can do to reset this operation successfully?
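
(One thing worth trying, assuming your python-cinderclient carries the admin-only reset-state command: it forces the status column in the cinder database back to a known value, so use it with care. VOLUME_ID is a placeholder:)

cinder reset-state --state available VOLUME_ID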

2014-02-06 13:17:41 -0500 answered a question Cinder volume creating error

I was experiencing the same error. After a week I realized that the error message has host=controller in it, which I found puzzling because my Nova controller uses a different hostname. I searched all the configuration files under /etc to see if any of them define "controller" as a hostname, but found none. Nonetheless, to work around the problem I simply added another IP/hostname entry in /etc/hosts for the hostname 'controller', and the problem immediately disappeared. Hope that helps.
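
For example, the workaround entry in /etc/hosts can look like this (substitute the management IP your services actually listen on):

196.13.145.180   controller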

=========================
[Thu Feb 06 21:55:16.481916 2014] [:error] [pid 30060:tid 140292521580288]     raise exceptions.ConnectionError(msg)
[Thu Feb 06 21:55:16.481923 2014] [:error] [pid 30060:tid 140292521580288] ConnectionError: Unable to establish connection: HTTPConnectionPool(host='controller', port=8776): Max retries exceeded with url: /v1/6eece3e2a2bf4a36a719e2e61adf1f6d/volumes/detail?all_tenants=True (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)
=========================

Can someone please explain to me which configuration file determines the host='controller' and port=8776 values reflected in the extract from error.log above? It seems to me these values just flip at will, with varying service-denial messages. Without my changing anything, the message all of a sudden appears as follows when I click on items within the Horizon web interface...

========================
[Thu Feb 06 22:01:22.267529 2014] [:error] [pid 30060:tid 140292563543808]     raise ConnectionError(e)
[Thu Feb 06 22:01:22.267535 2014] [:error] [pid 30060:tid 140292563543808] ConnectionError: HTTPConnectionPool(host='196.13.145.180', port=8774): Max retries exceeded with url: /v2/6eece3e2a2bf4a36a719e2e61adf1f6d/servers/detail?limit=21&all_tenants=True (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
=======================

2014-02-05 03:42:10 -0500 commented question nova attach-volume - AttributeError: display_name

Hi dtaylor, I'm also experiencing this problem with nova attach-volume, and when I look in nova-api.log on the controller I see an error like this: "TRACE nova.api.openstack AttributeError: display_name". After you removed the v2 endpoint for Cinder, what did you replace it with to make volume-attach work?

2014-02-05 03:40:35 -0500 answered a question nova attach-volume - AttributeError: display_name

Hi dtaylor, I'm also experiencing this problem with nova attach-volume, and when I look in nova-api.log on the controller I see an error like this:

TRACE nova.api.openstack AttributeError: display_name

After you removed the v2 endpoint for Cinder, what did you replace it with to make volume-attach work?
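
(To check which cinder endpoints the catalog currently holds, the keystone client of that era can list them; a sketch, filtering on the cinder API port:)

keystone endpoint-list | grep 8776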