2020-07-21 04:04:28 -0500
| received badge | ● Notable Question
|
2020-07-21 04:04:28 -0500
| received badge | ● Famous Question
|
2020-07-21 04:04:28 -0500
| received badge | ● Popular Question
|
2018-10-17 16:26:48 -0500
| marked best answer | How to determine my "auth url"? Sorry for being such a noob, but how do I determine what I should use as my auth url? I ask because when I execute ... $ nova list
... I get this error message ... ERROR (CommandError): You must provide an auth url via either --os-auth-url or env[OS_AUTH_URL] or specify an auth_system which defines a default url with --os-auth-system or env[OS_AUTH_SYSTEM]
I am using devstack as I am a total noob. Thanks |
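A minimal sketch of the usual fix on a devstack box (host name and credentials below are illustrative placeholders, not values from the question): export the standard OS_* variables before calling nova, e.g.
$ export OS_AUTH_URL=http://devstack-host:5000/v2.0   # Keystone public endpoint on the devstack machine
$ export OS_USERNAME=admin
$ export OS_PASSWORD=secret
$ export OS_TENANT_NAME=admin
$ nova list   # should now authenticate instead of raising CommandError
Devstack also generates an openrc file that sets these variables; sourcing it (e.g. ". openrc admin admin" from the devstack directory) is equivalent.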
2017-04-19 00:25:38 -0500
| received badge | ● Famous Question
|
2017-04-19 00:25:38 -0500
| received badge | ● Notable Question
|
2017-04-10 20:56:56 -0500
| received badge | ● Famous Question
|
2017-04-10 20:56:56 -0500
| received badge | ● Notable Question
|
2017-03-05 10:40:10 -0500
| received badge | ● Famous Question
|
2017-03-05 10:40:10 -0500
| received badge | ● Notable Question
|
2016-12-02 12:11:19 -0500
| commented answer | Mitaka nova-manage api_db sync 'error: No sql_connection parameter is established' Looks like nova_api is a new DB schema. I deployed Liberty via the OpenStack community's Puppet modules. I wanted to see what would happen if I simply upgraded those Puppet modules to their Mitaka versions. I was hoping everything would work just fine :) LOL! |
2016-12-02 10:36:56 -0500
| received badge | ● Popular Question
|
2016-12-01 21:02:12 -0500
| asked a question | Mitaka nova-manage api_db sync 'error: No sql_connection parameter is established' I am trying to set up a Mitaka OpenStack cloud. But when I try to execute: # /usr/bin/nova-manage --debug api_db sync
I then get the error message: error: No sql_connection parameter is established
Yet I am able to access the nova database via the mysql command line, using the same values I have in /etc/nova/nova.conf: [database]
connection=mysql://nova:nova@svl-os:3306/nova
|
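A hedged note on the likely cause: Mitaka splits the API tables into a separate nova_api schema, and nova-manage api_db sync reads its connection string from the [api_database] section rather than [database]. A minimal sketch of the missing nova.conf stanza (user, password, host and database name are illustrative):
[api_database]
# separate schema introduced in Mitaka for the nova_api tables
connection=mysql://nova_api:nova_api@svl-os:3306/nova_api
The nova_api database has to be created and granted first, just as the original nova database was.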
2016-11-18 07:45:45 -0500
| received badge | ● Famous Question
|
2016-09-14 11:23:19 -0500
| received badge | ● Famous Question
|
2016-09-14 11:18:17 -0500
| received badge | ● Popular Question
|
2016-09-14 11:18:17 -0500
| received badge | ● Notable Question
|
2016-08-05 11:40:00 -0500
| marked best answer | keystone-admin-vip/1: SSL handshake failure Hi, I am having a heck of a time troubleshooting a problem I am having with my glance and cinder services on my controller nodes. When I execute ... # source openrc
# openstack-status
... I get this output ... == Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active (disabled on boot)
== neutron services ==
neutron-server: inactive (disabled on boot)
neutron-dhcp-agent: inactive (disabled on boot)
neutron-l3-agent: inactive (disabled on boot)
neutron-metadata-agent: inactive (disabled on boot)
neutron-lbaas-agent: inactive (disabled on boot)
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: active
openstack-cinder-backup: active
== Support services ==
mysqld: inactive (disabled on boot)
dbus: active
target: inactive (disabled on boot)
memcached: active
== Keystone users ==
+----------------------------------+------------+---------+--------------------+
| id | name | enabled | email |
+----------------------------------+------------+---------+--------------------+
| bx055dx4eb3640x38cx667c6eef82e8d | admin | True | keystone@example.com |
| x7e6f0981e8b4431x74149e0421c3c5e | ceilometer | True | keystone@example.com |
| c519fed3cd1443a18de0b006eab4xb7e | cinder | True | keystone@example.com |
| 4bxd6c54cfe849148e8c1614415x664d | glance | True | keystone@example.com |
| 2dx96c5164074d99916b59b7b6be9658 | heat | True | keystone@example.com |
| 265d50582fdc4x5ex4160040f1e598ab | neutron | True | keystone@example.com |
| 4ccb96df211141d795207eexe78fe55b | nova | True | keystone@example.com |
+----------------------------------+------------+---------+--------------------+
== Glance images ==
Unable to establish connection to http://my-ost-rhel7.example.com:35357/v2.0/tokens
... and on my haproxy server's /var/log/messages I see this ... ... keystone-admin-vip/1: SSL handshake failure
Also, I have noted the http in the response Unable to establish connection to http://my-ost-rhel7.example.com:35357/v2.0/tokens but I have thoroughly checked my config files on the haproxy and on the controller nodes, and we are using https anywhere a protocol is asked for. But who knows, maybe I missed one. I have even done this ... # curl -k https://my-ost-rhel7.example.com:35357/v2.0 ; echo
{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}, {"base": "application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links": [{"href": "http://my-ost-rhel7.example.com:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}
... note the links bit of the json comes back with http and not https. Does anyone know what would cause the keystone-admin-vip/1: SSL handshake failure error?
I have googled and asked co-workers and nobody knows what is causing this. Update: Here's the output (shortened for readability) of keystone endpoint-list: ----------------------------------------------------+-----------------------------------------------------+-----------------------------------------------------+
publicurl | internalurl | adminurl |
----------------------------------------------------+-----------------------------------------------------+-----------------------------------------------------+
https://my-ost-rhel7.example.com:9292 | https://my-ost-rhel7.example.com:9292 | https://my-ost-rhel7.example.com:9292
https://my-ost-rhel7.example.com:8777 | https://my-ost-rhel7.example.com:8777 | https://my-ost-rhel7.example.com:8777
http://my-ost-rhel7.example.com:8004/v1/%(tenant_id)s | http://my-ost-rhel7.example.com:8004/v1/%(tenant_id)s | http://my-ost-rhel7.example.com:8004/v1/%(tenant_id)s
https://my-ost-cloud-rhel7.example.com/swift/v1 | https://my-ost-cloud-rhel7.example.com/swift/v1 | https://my-ost-cloud-rhel7.example.com/swift/v1
https://my-ost-rhel7.example.com:8774/v2/$(tenant_id)s | https://my-ost-rhel7.example.com:8774/v2/$(tenant_id)s | https://my-ost-rhel7.example.com:8774/v2/$(tenant_id)s
https://my-ost-rhel7.example.com:5000/v2.0 | https://my-ost-rhel7.example.com:5000/v2.0 | https://my-ost-rhel7.example.com:35357/v2.0
https://my-ost-rhel7.example.com:9696 | https://my-ost-rhel7.example.com:9696 | https://my-ost-rhel7.example.com:9696
https://my-ost-rhel7.example.com:8776/v1/$(tenant_id)s | https://my-ost-rhel7.example.com:8776/v1/$(tenant_id)s | https://my-ost-rhel7.example.com:8776/v1/$(tenant_id)s
|
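A hedged way to localize this class of failure: probe the TLS layer of the VIP and of one backend directly with openssl s_client. If the port answers with a certificate chain it really serves TLS; an abrupt disconnect or empty reply suggests haproxy is doing an SSL health check against a plain-HTTP listener, which would log exactly this handshake failure. Host and port are taken from the question; the commands are a generic sketch:
# openssl s_client -connect my-ost-rhel7.example.com:35357 </dev/null
# curl -vk https://my-ost-rhel7.example.com:35357/v2.0   # -v shows the handshake step by step
Note also that the heat endpoint (port 8004) in the endpoint-list above is still http while everything else is https, which is worth double-checking.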
2016-06-13 16:15:42 -0500
| received badge | ● Famous Question
|
2016-06-07 16:55:01 -0500
| asked a question | Where to find list of deprecated settings for nova? Here is the scheduler_default_filters setting I am using in my /etc/nova/nova.conf. I think some of these settings are obsolete/deprecated in Juno ... scheduler_default_filters = AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter,DifferentHostFilter,SameHostFilter,GroupAntiAffinityFilter,AggregateCoreFilter,AggregateRamFilter,AggregateTypeAffinityFilter
... but I am not sure where to find the definitive listing of current and deprecated nova settings. |
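A hedged pointer rather than a definitive answer: each release's generated sample config annotates options that have been deprecated, so grepping it is a quick way to audit an existing nova.conf (the path assumes a distro package that ships the sample; adjust for your install):
# grep -B2 -i 'deprecated' /usr/share/nova/nova.conf.sample | less
The per-release OpenStack Configuration Reference on docs.openstack.org also includes tables of new, updated and deprecated options for nova.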
2016-05-13 12:47:18 -0500
| asked a question | High load on instance preventing live migration (Juno) Hi, I have run into situations where live migrations never seem to complete, or they error out. Here is how I have been able to reproduce the problem. Here is the instance I am migrating: [root@osc1-mgmt-001 tmp]# nova show gb72-net-002-org-001
+--------------------------------------+---------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+---------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | osc1-net-002.example.com |
| OS-EXT-SRV-ATTR:hypervisor_hostname | osc1-net-002.example.com |
| OS-EXT-SRV-ATTR:instance_name | gb72-net-002-org-001 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | migrating |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-05-12T20:01:23.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2016-05-12T20:00:58Z |
| flavor | gb72_vm (668ca3b4-a7c0-4309-a11e-4fb5377e4180) |
| hostId | 44206a2390a038b0ede2a4375f1239b0cef917149bd5976fcada6781 |
| id | 3b176ee2-fcf3-41a6-b658-361ffd19639e |
| image | CentOS-7-x86_64-GenericCloud (588e035d-2e1e-4720-94c4-8b000bf9d2ef) |
| key_name | nk |
| metadata | {} |
| name | gb72-net-002-org-001 |
| os-extended-volumes:volumes_attached | [{"id": "16afe52c-31b0-4a3a-b718-aa1789df2852"}] |
| public-47 network | 10.29.105.13 |
| security_groups | default |
| status | MIGRATING |
| tenant_id | 9d011b7c8d104af1b887e229cee436d2 |
| updated | 2016-05-13T17:07:48Z |
| user_id | fa8b956c89304124967bb4bcea54124b |
+--------------------------------------+---------------------------------------------------------------------+
The flavor gb72_vm is one I created and looks like this: [root@osc1-mgmt-001 tmp]# nova flavor-show gb72_vm
+----------------------------+--------------------------------------+
| Property | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 20 |
| extra_specs | {} |
| id | 668ca3b4-a7c0-4309-a11e-4fb5377e4180 |
| name | gb72_vm |
| os-flavor-access:is_public | True |
| ram | 72000 |
| rxtx_factor | 1.0 |
| swap | 16000 |
| vcpus | 8 |
+----------------------------+--------------------------------------+
After I launched the instance I installed stress and I am running stress on the instance like so: [centos@gb72-net-002-org-001 stress-1.0.4]$ stress -c 6 -m 4 --vm-bytes 512M
I am also running top on the instance and this is what that looks like: top - 17:17:02 up 21:15, 1 user, load average: 10.11, 10.08, 10.06
Tasks: 149 total, 12 running, 137 sleeping, 0 stopped, 0 zombie
%Cpu(s): 62.0 us, 38.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 72323392 total, 70503632 free, 1344768 used, 474988 buff/cache
KiB Swap: 16383996 total, 16383996 free, 0 used. 70740048 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
10273 centos 20 0 7260 96 0 R 86.7 0.0 1008:21 stress
10276 centos 20 0 7260 96 0 R 84.7 0.0 1008:22 stress
10271 centos 20 0 7260 96 0 R 84.1 0.0 1008:00 stress
10275 centos 20 0 7260 96 0 R 82.1 0.0 1009:28 stress
10270 centos 20 0 531552 218716 176 R 80.7 0.3 1011:42 stress
10272 centos 20 0 531552 142940 176 R 80.4 0.2 1012:40 stress
10269 centos 20 0 7260 96 0 R 78.7 0.0 1008:38 stress
10274 centos 20 0 531552 333404 176 R 73.1 0.5 1012:32 stress
10267 centos 20 0 7260 96 0 R 70.4 0.0 1008:41 stress
10268 centos 20 0 531552 38452 176 R 65.8 0.1 1011:29 stress
1 root 20 0 191352 6652 3908 S 0.0 0.0 0:06.00 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:01.45 ksoftirqd/0
5 root ... (more) |
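A hedged explanation of why this reproduces so reliably: qemu's default pre-copy live migration keeps re-sending memory pages that the guest dirties during the transfer, so a guest running stress can dirty pages faster than the network moves them and the migration never converges. On Juno-era nova one commonly used knob is adding VIR_MIGRATE_AUTO_CONVERGE to the libvirt migration flags so qemu throttles the guest's vCPUs until the copy can finish. A sketch of the nova.conf change on each compute node (verify the rest of the flag list against your deployment's defaults before copying it):
[libvirt]
# AUTO_CONVERGE slows busy guests during migration so pre-copy can complete
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_AUTO_CONVERGE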
2016-05-06 11:48:33 -0500
| received badge | ● Popular Question
|
2016-04-28 14:46:06 -0500
| commented answer | Where can I find good documentation on how to use Affinity/Anti-Affinity groups? |
2016-04-28 13:43:29 -0500
| received badge | ● Notable Question
|
2016-04-27 11:53:59 -0500
| asked a question | Where can I find good documentation on how to use Affinity/Anti-Affinity groups? Where can I find good documentation on how to use Affinity/Anti-Affinity groups? If I go to http://docs.openstack.org/ and search for "Affinity" I get this list of search results: and others. None of these tell me how to actually set up Affinity rules for my VMs. Maybe the answer is right in front of my face but I don't see it. This happens to me in the grocery store often ;) |
2016-04-27 10:29:41 -0500
| received badge | ● Popular Question
|
2016-04-26 18:11:17 -0500
| asked a question | Understanding Affinity and Anti-Affinity Hi, I am new to OpenStack and virtualization in general. I believe that if 2 VMs have "affinity" they will reside on the same compute node/hypervisor, and if they have "anti-affinity" they will reside on different compute nodes. With that in mind I did these tests with some cirros images. [root@os-mgmt-001 ~]# nova server-group-create gr-anti anti-affinity
+--------------------------------------+---------+--------------------+---------+----------+
| Id | Name | Policies | Members | Metadata |
+--------------------------------------+---------+--------------------+---------+----------+
| cec8e205-1a70-4a80-b28e-0b643c0d824b | gr-anti | [u'anti-affinity'] | [] | {} |
+--------------------------------------+---------+--------------------+---------+----------+
[root@os-mgmt-001 ~]# nova server-group-list
+--------------------------------------+---------+--------------------+---------+----------+
| Id | Name | Policies | Members | Metadata |
+--------------------------------------+---------+--------------------+---------+----------+
| cec8e205-1a70-4a80-b28e-0b643c0d824b | gr-anti | [u'anti-affinity'] | [] | {} |
+--------------------------------------+---------+--------------------+---------+----------+
[root@os-mgmt-001 ~]# nova server-group-get cec8e205-1a70-4a80-b28e-0b643c0d824b
+--------------------------------------+---------+--------------------+---------+----------+
| Id | Name | Policies | Members | Metadata |
+--------------------------------------+---------+--------------------+---------+----------+
| cec8e205-1a70-4a80-b28e-0b643c0d824b | gr-anti | [u'anti-affinity'] | [] | {} |
+--------------------------------------+---------+--------------------+---------+----------+
Then I booted a couple of cirros images like so: nova boot --hint group=cec8e205-1a70-4a80-b28e-0b643c0d824b --nic net-id=$net_id --image $image --flavor 2 test-005
nova boot --hint group=cec8e205-1a70-4a80-b28e-0b643c0d824b --nic net-id=$net_id --image $image --flavor 2 test-500 They end up in the group as I can demonstrate: [root@ost-mgmt-001 ~]# nova server-group-get cec8e205-1a70-4a80-b28e-0b643c0d824b
+--------------------------------------+---------+--------------------+------------------------------------------------------------------------------------+----------+
| Id | Name | Policies | Members | Metadata |
+--------------------------------------+---------+--------------------+------------------------------------------------------------------------------------+----------+
| cec8e205-1a70-4a80-b28e-0b643c0d824b | gr-anti | [u'anti-affinity'] | [u'6ab61386-658e-4bad-9b81-a595e0aeef47', u'0fa01909-77ab-4d9f-b675-3511c5449dad'] | {} |
+--------------------------------------+---------+--------------------+------------------------------------------------------------------------------------+----------+
Then I live migrate the cirros instances to the same compute node like so: nova live-migration test-005 ost-neut-002
nova live-migration test-500 ost-neut-002
And both instances stay on the same compute node ... [root@ost-mgmt-001 ~]# nova show test-005 | grep 002
| OS-EXT-SRV-ATTR:host | ost-neut-002 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | ost-neut-002.example.com |
[root@ost-mgmt-001 ~]# nova show test-500 | grep 002
| OS-EXT-SRV-ATTR:host | ost-neut-002 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | ost-neut-002.example.com |
... I was expecting that one of the cirros instances would have to move off of ost-neut-002. |
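A hedged reading of this result: the anti-affinity policy is enforced by the scheduler's ServerGroupAntiAffinityFilter at placement time, and passing an explicit destination host to nova live-migration bypasses the scheduler on releases of this vintage, so the policy is never re-checked. Letting the scheduler choose the destination keeps the filter in play (assuming ServerGroupAntiAffinityFilter is in scheduler_default_filters):
# omit the target host so the scheduler, and therefore the anti-affinity filter, picks it
nova live-migration test-500
Even then the guarantee is really a placement-time one; an operator who forces a host can always violate the group policy.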
2016-04-21 15:35:59 -0500
| asked a question | nova quota-update tenant? Hi, I would like to increase the ram in my quota. I currently have this: # nova quota-show
+-----------------------------+-------+
| Quota | Limit |
+-----------------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| floating_ips | 10 |
| fixed_ips | -1 |
| metadata_items | 128 |
| injected_files | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes | 255 |
| key_pairs | 100 |
| security_groups | 10 |
| security_group_rules | 20 |
| server_groups | 10 |
| server_group_members | 10 |
+-----------------------------+-------+
I would like to increase the ram to 64000, but the nova quota-update command takes a "tenant" as a mandatory argument and I do not know how to list my tenants. Thanks |
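A minimal sketch of the usual two steps (the ram value is from the question; everything else is generic): list the tenants with the identity client, then pass the tenant id to quota-update:
# keystone tenant-list        # or, with the unified client: openstack project list
# nova quota-update --ram 64000 <tenant_id_from_the_list>
Both commands assume the usual OS_* admin credentials are exported in the environment.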
2016-02-29 04:56:32 -0500
| received badge | ● Famous Question
|
2016-02-08 01:25:23 -0500
| received badge | ● Famous Question
|
2016-01-10 11:31:03 -0500
| received badge | ● Famous Question
|
2015-12-09 10:15:30 -0500
| received badge | ● Notable Question
|
2015-12-06 21:50:16 -0500
| received badge | ● Famous Question
|
2015-12-04 12:47:05 -0500
| commented answer | Keystone port 5000 used by Python Would echo "manual" > /etc/init/keystone.override work on centos7 or RHEL7, which use systemd instead of the good old /etc/init.d? |
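A hedged answer to the comment: no, /etc/init/*.override is an upstart mechanism; the closest systemd analogue on RHEL/CentOS 7 is masking the unit (unit name as packaged by RDO):
# systemctl stop openstack-keystone
# systemctl mask openstack-keystone   # links the unit to /dev/null so nothing can start it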
2015-12-03 21:01:13 -0500
| received badge | ● Popular Question
|
2015-12-03 16:09:39 -0500
| asked a question | Why is /usr/bin/openstack domain list ... hanging? Hi, I am trying to set up a controller node using OpenStack (Kilo) Puppet modules.
I have some centos7 VMs I am testing on. I am running PE 3.7* and I have installed the openstack/keystone Puppet module on my puppet master. Here is my puppet code: class wrapcontroller(
... big long list of parameters ...
) {
file { $ldaps_cert_filename:
ensure => 'present',
content => $ldaps_cert_content,
}
exec { '/bin/yum -y install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm':
unless => '/bin/rpm -q epel-release',
}
exec { '/bin/yum -y install http://rdo.fedorapeople.org/openstack-kilo/rdo-release-kilo.rpm':
unless => '/bin/rpm -q rdo-release',
}
# need sysfsutils for live migration
package { 'sysfsutils':
ensure => 'present',
}
class {'::memcached':
user => 'memcached',
max_connections => '8192',
max_memory => '4096',
tcp_port => '11211',
udp_port => '11211',
listen_ip => '0.0.0.0',
manage_firewall => true,
processorcount => '32',
}
package { 'mariadb':
ensure => present,
}
exec { 'keystone_database_init':
command => '/usr/bin/keystone-manage db_sync',
onlyif => [
"/usr/bin/mysql -u$keystone_db_user -p$keystone_db_pass -h$db_host -P$db_port $keystone_db_name -e 'show tables'",
"/usr/bin/test -z \"`/usr/bin/mysql -u$keystone_db_user -p$keystone_db_pass -h$db_host -P$db_port $keystone_db_name -e 'show tables'`\""
],
require => Package['mariadb'],
}
class {'::keystone':
debug => $debug,
verbose => $debug,
admin_token => $keystone_auth_token,
database_connection => "mysql://${keystone_db_user}:${keystone_db_pass}@${db_host}:${db_port}/${keystone_db_name}",
token_driver => 'keystone.token.persistence.backends.memcache_pool.Token',
token_provider => 'keystone.token.providers.uuid.Provider',
memcache_servers => split( "${memcached_servers}", ','),
rabbit_hosts => split( "${rabbit_hosts}", ','),
rabbit_password => $rabbit_password,
rabbit_userid => $rabbit_userid,
admin_endpoint => "https://${controller_vip_name}:35357/",
public_endpoint => "https://${controller_vip_name}:5000/",
enable_ssl => true,
}
class { 'keystone::ldap':
url => $ldapurl,
user => $keystone_ldap_user,
password => $keystone_ldap_pass,
suffix => $ldapsuffix,
query_scope => 'one',
user_tree_dn => $user_tree_dn,
user_objectclass => 'person',
user_enabled_attribute => 'userAccountControl',
user_enabled_mask => 2,
user_enabled_default => 512,
page_size => 500,
use_tls => 'True',
user_id_attribute => 'cn',
user_name_attribute => 'cn',
tls_cacertfile => $ldaps_cert_filename,
tls_req_cert => 'never',
user_allow_create => 'False',
user_allow_update => 'False',
user_allow_delete => 'False',
group_allow_create => 'False',
group_allow_update => 'False',
group_allow_delete => 'False',
}
notify { 'line 196 doing class keystone role admin':} ->
class { '::keystone::roles::admin':
admin => $keystone_admin_user,
email => $keystone_admin_email,
password => $keystone_admin_password,
} ->
notify { 'line 202 did class keystone role admin':}
class { '::keystone::endpoint':
public_url => "http://${controller_vip_name}:5000/",
admin_url => "http://${controller_vip_name}:35357/",
}
keystone_user { $nova_admin_user:
ensure => present,
email => $keystone_admin_email,
tenant => $admin_tenant_name,
enabled => true,
password => $nova_admin_password,
}
}
Here is the output from puppet agent -t : # puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: NetworkManager is not running.
Info: Caching catalog for ost-mgmt-centos-001
Warning: The tenant parameter is deprecated and will be removed in the future. Please use keystone_user_role to assign a user to a project.
Warning: The ignore_default_tenant parameter is deprecated and will be removed in the future.
Warning: The tenant parameter is deprecated and will be removed in the future. Please use keystone_user_role to assign a user to a project.
Info: Applying configuration version '1449179610'
Notice: line 196 doing class keystone role admin
Notice: /Stage[main]/Wrapcontroller/Notify[line 196 doing class keystone role admin]/message: defined 'message' as 'line 196 doing class keystone role admin'
This is where puppet agent -t appears to hang for 10 minutes. The output from puppet agent -t continues with ... Error: Could not prefetch keystone_tenant provider 'openstack': undefined method `each' for nil:NilClass
While puppet is running on my controller node I see this in my ps -ef --forest output. root 5689 ... (more) |
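One hedged way to separate a puppet problem from a keystone problem is to run, by hand, roughly the lookup the provider performs, using token auth (the token and VIP mirror the manifest's admin_token and admin_endpoint; the exact flags depend on the python-openstackclient version installed):
# /usr/bin/openstack --os-token <keystone_auth_token> --os-url https://<controller_vip_name>:35357/v3 --os-identity-api-version 3 domain list
If this also hangs, the problem is reaching the admin endpoint (haproxy, SSL, firewall) rather than puppet; if it returns promptly, suspect the environment the provider runs under.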
2015-12-01 04:50:01 -0500
| received badge | ● Good Question
|
2015-12-01 04:49:57 -0500
| marked best answer | glance image-list gets HTTP 500 error Hi, I am wondering why I cannot get glance to work. When I do glance --debug image-list I get this output: [root@ost-mgmt-rhel7-001 ~]# glance --debug image-list
curl -i -X GET -H 'User-Agent: python-glanceclient' -H 'Content-Type: application/octet-stream' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'X-Auth-Token: {SHA1}xxxxxxxxxxxx' --cert None --key None https://ost-el7.example.com:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20
Request returned failure status 500.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line 602, in main
args.func(client, args)
File "/usr/lib/python2.7/site-packages/glanceclient/v1/shell.py", line 107, in do_image_list
utils.print_list(images, columns)
File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", line 121, in print_list
for o in objs:
File "/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 196, in paginate
images, resp = self._list(url, "images")
File "/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 62, in _list
resp, body = self.client.get(url)
File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 253, in get
return self._request('GET', url, **kwargs)
File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 221, in _request
raise exc.from_response(resp, resp.content)
HTTPInternalServerError: HTTPInternalServerError (HTTP 500)
HTTPInternalServerError (HTTP 500)
I see --cert None --key None in the debug output so I was thinking that I need to set these values: ca_file = None (StrOpt) CA certificate file to use to verify connecting clients.
cert_file = None (StrOpt) Certificate file to use when starting API server securely.
key_file = None (StrOpt) Private key file to use when starting API server securely. But I have no idea what to set them to. Here's an update:
When I execute the curl that is described in the debug output ... curl -i -X GET \
-H 'User-Agent: python-glanceclient' \
-H 'Content-Type: application/octet-stream' \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Accept: */*' \
-H 'X-Auth-Token: {SHA1}f61557e61a6f594ab6613c79a1e092b78fe46835' \
--cert None \
--key None \
'https://ost-el7.example.com:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20' ; echo
... I get this output ... HTTP/1.1 401 Unauthorized
Content-Length: 358
Content-Type: text/html; charset=UTF-8
Www-Authenticate: Keystone uri='https://ost-el7.example.com:5000/v2.0'
Date: Mon, 02 Mar 2015 20:35:05 GMT
<html>
<head>
<title>401 Unauthorized</title>
</head>
<body>
<h1>401 Unauthorized</h1>
This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.<br /><br />
</body>
</html>
Update: Thanks Haneef, I do not think glance is working. Here's how I checked. [root@ost-mgmt-rhel7-001 ~]# cat test_glance.sh
#!/bin/sh
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=xxxxxxxxxxxx
export OS_AUTH_URL=https://ost-el7.example.com:35357/v2.0
token=`keystone token-get | grep " id " | cut -d \| -f3 | sed -e's/ //g'`
#glance --debug image-list
echo "token is [$token]"
curl -ki -X GET \
-H 'User-Agent: python-glanceclient' \
-H 'Content-Type: application/octet-stream' \
-H 'Accept-Encoding ... (more) |
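A hedged observation on the debug line above: --cert None --key None means the literal string None is being passed as the certificate and key file names, which the underlying HTTP stack can mishandle. A low-risk experiment is to skip client certificates and server verification entirely with the client's insecure mode and compare the result:
# glance --insecure image-list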
2015-11-24 16:18:10 -0500
| asked a question | OpenStack Keystone Puppet module not working. (Juno) I am trying to use the OpenStack community puppet modules. Here's the keystone module I am using: https://github.com/openstack/puppet-k...
I am using the stable juno branch. I have in my puppet manifest for my controller nodes this resource definition: class { '::keystone::roles::admin':
admin => $keystone_admin_user,
email => $keystone_admin_email,
password => $keystone_admin_password,
} ->
And when puppet runs that code I get this error: Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[keystone_admin_user@openstack]: Could not evaluate: Execution of '/usr/bin/openstack domain show --format shell' returned 2: usage: openstack domain show [-h] [-f {shell,table,value}] [-c COLUMN]
[--max-width <integer>] [--prefix PREFIX]
<domain>
openstack domain show: error: too few arguments
Can anyone help me? Are these Puppet modules still being supported? Does anyone use them? Thanks! |
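A hedged reading of the error: the provider runs openstack domain show --format shell with no <domain> argument, and recent python-openstackclient releases reject that, so this usually means the installed client is a version the stable/juno module was never tested against. Checking the client version is the quick first step:
# /usr/bin/openstack --version
Pinning python-openstackclient to the release contemporary with Juno, or moving to the matching stable branch of the puppet modules, are the usual remedies.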
2015-11-17 09:13:52 -0500
| received badge | ● Famous Question
|