sauloaugustosilva's profile - activity

2019-01-14 09:57:44 -0500 received badge  Famous Question (source)
2018-05-08 08:16:15 -0500 received badge  Famous Question (source)
2018-05-07 00:44:29 -0500 received badge  Notable Question (source)
2018-05-06 06:37:44 -0500 received badge  Popular Question (source)
2018-05-04 16:52:44 -0500 received badge  Famous Question (source)
2018-05-04 11:25:20 -0500 commented question can not attach cinder volume at running instance

I am using the LVM driver with the iscsi tgt target. In the cinder log there aren't any ERROR or WARNING messages. What do you want me to share?
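For reference, a standard LVM/tgt backend section in cinder.conf looks roughly like this (a sketch based on the install guides; the section name and volume group name are just examples):

[lvm]
# example backend section; adjust names to your deployment
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm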

2018-05-03 13:21:19 -0500 asked a question can not attach cinder volume at running instance

Hi,

I have an OpenSUSE installation of OpenStack that fails with an error when attaching a volume to a running instance. I can attach the volume if the instance is shut down; after it is attached I can start the instance and use the volume. I have another OpenStack installation that does not have this problem.

Here is the error from the nova log file:

2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [req-454b4f22-69ac-40cc-977a-7b344d0cd057 9cf3c075666d4dba86a922d581b73a08 d5d778441e724c7cb065bbaf01befe5b - 8433db4810f947168950770f8c93a4f2 8433db4810f947168950770f8c93a4f2] [instance: f44b3266-f261-47a2-82cc-7843809904e9] Failed to attach volume at mountpoint: /dev/vdb: libvirtError: internal error: child reported: Unable to set ACLs on /dev/sdc: Invalid argument
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9] Traceback (most recent call last):
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1294, in attach_volume
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]     guest.attach_device(conf, persistent=True, live=live)
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 308, in attach_device
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]     rv = execute(f, *args, **kwargs)
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]     six.reraise(c, e, tb)
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]     rv = meth(*args, **kwargs)
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 587, in attachDeviceFlags
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9] libvirtError: internal error: child reported: Unable to set ACLs on /dev/sdc: Invalid argument
2018-05-03 13:51:57.962 3122 ERROR nova.virt.libvirt.driver [instance: f44b3266-f261-47a2-82cc-7843809904e9] 
2018-05-03 13:51:58.352 3122 ERROR nova.virt.block_device [req-454b4f22-69ac-40cc-977a-7b344d0cd057 9cf3c075666d4dba86a922d581b73a08 d5d778441e724c7cb065bbaf01befe5b - 8433db4810f947168950770f8c93a4f2 8433db4810f947168950770f8c93a4f2] [instance: f44b3266-f261-47a2-82cc-7843809904e9] Driver failed to attach ...
(more)
2018-03-08 01:28:59 -0500 received badge  Notable Question (source)
2018-03-07 09:20:51 -0500 received badge  Popular Question (source)
2018-03-07 06:54:16 -0500 answered a question Ceph Backend Poor Performance

I solved my problem by adding another interface and physically splitting the cluster and Ceph public network traffic. Now the VM's source host provides much higher outgoing traffic.
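Concretely, each node got an extra NIC and the split is declared in the [global] section of ceph.conf; these are the subnets used in my setup:

# [global] section of ceph.conf
public network = 10.10.18.0/24
cluster network = 10.10.19.0/24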

2018-03-07 06:52:22 -0500 answered a question Poor ceph block device performance on VM's

I solved my problem with the following configuration in the ceph.conf file:

[client]
rbd cache = true
rbd cache size = 268435456
rbd cache max dirty = 0
rbd cache target dirty = 134217728
rbd cache max dirty age = 2
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log

and

public network = 10.10.18.0/24
cluster network = 10.10.19.0/24

in the [global] section.

The most valuable change was adding a dedicated network interface to each Ceph node; my Ceph nodes now have 3 interfaces each.
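If you want to confirm that a running guest actually picked up the [client] settings, you can query the admin socket defined above (the exact .asok filename depends on the guest process):

# path/filename of the .asok is an example; use the socket created for your VM
ceph --admin-daemon /var/run/ceph/guests/<guest>.asok config show | grep rbd_cache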

2018-03-06 06:43:51 -0500 commented question vnc console partially working

Hi, is the 172.16.10. range your management network?

2018-03-05 09:24:55 -0500 asked a question Ceph Backend Poor Performance

Hi all,

I have been experiencing poor performance from Ceph since I migrated from Mitaka to Ocata. I did some testing, running bonnie++ while monitoring network utilization. I have 8 nova-compute nodes configured with the following setup:

[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.10.10.XXX
use_neutron = True
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
firewall_driver = nova.virt.firewall.NoopFirewallDriver
notification_topics = notifications,notifications_designate
notification_driver=ceilometer.compute.nova_notifier
compute_driver = libvirt.LibvirtDriver
scheduler_default_filters = AllHostsFilter
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
debug = True
verbose = True
rootwrap_config = /etc/nova/rootwrap.conf
vif_plugging_is_fatal = false
vif_plugging_timeout = 0 
allow_resize_to_same_host = True
#live_migration_uri = kvm+ssh://nova@%s/system
[api_database]
connection = mysql+pymysql://nova:PASSWORD@openstack/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[conductor]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://nova:PASSWORD@openstack/nova
[ephemeral_storage_encryption]
[glance]
api_servers = http://10.10.10.101:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://openstack:5000
auth_url = http://openstack:35357
memcached_servers = openstack:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova  
password = PASSWORD
[libvirt]
virt_type = kvm
images_rbd_pool = vms
images_type = rbd
rbd_secret_uuid = c791e4ea-59fe-4f6e-9571-84b38fa800b2
rbd_user = cinder
disk_cachemodes = "network=writeback"
hw_disk_discard = unmap
live_migration_flag= "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
images_rbd_ceph_conf = /etc/ceph/ceph.conf
inject_partition = -1
[matchmaker_redis]
[metrics]
[neutron]
url = http://openstack:9696
auth_url = http://openstack:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = PASSWORD
nova_medata_ip = openstack2
metadata_proxy_shared_secret = PASSWORD
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = openstack
rabbit_userid = openstack
rabbit_password = "PASSWORD"
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.10.10.XXX
novncproxy_base_url = https://openstack:6080/vnc_auto.html
[workarounds]
disable_libvirt_livesnapshot = False
[xenserver]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://openstack:35357/v3
username = placement
password = placementsenha
[oslo_messaging_notifications]
driver = messagingv2

and with the following ceph.conf:

[global]
fsid = 35099003-4a39-XXXc-XXXXX-e55f8670ceb2
mon initial members = ceph01,ceph02,ceph03
mon host = 10.10.18.230:6789, 10.10.18.231:6789,10.10.18.210:6789
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
filestore xattr use omap = true
public network = 10.10.18.0/24
cluster network = 10.10.19.0/24
max open files = 131072
osd pool default size = 2
osd crush chooseleaf type = 1
osd pool default min size = 2 
osd pool default pg num = 64
osd pool default pgp num = 64
mon clock drift allowed = 30
debug mon = 10

mon_osd_min_down_reporters = 10
osd_heartbeat_grace = 60
osd_mon_report_interval_min = 15
osd_mon_ack_timeout = 90

[mon]
mon data = /var/lib/ceph/mon/ceph-$id
[osd]
fstype = xfs
osd mkfs type = xfs
osd data = /var/lib/ceph/osd/ceph-$id
osd journal = /var/lib/ceph/osd/ceph-$id/journal
osd mount options xfs = rw,noatime,inode64,logbsize=256k
osd op threads = 8

[mds]
mds cache size = 250000

[client]
rbd cache = true
rbd cache size = 268435456 
rbd cache max dirty = 0
rbd cache target dirty = 134217728
rbd cache max dirty age = 2
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin ...
(more)
2018-03-05 08:12:25 -0500 commented question Poor ceph block device performance on VM's

I have been facing the same problem. I ran the test using bonnie++ while monitoring the network. My first assumption is a network limitation, because it hits 600 Mbps of link utilization (1 Gbps card) directly from the host, but less than 100 Mbps from a VM. Did you discover anything else?
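For reference, this is roughly the kind of run I mean, plus a simple way to watch the link at the same time (mount point and size are illustrative):

# size is 8192 MB and the mount point is just an example
bonnie++ -d /mnt/test -s 8192 -n 0 -u root
# watch per-interface throughput once per second
sar -n DEV 1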

2018-02-19 04:57:56 -0500 received badge  Notable Question (source)
2018-02-10 02:59:26 -0500 received badge  Popular Question (source)
2018-02-08 20:22:47 -0500 answered a question unable retrieve instance list after upgrade from newton to pike .

Problem solved: in the nova.services table there were some old service records that had not been correctly updated to Pike. They can be identified with:

echo "select * from services where version != 22" | mysql nova

After identifying the services, run

echo "delete from services where version=<version number>  | mysql nova

Do that for every version that is not 22.
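To double-check afterwards that only current records remain (22 was the Pike service version in my deployment), something like this should do:

# list remaining service records and their versions
echo "select host, binary, version from services" | mysql nova
openstack compute service list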

Problem solved

2018-01-29 16:26:53 -0500 commented question unable retrieve instance list after upgrade from newton to pike .

More about it: only the old instances and projects show the problem, so this is a resource migration issue. After creating a new project, new instances created in that specific tenant are listed correctly.

2018-01-29 14:58:36 -0500 commented question unable retrieve instance list after upgrade from newton to pike .

More about it: the only command that is not working is openstack server list, for any tenant except admin. All other commands, like nova reboot, work fine.

2018-01-29 11:03:39 -0500 asked a question unable retrieve instance list after upgrade from newton to pike .

Hi,

I performed a migration from Mitaka to Newton without implementing nova cells and that worked fine. Now I have tried to perform the migration from Newton to Pike and I get this strange error in nova-api.log.

The error only occurs when I list instances from any tenant except admin.

openstack server list 
+--------------------------------------+------+--------+-------------------------------+------------+-------------+
| ID                                   | Name | Status | Networks                      | Image      | Flavor      |
+--------------------------------------+------+--------+-------------------------------+------------+-------------+
| 6a60dc73-6c93-4c8e-b06e-472c0aa9bedd | 03   | ACTIVE | NET-10.10.12.0-22=10.10.15.0  | Centos-7   | C1.M4.HDD20 |
| 6bff0a14-8633-4c72-890f-c420051e704e | 04   | ACTIVE | NET-10.10.12.0-22=10.10.15.12 | Centos-7   | C1.M4.HDD20 |
| 17c8a11a-7498-433a-8501-ae95d66f2922 | 01   | ERROR  |                               | Debian - 8 | m1.large    |
+--------------------------------------+------+--------+-------------------------------+------------+-------------+

openstack server list  --all-projects
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
<type 'exceptions.TypeError'> (HTTP 500) (Request-ID: req-32428f8a-7d0e-47bb-b765-5865a0f01d68)

The /var/log/nova/nova-api.log shows:

2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 336, in wrapped
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 181, in wrapper
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 181, in wrapper
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 168, in detail
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions     servers = self._get_servers(req, is_detail=True)
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 311, in _get_servers
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions     sort_keys=sort_keys, sort_dirs=sort_dirs)
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 2415, in get_all
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions     sort_dirs=sort_dirs)
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 2512, in _get_instances_by_filters_all_cells
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions     **kwargs)
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/nova/compute/api.py", line 2543, in _get_instances_by_filters
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions     expected_attrs=fields, sort_keys=sort_keys, sort_dirs=sort_dirs)
2018-01-29 11:46:32.494 19540 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line ...
(more)
2017-11-20 07:49:47 -0500 received badge  Famous Question (source)
2017-11-20 07:49:47 -0500 received badge  Notable Question (source)
2017-07-06 04:09:35 -0500 received badge  Popular Question (source)
2017-07-05 21:48:46 -0500 received badge  Editor (source)
2017-07-05 20:44:27 -0500 answered a question glance and ceph snapshot not working

Sometimes snapshots work really fast, although after 3 or 4 snapshots it gets really slow again. So I started investigating all the components, and what I did was: 1- Include the following configuration (which I don't think was the cause of the fix):

hw_disk_discard = unmap 
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED" 
images_rbd_ceph_conf = /etc/ceph/ceph.conf

2- Add the following statements to the [client] section of the /etc/ceph/ceph.conf file on all compute nodes:

 admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
 log file = /var/log/qemu/qemu-guest-$pid.log
 rbd concurrent management ops = 20

There was also another important fix that I suppose helped: cleaning up some unclean PGs in my Ceph cluster. I do not know why it made a difference, because the unclean PGs were in another pool, unrelated to the vms or images pools.
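For anyone hitting the same thing, the stuck PGs can be located with the standard Ceph health commands, for example:

ceph health detail
# show details of stuck/unclean placement groups
ceph pg dump_stuck unclean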

After some tests with an 80 GB instance, the image got stuck in the saving state. Still investigating.

2017-07-05 19:39:11 -0500 received badge  Enthusiast
2017-07-01 12:52:54 -0500 received badge  Popular Question (source)
2017-07-01 12:52:54 -0500 received badge  Notable Question (source)
2017-06-30 17:45:57 -0500 asked a question glance and ceph snapshot not working

Hi,

I have set up OpenStack with Ceph as the backend. Everything is working except instance snapshots. When we use Horizon, the image stays queued and the instance stays at 'Image uploading'. When we run the CLI command

nova image-create --show --poll 251ebb41-d57e-43e9-b16a-ea395e5922f2 CLI-SNAP-01

the system gets stuck at this output:

Server snapshotting... 25% complete

But I can see that the image was created in Ceph:

# nova image-list --limit 5
+--------------------------------------+------------------+--------+--------------------------------------+
| ID                                   | Name             | Status | Server                               |
+--------------------------------------+------------------+--------+--------------------------------------+
| 7b2bd798-6237-4c2d-8de6-5c7e3b3348de | CLI-SNAP-01      | SAVING | 251ebb41-d57e-43e9-b16a-ea395e5922f2 |
| 0ef8b698-96be-44a1-b41e-abf973f05a55 | CRM                | ACTIVE | 9b96dd70-38d5-4762-9d18-679db13a4333 |
| d3f18d34-bb00-44e1-85cd-71bfae7ca24a | Centos-7         | ACTIVE |                                      |
| c62b12a0-7d9e-4bea-b9e8-4eb27113cbee | SNAP01           | SAVING | a7ecfc9a-8f60-4940-9f8e-443b71e9499d |
| 476221a7-f331-423e-82af-0b68e8a78347 | SNAP02           | SAVING | a36dee50-8c87-478e-9264-f2bc5e909876 |
+--------------------------------------+------------------+--------+--------------------------------------+


# rbd -p images ls | grep 7b2bd798-6237-4c2d-8de6-5c7e3b3348de
7b2bd798-6237-4c2d-8de6-5c7e3b3348de
# rbd info images/7b2bd798-6237-4c2d-8de6-5c7e3b3348de 
  rbd image '7b2bd798-6237-4c2d-8de6-5c7e3b3348de':
        size 20480 MB in 5120 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.e6fe6576d1f0
        format: 2
        features: layering, striping
        flags:
        stripe unit: 4096 kB
        stripe count: 1

I can create and upload images at really good speed with no problems. I also tested with QCOW2 and RAW images, with the same result. Some snapshots only finished after 5 days.

glance-api.conf file

[DEFAULT]
debug = True
verbose = True
log_dir = /var/log/glance
notification_driver = messaging
lock_path = /var/run/glance
show_multiple_locations = True
show_image_direct_url = True
enable_v2_api=True
enable_v1_api=True
enable_v2_registry = True
[database]
connection = mysql+pymysql://glance:XXXXXXX@host/glance
[image_format]
[keystone_authtoken]
auth_uri = http://host:5000
auth_url = http://host:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = XXXXXXX
memcached_servers = host:11211
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host = host
rabbit_userid = openstack
rabbit_password = XXXXXX
[oslo_messaging_notifications]
driver = messagingv2
[oslo_policy]
[paste_deploy]
flavor = keystone
[store_type_location_strategy]
[task]
[taskflow_executor]
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

and this is my nova.conf file

[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = Y.Y.Y.Y
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
notification_topics = notifications,notifications_designate
compute_driver = libvirt.LibvirtDriver
notification_driver=ceilometer.compute.nova_notifier
allow_resize_to_same_host = True
scheduler_default_filters = AllHostsFilter
debug = True
verbose = True
[api_database]
connection = mysql+pymysql://nova:0p3nM4ST3R@host/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[conductor]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://nova:0p3nM4ST3R@host/nova
[ephemeral_storage_encryption]
[glance]
api_servers = http://host:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://host:5000
auth_url = http://host:35357
memcached_servers = host:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = XxXXXXXXX
[libvirt]
virt_type = kvm
images_rbd_pool = vms
images_type = rbd
rbd_secret_uuid = c791e4ea-59fe-4f6e-9571-84b38fa800b2
rbd_user = cinder
disk_cachemodes = "network=writeback"
snapshots_directory = /var/lib/nova/snapshots
[matchmaker_redis]
[metrics]
[neutron]
url = http://host:9696
auth_url = http://host:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = XXXXXXX
service_metadata_proxy = True
metadata_proxy_shared_secret = XXXXXXX
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = openstack
rabbit_userid = openstack
rabbit_password = XXXXXXXX
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = Y.Y.Y.Y
novncproxy_base_url = https://host.com:6080/vnc_auto.html
[workarounds]
[xenserver]
2016-08-24 18:56:34 -0500 received badge  Famous Question (source)
2016-08-03 14:00:44 -0500 received badge  Notable Question (source)
2016-08-03 07:42:18 -0500 received badge  Student (source)
2016-08-02 17:14:26 -0500 asked a question Opensuse openstack-nova-compute package

Hi all,

I run a fully operational CentOS 7 Mitaka OpenStack environment and would like to add an openSUSE-based compute node to it. I got an error when I tried to connect the openSUSE compute server. I figured out that CentOS 7 is running nova version 13.0.0 while the openSUSE packages are at version 13.1.1. Could that be the problem? Is there any repository from which I could install the 13.0.0 nova-compute packages?

Thanks in advance,

Saulo Augusto Silva sauloaugustosilva@gmail.com

2016-07-18 23:46:17 -0500 received badge  Popular Question (source)
2016-06-16 21:56:06 -0500 asked a question Is cloudkitty ready for mitaka ?

I tried to set up CloudKitty on Mitaka following the guide. I am running CentOS 7 as the server distro, and the CloudKitty service runs on a different server than the controller. After installing and configuring it, I tried to run cloudkitty-api and got this error:

ERROR cloudkitty MissingAuthPlugin: An auth plugin is required to determine endpoint URL

Any help regarding this?

Following is my cloudkitty.conf:

[DEFAULT]
verbose = True
log_dir = /var/log/cloudkitty
auth_strategy = keystone
rpc_backend = rabbit

[database]
connection = mysql+pymysql://cloudkitty:PASSWORD@DBSERVER/cloudkitty

[auth]
username = cloudkitty
password = "CLOUDPASSWORD"
tenant = service
region = RegionOne
url = http://CONTROLLER:5000/v2.0

[keystone_authtoken]
auth_uri = http://CONTROLLER:5000
auth_url = http://CONTROLLER:35357
memcached_servers = CONTROLLER:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cloudkitty
password = "CLOUDPASSWORD"

[service_credentials]
os_auth_url = http://CONTROLLER:5000/v2.0
os_username = cloudkitty
os_tenant_name = service
os_password = "CLOUDPASSWORD"
interface = internalURL
region_name = RegionOne

[keystone_fetcher]
username = admin
password = "ADMPASSWORD"
tenant = admin
region = RegionOne
url = http://CONTROLLER:5000/v2.0

[oslo_messaging_rabbit]
rabbit_host = oscon02cv
rabbit_userid = openstack
rabbit_password = cloudMQs3nh@

[ceilometer_collector]
username = cloudkitty
password = "CLOUDPASSWORD"
tenant = service
region = RegionOne
auth_url = http://CONTROLLER:5000/v2.0