shorton's profile - activity

2017-06-03 06:00:39 -0500 received badge  Famous Question (source)
2017-05-23 06:00:40 -0500 received badge  Famous Question (source)
2017-03-16 17:59:22 -0500 received badge  Famous Question (source)
2016-11-02 06:30:21 -0500 received badge  Notable Question (source)
2016-11-01 10:21:02 -0500 received badge  Popular Question (source)
2016-10-20 14:20:54 -0500 asked a question Nova backup mechanism - pause needed?

Hello, I want to use nova backup or snapshot to create backups of some VM instances without having to shut down or reboot them. I believe I should be able to pause an instance, run nova backup, and then resume it, but does Nova already do this in the background when I run nova backup? Can anyone tell me how Nova currently handles this in Mitaka? I want to make sure that no corruption happens. Thank you!
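
For reference, this is the sequence I have in mind, sketched with the nova CLI (the instance name, backup name, and rotation count are placeholders):

    # Pause the instance first (assumption: pause is a sufficient
    # quiesce to get a consistent image)
    nova pause my-instance
    # Create a daily backup image, keeping the last 7 rotations
    nova backup my-instance my-instance-backup daily 7
    # Note the CLI counterpart of pause is unpause (resume pairs with suspend)
    nova unpause my-instance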

2016-10-12 15:56:56 -0500 received badge  Notable Question (source)
2016-10-12 06:54:42 -0500 received badge  Famous Question (source)
2016-10-11 19:31:36 -0500 received badge  Popular Question (source)
2016-10-11 17:45:54 -0500 answered a question nova scheduler unfairly favoring host - filters and weights

Hi Rick, see the file contents above in the update.

2016-10-11 14:36:14 -0500 asked a question nova scheduler unfairly favoring host - filters and weights

I have a 3-node OpenStack cluster (Mitaka on Ubuntu 16.04) where my controller node also serves as compute host #1. My other 2 hosts are compute #2 and compute #3. I am using Ceph (Jewel) distributed cluster pools to back Nova ephemeral storage, Cinder, Glance, and Manila, and all is working well. I have launched a number of large and small VMs, and 80% of them (18) were provisioned on host #1; hosts #2 and #3 have 4 VMs each. I originally suspected Ceph was the cause: when I look at the hypervisor summary under System, host #1's local storage total shows the combined storage of all 3 hosts (42TB), while hosts #2 and #3 only show their own local (real physical) storage (14TB).

To remedy this, I added the following to /etc/nova/nova.conf (the default filter list with 'DiskFilter' removed): scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, and I restarted all of the Nova services on the controller. However, when I provision new VMs, they are still being placed on host #1. Does anyone have any guidance on what I need to do to better balance the allocation?
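
For clarity, here is how the setting sits in my nova.conf; the commented ram_weight_multiplier line is not something I have set, I am only noting it as the weight knob that may be relevant (in Mitaka, hosts are weighed by free RAM and the multiplier defaults to 1.0):

    [DEFAULT]
    # default filter list with DiskFilter removed
    scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
    # ram_weight_multiplier = 1.0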

Note that my 3 hosts are identical: 32 CPUs, 256GB RAM, 14TB RAID disk. Thank you!
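
Side note: to compare what the scheduler sees on each host, I have been checking with the openstack CLI (host names as in my cluster):

    openstack hypervisor list
    openstack hypervisor show arccloud02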

Update: Host1:


ceph -s
    cluster 6e647506-631a-457e-a52a-f21a3866a023
     health HEALTH_OK
     monmap e1: 3 mons at {arccloud01=10.155.92.128:6789/0,arccloud02=10.155.92.129:6789/0,arccloud03=10.155.92.130:6789/0}
            election epoch 5152, quorum 0,1,2 arccloud01,arccloud02,arccloud03
      fsmap e1858: 1/1/1 up {0=arccloud01=up:active}
     osdmap e1532: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v1982739: 384 pgs, 6 pools, 661 GB data, 2135 kobjects
            2529 GB used, 39654 GB / 42184 GB avail
                 384 active+clean
  client io 807 kB/s wr, 0 op/s rd, 301 op/s wr


cat /etc/ceph/ceph.conf
[global]
  fsid = 6e647506-631a-457e-a52a-f21a3866a023
  mon_initial_members = arccloud01, arccloud02, arccloud03
  mon_host = 10.155.92.128,10.155.92.129,10.155.92.130
  mon_pg_warn_max_per_osd = 400
  mon_lease = 50
  mon_lease_renew_interval = 30
  mon_lease_ack_timeout = 100
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
  public_network = 10.155.92.0/22
  cluster_network = 192.168.92.0/22
[client.glanceimages]
  keyring = /etc/ceph/ceph.client.glanceimages.keyring
[client.novapool]
  keyring = /etc/ceph/ceph.client.novapool.keyring
[client.cindervolumes]
  keyring = /etc/ceph/ceph.client.cindervolumes.keyring
[client.manila]
  client_mount_uid = 0
  client_mount_gid = 0
  log_file = /opt/stack/logs/ceph-client.manila.log
  admin_socket = /opt/stack/status/stack/ceph-$name.$pid.asok
  keyring = /etc/ceph/ceph.client.manila.keyring
[mon.arccloud01]
  host = arccloud01
  mon addr = 10.155.92.128:6789
[mon.arccloud02]
  host = arccloud02
  mon addr = 10.155.92.129:6789
[mon.arccloud03]
  host = arccloud03
  mon addr = 10.155.92.130:6789
[osd.2]
  host = arccloud01
  public addr = 10.155.92.128
  cluster addr = 192.168.92.128
[osd.1]
  host = arccloud02
  public addr = 10.155.92.129
  cluster addr = 192.168.92.129
[osd.0]
  host = arccloud03
  public addr = 10.155.92.130
  cluster addr = 192.168 ...
(more)
2016-10-05 15:55:22 -0500 received badge  Notable Question (source)
2016-09-29 05:58:34 -0500 received badge  Famous Question (source)
2016-09-28 10:21:52 -0500 answered a question horizon ui plugins for manila and magnum

UPDATE 09.28.16: RESOLVED. The answer is in the Horizon developer docs on installing your plugin: http://docs.openstack.org/developer/horizon/tutorials/plugin.html#installing-your-plugin

In Ubuntu, the steps required to install Horizon plug-ins are (sketched as commands below):

  1. Rename the old plugin if it exists in /usr/lib/python2.7/dist-packages/ (manila_ui, for example).

  2. git clone the new version, then copy the inner source directory, such as manila-ui/manila_ui, to /usr/lib/python2.7/dist-packages/.

  3. From inside the source directory, put the enabled/enabler files into place: cp /usr/lib/python2.7/dist-packages/manila_ui/enabled/*.py to /usr/share/openstack-dashboard/openstack_dashboard/enabled.

  4. Restart apache2.
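As a concrete sketch of those steps for manila-ui (the .bak name and the clone location are my own placeholders):

    # 1. Move the old plugin out of the way
    sudo mv /usr/lib/python2.7/dist-packages/manila_ui /usr/lib/python2.7/dist-packages/manila_ui.bak
    # 2. Clone the new version and copy the inner source directory into dist-packages
    git clone https://github.com/openstack/manila-ui.git
    sudo cp -R manila-ui/manila_ui /usr/lib/python2.7/dist-packages/
    # 3. Put the enabled/enabler files into place
    sudo cp /usr/lib/python2.7/dist-packages/manila_ui/enabled/*.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/
    # 4. Restart the dashboard
    sudo service apache2 restart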

2016-09-28 10:07:44 -0500 received badge  Notable Question (source)
2016-09-22 17:33:42 -0500 answered a question Horizon TemplateDoesNotExist usage.html

I have the following files. Can anyone point me at what is missing?

/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/overview/templates/overview/usage.html

/usr/share/openstack-dashboard/openstack_dashboard/dashboards/identity/projects/templates/projects/usage.html

/usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/overview/templates/overview/usage.html

UPDATE 092816: The resolution to this problem is here: https://review.openstack.org/gitweb?p=openstack/manila-ui.git;a=commitdiff;h=ffe2f85b308a5b07645fd2c61d173e5ed3c1d075

I manually patched in the change to prevent manila-ui from adding its own template, and it resolved this problem on Ubuntu 16.04 with Mitaka. I only applied the changes at the end of the __init__.py file; no other changes from the patch were necessary.

2016-09-15 09:04:50 -0500 answered a question ceph cinder attach volume to running instance

UPDATE: my issue is resolved; it was caused by insufficient cephx auth caps for the cinder and nova Ceph clients. For others who might encounter this problem, here is my ceph auth list:

arcuser@arccloud01:~$ sudo ceph auth list
installed auth entries:

mds.arccloud01
        key: xxx
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
osd.0
        key: xxx
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: xxx
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: xxx
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: xxx
        caps: [mds] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: xxx
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: xxx
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
        key: xxx
        caps: [mon] allow profile bootstrap-rgw
client.cephadmin
        key: xxx
        caps: [mds] allow rw path=/volumes/_nogroup/1cd9bc07-e883-4e90-8e88-6ee46b4f04de
        caps: [mon] allow r
        caps: [osd] allow rw pool=cephfs_data namespace=fsvolumens_1cd9bc07-e883-4e90-8e88-6ee46b4f04de
client.cindervolumes
        key: xxx
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cindervolumes, allow rwx pool=glanceimages, allow rwx pool=novapool, allow rwx pool=manila
client.glanceimages
        key: xxx
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=glanceimages
client.manila
        key: xxx
        caps: [mds] allow *
        caps: [mon] allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
        caps: [osd] allow rw
client.novapool
        key: xxx
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cindervolumes, allow rwx pool=glanceimages, allow rwx pool=novapool, allow rwx pool=manila
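
If it helps anyone, caps like the ones above can be applied with ceph auth caps; a sketch for the cinder client (the pool names are from my deployment and will differ in yours):

    sudo ceph auth caps client.cindervolumes mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=cindervolumes, allow rwx pool=glanceimages, allow rwx pool=novapool, allow rwx pool=manila'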

Hello jsm, I have a very similar issue and just saw your post... I was about to post this:

Hello, I have a working Mitaka deployment on Ubuntu 16.04. I am using Ceph RBD to back Nova ephemeral storage, Cinder volumes, and Glance images. Neutron is configured for a provider network using the Linux bridge agent. Everything is working correctly, except that I am unable to attach a Cinder volume to a Nova instance. I have googled several similar bugs, but all of them are old and give inconclusive solutions. Attempting a nova volume-attach returns a response, but the logs show an internal error and the attach fails. The error I am getting:

libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk1'
I should add that this only occurs on a running instance; on a shut-down instance it works fine. Is this the same as your issue? Do I have something misconfigured? Perhaps we can help each other... From nova-compute.log:

ERROR nova.virt.block_device [req-5c342ce8-fc35-4245-8bff-5731859257c3 996a74a0b9de4473b64cf1d94ff38c9f 47a164e5de59452987ee2fc215169e49 - - -] [instance: fd3620de-6c48-4019-a0c6-d6bcc084f095] Driver failed to attach volume 009579fd-52b7-46e3-8a51-c09bef28852d at /dev/vdb
 Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 279, in attach
     device_type=self['device_type'], encryption=encryption)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1359, in attach_volume
     self._disconnect_volume(connection_info, disk_dev)
   File "/usr/local/lib/python2 ...
(more)
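For context, the attach that triggers this is roughly the following (the instance name is a placeholder; the volume ID is the one from the log above):

    nova volume-attach my-instance 009579fd-52b7-46e3-8a51-c09bef28852d /dev/vdb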
2016-09-12 16:11:16 -0500 received badge  Popular Question (source)
2016-09-12 09:12:42 -0500 received badge  Notable Question (source)
2016-09-09 09:47:54 -0500 received badge  Popular Question (source)
2016-09-08 13:27:56 -0500 asked a question Horizon TemplateDoesNotExist usage.html

I have Mitaka running on Ubuntu 16.04 with a few optional components added. I recently updated packages from Launchpad, including Horizon, and now I get a TemplateDoesNotExist exception on usage.html when I click on Project->Compute->Overview in Horizon. Is anyone else seeing this? I can see several usage.html files in the Horizon tree, so I am not sure what is missing. Thanks!
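
In case it is useful, this is how I am enumerating the candidate templates (plain find, nothing Horizon-specific):

    find /usr/share/openstack-dashboard -name usage.html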

 Internal Server Error: /horizon/project/
 Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 164, in get_response
     response = response.render()
   File "/usr/lib/python2.7/dist-packages/django/template/response.py", line 158, in render
     self.content = self.rendered_content
   File "/usr/lib/python2.7/dist-packages/django/template/response.py", line 133,in rendered_content
     template = self._resolve_template(self.template_name)
   File "/usr/lib/python2.7/dist-packages/django/template/response.py", line 88,in _resolve_template
     new_template = self.resolve_template(template)
   File "/usr/lib/python2.7/dist-packages/django/template/response.py", line 80, in resolve_template
     return loader.get_template(template, using=self.using)
   File "/usr/lib/python2.7/dist-packages/django/template/loader.py", line 46, in get_template
     raise TemplateDoesNotExist(template_name)
 TemplateDoesNotExist: usage.html
2016-09-08 10:20:58 -0500 received badge  Editor (source)
2016-09-08 00:25:49 -0500 received badge  Popular Question (source)
2016-09-07 13:51:40 -0500 answered a question Glance on Mitaka - change endpoint for better performance

Thanks bcollins, but I don't have osapi_volume_listen in nova.conf.

I tried modifying glance-registry.conf and glance-api.conf, setting bind_host=10.1.10.2 in each, and restarted the glance-registry, glance-api, and nova services. However, glance image-list gave an error (could not find the endpoint). So then I went into MySQL (use keystone; select * from endpoint;) and modified the public, internal, and admin endpoints for glance to be http://10.1.10.2:9292 instead of http://controller:9292 (192.168.2.2).

After that, glance image-list worked, but the Horizon UI gave lots of Oops errors, so I got scared and put it all back. Anyone else have any ideas?

Ok, I think I have it working now. Thanks bcollins for the suggestions; they helped. I am on Mitaka on Ubuntu 16.04.

  1. Edit /etc/cinder/cinder.conf: change glance_api_servers = http://controller:9292 to glance_api_servers = http://10GBE_IP:9292

  2. Edit /etc/nova/nova.conf: change api_servers = http://controller:9292 to api_servers = http://10GBE_IP:9292

  3. Update the three glance endpoints in the Keystone database:

     sudo mysql -u root -p
     use keystone;
     select * from endpoint;
     -- look for the 3 endpoints with port 9292; these are the admin,
     -- internal, and public endpoints for glance; change all 3
     update endpoint set url="http://10GBE_IP:9292" where id="<uuid1>";
     update endpoint set url="http://10GBE_IP:9292" where id="<uuid2>";
     update endpoint set url="http://10GBE_IP:9292" where id="<uuid3>";
     quit;

  4. I rebooted, but I guess you could just restart nova, cinder, and apache2 (for Horizon). After boot-up, I verified in Horizon->Admin->System Information that the glance service was listed on my new 10GBE_IP address instead of the 'controller' IP, which was on the original 1GbE interface. I checked the /var/log/ files for errors; so far, so good. I launched a VM and used 'netstat | grep 9292' on the controller node to verify that the VM image transfer was using the 10GbE.

  5. Your compute nodes need updates 1 and 2 above to nova.conf and cinder.conf as well.

thanks, shorton

2016-09-06 15:32:40 -0500 asked a question Glance on Mitaka - change endpoint for better performance

Hello, I have an existing, working Mitaka deployment on servers with 1GbE Ethernet (external network). I have added an additional 10GbE Ethernet network plane on each of my hosts for application traffic (internal network). Per the Mitaka/Glance install documentation for Ubuntu, Glance is using the 1GbE external network. Can someone give me a detailed list of what needs to be changed to move Glance file transfers (REST operations?) to the 10GbE internal network? Which of the endpoints do I need to change: public, admin, internal? I also think I need to update nova.conf. What else? Thank you.
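
For anyone wanting a read-only look at where their glance endpoints currently point before changing anything (assuming the openstack CLI is configured):

    openstack endpoint list | grep 9292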

2016-08-30 12:19:49 -0500 received badge  Enthusiast
2016-08-22 10:02:58 -0500 asked a question horizon ui plugins for manila and magnum

I need to install OpenStack Horizon plugins for Magnum and Manila. magnum-ui is not available on Launchpad; manila-ui is available, but the package is broken and a patch fix is available in source (version 2.1.0-2). Both project websites give instructions for setting up OpenStack virtual development environments, cloning from Git, and building the components from source, which I have done. However, I am unclear on how to deploy these projects into my existing OpenStack Mitaka environment once I have built them. Can someone please clarify what my options are for getting these plugins to work?

So, to clarify my question:

I followed the instructions for building the magnum-ui component for OpenStack Horizon here: https://github.com/openstack/magnum-ui

My question is, do I then perform:

  1. ~/magnum-ui$ sudo cp -R magnum-ui /usr/share/openstack-dashboard/openstack_dashboard/dashboards

  2. ~/magnum-ui$ sudo ln -s /usr/share/openstack-dashboard/openstack_dashboard/dashboards/magnum-ui/magnum_ui/static/dashboard /usr/share/openstack-dashboard/static/magnum-ui

  3. ~/magnum-ui/magnum-ui/magnum_ui/enabled$ sudo cp *.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

  4. sudo service apache2 restart; sudo service memcached restart

NOTE: this procedure does not work; apache gives an error, so there must be missing steps. I cannot find any guidance on how to integrate these OpenStack Horizon dashboards into my running deployment.

Thank you!