
animaletdesequia's profile - activity

2018-01-18 01:50:51 -0500 received badge  Notable Question (source)
2018-01-18 01:42:30 -0500 received badge  Student (source)
2017-06-19 08:44:30 -0500 received badge  Famous Question (source)
2017-05-16 04:48:26 -0500 received badge  Notable Question (source)
2017-05-09 07:34:18 -0500 received badge  Popular Question (source)
2017-05-02 03:41:54 -0500 asked a question Install Manila plugin on OpenStack Mitaka

We have a working environment with OpenStack Mitaka, installed using Mirantis Fuel 9.2, upgraded from a Fuel 9.0 base installation.

So far we've managed to have a stable environment with multiple controller and compute nodes. We also added some extra features with manual installs (FWaaS, VPNaaS, LBaaSv1).

Months ago we installed the Manila plugin for our Fuel and managed to deploy our OpenStack environment successfully; everything was working fine, including the Shared File Systems component. But since this is a testing environment, when we tried to replicate the process it failed with the message:

task manila-meta failed

And the whole environment deployment fails.

I've traced the error to one of the puppet files defining that task, and found out it is a simple script that creates the "default_share_type" and the manila flavour. Since the flavour can be created manually (with the nova CLI), I believe the process stops because the manila service fails to start during deployment, so the rest of the actions fail.
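
For reference, this is roughly what the failing task does, recreated by hand (the flavour name, ID and sizes below are placeholders I chose, not necessarily what the plugin uses):

    source openrc
    # create the share type the puppet script expects
    manila type-create default_share_type True
    # create a service flavour for manila (name/ID/RAM/disk/vCPU values are examples)
    nova flavor-create manila-service-flavor 100 256 0 1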

The environment is defined with exactly the same parameters as some months ago, which makes me think that some packages, in the default Ubuntu repositories or in the Mirantis one, have been updated, making the Manila plugin incompatible.

I've tried an installation from scratch with Fuel 9.0 (not upgraded to 9.2) and the problem persists, so the problem probably resides in some of the Ubuntu packages.

The plugin RPM can be found here:

https://www.mirantis.com/software/mos...

And the source code for it:

https://github.com/openstack/fuel-plu...

Installing the plugin from the RPM or compiling it from source doesn't seem to make any difference.

Although the documentation refers to Fuel 9.0 and 9.1, we did deploy an environment successfully from Fuel 9.2, but a few weeks later we couldn't replicate the results with any of the three versions, so apparently the plugin is "broken".

I'd like to ask whether there's a workaround for this, or whether there's another way to manually install all the components needed for the Manila service in an environment deployed without the Fuel plugin. I've tried to follow the official guide found here with no success:

https://docs.openstack.org/mitaka/ins...
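
For completeness, the manual route from that guide boils down to something like this (a rough sketch only; "controller" and the endpoint URL are placeholders):

    # on the controllers
    apt-get install manila-api manila-scheduler python-manilaclient
    # on the node that will run the share service
    apt-get install manila-share python-pymysql
    # register the service and a public endpoint in keystone
    openstack service create --name manila --description "OpenStack Shared File Systems" share
    openstack endpoint create --region RegionOne share public http://controller:8786/v1/%\(tenant_id\)s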

Thanks in advance.

2017-02-28 09:40:54 -0500 answered a question [Openstack][vpn as a service] issue

We had the same problem and ended up using openswan instead of strongswan.

Install openswan which will replace strongswan (it'll uninstall neutron-vpn-agent too):

apt-get install openswan

Reinstall neutron-vpn-agent:

apt-get install neutron-vpn-agent

Configure neutron to use the OpenSwan service provider and plugin driver.
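
Roughly, the settings involved look like this (exact file names and driver paths can differ between releases, so treat this as a sketch):

    # /etc/neutron/neutron.conf (or neutron_vpnaas.conf, depending on the release)
    [service_providers]
    service_provider = VPN:openswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

    # /etc/neutron/vpn_agent.ini
    [vpnagent]
    vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.ipsec.OpenSwanDriver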

Restart services:

service neutron-vpn-agent restart
service neutron-server restart

Check both provider and agents are running:

source openrc

neutron agent-list

neutron service-provider-list
2016-08-30 06:55:34 -0500 received badge  Popular Question (source)
2016-08-07 08:19:14 -0500 received badge  Famous Question (source)
2016-07-24 01:13:29 -0500 received badge  Famous Question (source)
2016-07-06 10:05:47 -0500 received badge  Famous Question (source)
2016-06-19 18:26:30 -0500 received badge  Notable Question (source)
2016-06-19 18:26:30 -0500 received badge  Popular Question (source)
2016-06-06 10:27:37 -0500 asked a question How to install designate and trove on OpenStack Kilo (Ubuntu 14.04) deployed with Mirantis Fuel 7.0

We've set up a basic environment using Mirantis Fuel with 3 controller/storage nodes and 3 computes. We used Fuel to install the base system and then added some extra components like the FWaaS plugin for neutron.

The problem we're facing now is that we can't add some other components like designate (DNSaaS) or trove (DBaaS). Whether I follow the official guides from GitHub (trying to build from the Python sources) or add some extra repos (ubuntu-cloud-archive), I always run into dependency problems.

I think the problem is that some of the packages installed through Fuel are not native Ubuntu packages but ones re-compiled and modified by Mirantis. For example, if I list some neutron packages:

dpkg -l | grep neutron
ii  neutron-common                      1:2015.1.1-1~u14.04+mos5355            all          Neutron is a virtual network service for Openstack - common
ii  neutron-dhcp-agent                  1:2015.1.1-1~u14.04+mos5355            all          Neutron is a virtual network service for Openstack - DHCP agent
rc  neutron-l3-agent                    1:2015.1.1-1~u14.04+mos5348            all          Neutron is a virtual network service for Openstack - l3 agent
ii  neutron-lbaas-agent                 1:2015.1.1-1~u14.04+mos739             all          Neutron is a virtual network service for Openstack - LBaaS agent
ii  neutron-metadata-agent              1:2015.1.1-1~u14.04+mos5355            all          Neutron is a virtual network service for Openstack - metadata agent
ii  neutron-plugin-ml2                  1:2015.1.1-1~u14.04+mos5355            a

They have a base version and an added MOS (Mirantis OpenStack) revision.

If I try to install trove this happens:

apt-get install trove-api
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 trove-api : Depends: trove-common but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
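
When that happens, it helps to see which candidate versions apt is actually considering, for example:

    apt-cache policy trove-api trove-common
    # verbose dependency resolution, to see exactly which dependency breaks
    apt-get -o Debug::pkgProblemResolver=yes install trove-api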

This happens with a lot of packages. So my question would be: is there a way to add extra packages for this specific MOS distribution? Maybe some extra Mirantis repo?

Thanks

2016-06-06 10:26:16 -0500 received badge  Notable Question (source)
2016-06-03 02:02:32 -0500 answered a question ERROR: Service cinder-backup could not be found

Have you tried:

 cinder service-list

That should tell you if the service is actually running and listening.
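
If it doesn't show up there, it's also worth checking whether the backup service is installed and running at all on the storage/controller node, for example:

    dpkg -l | grep cinder-backup
    service cinder-backup status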

2016-05-20 09:24:15 -0500 received badge  Notable Question (source)
2016-05-20 04:59:47 -0500 commented answer OpenStack Heat WebHooks for up/down-scaling point to internal (storage) IP

More information: I changed the URL scheme (http vs. https) and now the manual scaling URLs are correct and responding... The problem now is I broke autoscaling.

2016-05-20 03:34:48 -0500 commented answer OpenStack Heat WebHooks for up/down-scaling point to internal (storage) IP

I tried that and the WebHooks now have the correct IP, but they don't work (no change when I send the request). Also, autoscaling now doesn't work. I also tried setting bind_host = <publicip> under [heat_api_cfn], but neither manual nor automatic scaling works.

2016-05-20 01:24:54 -0500 received badge  Popular Question (source)
2016-05-17 01:22:30 -0500 received badge  Popular Question (source)
2016-05-12 03:22:21 -0500 received badge  Editor (source)
2016-05-11 07:53:25 -0500 asked a question OpenStack Kilo, Cinder can't create snapshots: LVM volume not found

We have deployed OpenStack Kilo on Ubuntu 14.04 using Mirantis Fuel 7.0. We have a setup with 3 controller/storage nodes and 3 compute nodes, all connected for high availability through HAProxy.

After some problems with the storage (we're using a multi-backend setup based on Cinder/LVM volumes on the controllers plus a NetApp NFS shared storage) we've managed to get volume creation, extension, deletion, attachment, detachment, etc. working.

The problem comes when we try to create a snapshot of one of the volumes stored in the cinder_iscsi backend. We get this on the controller that's trying to create the snapshot through the web interface:

   root@node-01:~# tail -f /var/log/cinder-all.log | grep -v "cinder-api"
<158>May 11 11:28:17 Nefeles001 cinder-volume 2016-05-11 11:28:17.294 92341 INFO cinder.volume.manager [req-a79a8998-70f7-4b9d-b1d1-68f8a04e5399 2d60663e49a74eca9f0a96dc713154c5 2a1b8d6fd53045dd8acc8b09c292cb9f - - -] snapshot c4d32012-38ca-4ba8-bca2-186d5703620d: creating
<158>May 11 11:28:17 Nefeles001 cinder-volume 2016-05-11 11:28:17.836 92341 INFO cinder.brick.local_dev.lvm [req-a79a8998-70f7-4b9d-b1d1-68f8a04e5399 2d60663e49a74eca9f0a96dc713154c5 2a1b8d6fd53045dd8acc8b09c292cb9f - - -] Logical Volume not found when querying LVM info. (vg_name=cinder, lv_name=volume-3f253a13-7f12-46fb-bec5-df70b80d9d9c
<155>May 11 11:28:17 Nefeles001 cinder-volume 2016-05-11 11:28:17.836 92341 ERROR cinder.brick.local_dev.lvm [req-a79a8998-70f7-4b9d-b1d1-68f8a04e5399 2d60663e49a74eca9f0a96dc713154c5 2a1b8d6fd53045dd8acc8b09c292cb9f - - -] Trying to create snapshot by non-existent LV: volume-3f253a13-7f12-46fb-bec5-df70b80d9d9c
<155>May 11 11:28:17 Nefeles001 cinder-volume 2016-05-11 11:28:17.861 92341 ERROR oslo_messaging.rpc.dispatcher [req-a79a8998-70f7-4b9d-b1d1-68f8a04e5399 2d60663e49a74eca9f0a96dc713154c5 2a1b8d6fd53045dd8acc8b09c292cb9f - - -] Exception during message handling: Volume device not found at volume-3f253a13-7f12-46fb-bec5-df70b80d9d9c.
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 662, in create_snapshot
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher     snapshot.save(context)
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-05-11 11:28:17.861 92341 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 654, in create_snapshot ...
(more)
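
To narrow this down, one check worth doing on the node that owns the volume (the VG name "cinder" and the volume UUID are taken from the log above) is something like:

    # which backend/host does cinder think the volume lives on?
    cinder show 3f253a13-7f12-46fb-bec5-df70b80d9d9c | grep os-vol-host-attr
    # which logical volumes actually exist in the "cinder" VG on this node?
    lvs cinder
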
2016-05-05 04:47:49 -0500 asked a question OpenStack Heat WebHooks for up/down-scaling point to internal (storage) IP

We're deploying OpenStack Kilo using Mirantis Fuel 7.0 and so far the system seems to be working. We've added the ceilometer and heat components to make it possible for our users to automatically scale up or down some load-balancer servers we use in our stacks.

The automatic up- and down-scaling seems to be working well. The only problem is that when we go to check the resources in the Orchestration tab, the generated WebHook points to the management URL (192.168.0.2:8000) instead of pointing to the same string with the public URL or (preferably) the server name.

What should look something like this:

https://<serverpublicip>:8000/v1/[...]

Looks like this:

https://192.168.0.2:8000/v1/[...]

I've checked the port (8000) and it's open and listening on the public endpoint, so the problem is not with the service but with the component that generates the info. In fact, if I manually copy the address and edit in the right IP, it works from outside the environment using a REST client or simply a web browser.

But we need the generated webhook to automatically use the public URL so our customers can make calls from external applications (not from our OpenStack/horizon installation only) to modify the stack state.

I've checked heat config under /etc/heat/heat.conf and can find some suspicious settings like:

heat_metadata_server_url=http://192.168.0.2:8000
heat_waitcondition_server_url=http://192.168.0.2:8000/v1/waitcondition
heat_watch_server_url=http://192.168.0.2:8003
auth_uri = http://192.168.0.2:5000/v2.0
auth_host = 192.168.0.2

I'm not sure which of these parameters is the one used to generate the webhook.

I've tried modifying them to use the public IP and the server name itself; it doesn't seem to make any difference. The webhooks in the user interface still point to the controller's internal IP on the management network.
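
One thing that may also be worth checking (a suggestion, not something I've confirmed) is how the heat endpoints are registered in keystone, since generated URLs can be built from the service catalog rather than from heat.conf alone:

    source openrc
    keystone endpoint-list | grep 8000
    keystone endpoint-list | grep 8004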

2016-04-28 05:39:36 -0500 asked a question Volumes and instances removed from the database are still being counted.

We're having some trouble deploying Kilo on a system with 3 controllers and 3 computes through Mirantis Fuel 7.0. The problems involve creating and attaching volumes, especially the ones stored on a NetApp SAN. As a result, I had to delete some stuck volumes and instances by accessing the cinder and nova databases and deleting rows from the instances, volumes, volumes_admin_metadata, volume_attachment and volume_glance_metadata tables.

The problem is, the volume count in the "Overview" for the project still includes those removed volumes and instances, so I'd like to know from what part of the database that information is being read and how to correct/synchronize it.

I'd also like to know how to remove the physical LVM logical volumes corresponding to those volumes, since they still show up when I do an "lsblk" on the controller that was storing them.
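
For the second part, this is the kind of thing I mean (a sketch; "cinder" is the volume group name from our setup and <uuid> is a placeholder for each orphaned volume):

    # list the leftover logical volumes in the cinder volume group
    lvs cinder
    # remove an orphaned one by hand (destructive; only for volumes already deleted from the DB)
    lvremove /dev/cinder/volume-<uuid>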

Thanks

2016-02-17 07:52:38 -0500 commented answer How to change cpu_allocation_ratio

I removed all the filters and left only AggregateCoreFilter. Then on each aggregate (through Horizon) you can add a new metadata key cpu_allocation_ratio and set it to 16.0 (for example). It's the only way it works.
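
In other words, something along these lines (a sketch; the 16.0 value is just an example, and the aggregate metadata can also be set from Horizon as described above):

    # /etc/nova/nova.conf on the scheduler nodes
    [DEFAULT]
    scheduler_default_filters = AggregateCoreFilter

    # then tag each host aggregate with the desired ratio
    nova aggregate-set-metadata <aggregate-id> cpu_allocation_ratio=16.0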

2015-12-21 07:55:14 -0500 answered a question Can't create Snapshots, how to fallback from Swift to Cinder

I found the solution by trial and error; it all comes down to glance-api.conf, where I needed to add/modify these settings:

default_store = cinder

stores = glance.store.filesystem.Store,
         glance.store.http.Store,
         glance.store.cinder.Store,
         glance.store.swift.Store,

filesystem_store_datadir = /var/lib/glance/images/
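
After editing glance-api.conf, the API service needs a restart for the new store settings to take effect:

    service glance-api restart
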
2015-12-18 05:43:37 -0500 received badge  Famous Question (source)
2015-12-18 01:22:04 -0500 received badge  Popular Question (source)
2015-12-18 01:22:04 -0500 received badge  Notable Question (source)
2015-12-18 01:22:03 -0500 received badge  Enthusiast
2015-12-16 23:58:39 -0500 asked a question Can't create Snapshots, how to fallback from Swift to Cinder

I'm managing an installation of OpenStack Juno (deployed with Mirantis 6.0) running on two nodes, one controller and one compute. We're doing some testing, and for some reason (our IT team thinks it's something related to HAProxy) Swift stopped working.

At the moment almost everything works, but I can't create images or snapshots from instances. I can create snapshots from volumes and the other way around. The difference, I think, is the upload of the image: Glance fails to use Cinder to upload the newly created image.

I've been tasked with either repairing Swift or falling back to using Cinder as the storage system. The first would be the best solution, but I have no idea where to start.

I'm pretty new to all this and I'm sorry I can't provide more details; I started working with OpenStack a few weeks ago and still don't have enough experience to troubleshoot this problem myself.

All I could find in the logs are references to "Failed to upload...", like this one in /var/log/glance/api.log:

2015-12-16 12:29:47.604 6182 ERROR glance.api.v1.upload_utils [-] Failed to upload image 1856c024-d75a-49e3-a6a9-dc3d7b15e8cc
2015-12-16 12:29:47.604 6182 TRACE glance.api.v1.upload_utils raise NotImplementedError
2015-12-16 12:29:47.604 6182 TRACE glance.api.v1.upload_utils NotImplementedError
2015-12-16 12:32:22.444 6198 ERROR glance.api.v2.image_data [-] Failed to upload image data due to internal error
2015-12-16 12:32:22.444 6198 TRACE glance.api.v2.image_data     self.notifier.error('image.upload', msg)
2015-12-16 12:39:08.768 6182 ERROR glance.api.v2.image_data [-] Failed to upload image data due to internal error

Thanks!

2015-11-17 04:13:35 -0500 received badge  Supporter (source)