Anand TS's profile - activity

2019-01-24 10:30:00 -0500 received badge  Famous Question (source)
2017-10-27 02:10:39 -0500 received badge  Great Answer (source)
2017-10-11 02:41:05 -0500 marked best answer swift user : how to give read only access to containers

Hi all,

I have installed OpenStack using RDO Packstack on a single node and gave Swift 10 GB of storage. I created a project with users having the admin and member roles.

I created a set of containers as the admin user. I need to give the users in that project read-only access to those containers. Currently, when those users log in to the dashboard, they can't even see the containers.

Also, I created a pseudo-folder in one container. If I click on that pseudo-folder, the dashboard shows "something went wrong", and if I try to delete it, it says "Error: You are not allowed to delete object". I don't know what is happening here.

I checked the Swift services and all of them are running fine.

[root@icehouse ~(keystone_admin)]# for i in `ls /etc/init.d/openstack-swift-*`; do $i status; done
openstack-swift-account (pid  11050) is running...
openstack-swift-account-auditor (pid  10395) is running...
openstack-swift-account-reaper (pid  10367) is running...
openstack-swift-account-replicator (pid  11017) is running...
openstack-swift-container (pid  10748) is running...
openstack-swift-container-auditor (pid  10556) is running...
openstack-swift-container-replicator (pid  10517) is running...
openstack-swift-container-updater (pid  11183) is running...
openstack-swift-object (pid  11127) is running...
openstack-swift-object-auditor (pid  10690) is running...
openstack-swift-object-expirer is stopped
openstack-swift-object-replicator (pid  11155) is running...
openstack-swift-object-updater (pid  10720) is running...
openstack-swift-proxy (pid  10935) is running...

Where can I check the Swift logs? I can see only swift-startup.log in my setup.

If someone can shed some light here I would really appreciate that.
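For reference, read-only access of this kind is what Swift container ACLs provide. A minimal sketch with the swift CLI, where the container name ("mycontainer") and project name ("demo") are placeholders:

```shell
# Placeholder names; run with credentials that own the container.
swift post mycontainer --read-acl "demo:*"   # all users in project "demo" get read access
swift stat mycontainer                       # check that the Read ACL shows up
```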

2017-10-11 02:41:05 -0500 received badge  Self-Learner (source)
2017-03-05 13:58:11 -0500 received badge  Notable Question (source)
2017-01-17 09:40:33 -0500 received badge  Popular Question (source)
2017-01-11 23:26:41 -0500 commented question Unable to spin up VMs in some compute nodes

Hello @Bernd,

There was an issue with the Neutron plugin configuration. I changed the unusual entry in /etc/neutron/plugins/ml2/ml2_conf.ini, set mechanism_drivers = openvswitch, removed the other entry, and restarted neutron-server; after that it started working.
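The working setting, as a minimal fragment of /etc/neutron/plugins/ml2/ml2_conf.ini (the rest of the file is omitted):

```
[ml2]
mechanism_drivers = openvswitch
```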

2017-01-10 22:11:34 -0500 asked a question Unable to spin up VMs in some compute nodes

Environment: OpenStack Liberty is up and running with one controller and 3 compute nodes.

Scenario: I am unable to spin up VMs on two of the compute nodes when I choose them as the availability zone, but on the third compute node I can spin up VMs without any issue.

I compared all the configuration files on these compute nodes and they are the same as on the working one.

ERROR:

Error:Failed to perform requested operation on instance "123", the instance has an error status: Please try again later [Error: No valid host was found. There are not enough hosts available.].

From compute logs:

2017-01-10 23:01:04.269 34160 ERROR nova.network.neutronv2.api [req-f775168b-2ef9-4c14-81d7-eabef628b689 54140a2bd29e41b78a3e9145268d4f4f a9210aa2a7d14064b5a819c700ddc4ec - - -] Unable to clear device ID for port 'None'
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api Traceback (most recent call last):
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 412, in _unbind_ports
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api     port_client.update_port(port_id, port_req_body)
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 202, in wrapper
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api     ret = obj(*args, **kwargs)
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in with_params
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api     ret = self.function(instance, *args, **kwargs)
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 549, in update_port
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api     return self.put(self.port_path % (port), body=body)
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 202, in wrapper
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api     ret = obj(*args, **kwargs)
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 302, in put
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api     headers=headers, params=params)
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 202, in wrapper
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api     ret = obj(*args, **kwargs)
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 270, in retry_request
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api     headers=headers, params=params)
2017-01-10 23:01:04.269 34160 TRACE nova.network.neutronv2.api   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 202, in wrapper
2017-01-10 23:01 ...
2017-01-05 22:54:13 -0500 answered a question cannot login to instance?

I see that you have set virt_type=qemu in your Nova configuration file.

Please run the command below and verify the output:

egrep -c '(vmx|svm)' /proc/cpuinfo

If the output is:

  1. 0, then "virt_type = qemu" should be set in the nova.conf of the compute node.
  2. any number higher than 0, then "virt_type = kvm" should be set in the nova.conf of the compute node.

Please try this out and let me know if this helps.
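The decision above can be sketched as a small script (a sketch, assuming a Linux host with /proc/cpuinfo):

```shell
#!/bin/sh
# Count hardware-virtualization CPU flags: vmx (Intel) or svm (AMD).
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
count=${count:-0}

if [ "$count" -eq 0 ]; then
    virt_type=qemu   # no hardware acceleration: plain QEMU emulation
else
    virt_type=kvm    # flags present: KVM acceleration can be used
fi
echo "virt_type = $virt_type"   # the value to put in nova.conf
```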

Thanks, Anand TS

2016-03-09 22:36:03 -0500 received badge  Good Answer (source)
2016-02-02 10:38:21 -0500 received badge  Nice Answer (source)
2015-11-14 19:17:15 -0500 commented question ssh not connecting to Ubuntu Server 14.04 instance

Have you added security group rules (ICMP and SSH) to that Ubuntu instance?

2015-11-13 03:50:59 -0500 commented answer Can we migrate vm from vmware to openstack?

@jayarathne yes, use the VMDK file.

2015-09-24 04:29:47 -0500 commented question Static IP address

Yes, you need a static IP for installing RDO. All the services are configured based on the IP you provide, so if you change the controller IP you won't be able to access Horizon or any other OpenStack service.

2015-09-24 04:24:58 -0500 answered a question Error during rdo installation of icehouse in centOS
I have installed Icehouse on CentOS 7 without any issues.

You have to set the correct path in /etc/yum.repos.d/rdo-release.repo:

baseurl=http://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-7/

Icehouse on CentOS 6.x will fail due to some dependency issues.

Regards, Anand T S

2015-09-24 04:22:11 -0500 received badge  Famous Question (source)
2015-09-09 23:45:06 -0500 commented question Error during rdo installation of icehouse in centOS

@johanno I will try with CentOS 7 then. @omar-munoz Yes, I want to use Icehouse for a specific reason.

2015-09-05 22:41:47 -0500 received badge  Notable Question (source)
2015-09-04 13:36:15 -0500 received badge  Popular Question (source)
2015-09-04 04:35:52 -0500 asked a question Error during rdo installation of icehouse in centOS

Hello all,

When I tried to install OpenStack RDO (Icehouse) on CentOS 6.7, I got the error shown below:

ERROR : Error appeared during Puppet run: 10.127.194.130_keystone.pp
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[_member_]: Could not evaluate: Expected 2 columns for role row, found 0. Line +----------------------------------+----------+
You will find full trace in log /var/tmp/packstack/20150904-093125-E75vUa/manifests/10.127.194.130_keystone.pp.log
Please check log file /var/tmp/packstack/20150904-093125-E75vUa/openstack-setup.log for more information

When I searched further about this error, I found that it is a known bug ( https://bugzilla.redhat.com/show_bug.... ) and it is in closed state, as it was fixed in the next release, Juno.

But my question is: is there any workaround for this issue so that I can install the Icehouse version of OpenStack rather than Juno?

In another bug report (https://bugzilla.redhat.com/show_bug....) for the same issue, Pádraig Brady suggested installing python-backports as a workaround. In my case it is already installed.

So if anyone is aware of this issue, or knows how to run the previous OpenStack release (Icehouse) on my CentOS box, please answer.

Thanks & Regards, Anand

2015-04-28 12:04:14 -0500 marked best answer Launch windows instance in openstack

Hi all

I have installed OpenStack on a physical server running CentOS 6.4 and launched Ubuntu and other Linux instances successfully. But a problem arose when I tried to launch a Windows instance.

These are the steps I have done:

  1. Used Virtual Machine Manager on the CentOS 6.4 host to create a VM (with VirtIO disk and network) from the customised Windows 7 ISO, and attached the ISO with the VirtIO drivers.

  2. After the Windows 7 installation was done, installed the network drivers as well. With that I was able to connect to the internet, and Windows was working fine.

  3. Installed Cloudbase's cloud-init software and performed Windows updates.

  4. From the C:\Windows\System32\sysprep folder, ran Sysprep: System Out-of-Box Experience (OOBE) - Generalize - Shutdown Option - Shutdown.

  5. From Horizon (Images and Snapshots), browsed the Windows 7 image from /var/lib/libvirt/images/windows7.img; I tried converting and uploading this image as raw, qcow2, and iso.

  6. After uploading the image to Glance, when I tried to launch Windows it boots to a screen showing "Launch Startup Repair (recommended)". What could be the problem if the Windows OS shows nothing but Startup Repair?

Is this an issue with the Packstack installation of OpenStack?

The image works perfectly with virt-manager; what extra step should I take to launch an instance in OpenStack?

If anybody has successfully launched a Windows instance, please give your valuable suggestions.

Thanks & Regards, Anand TS

2015-03-13 09:31:04 -0500 received badge  Great Question (source)
2014-12-04 11:43:28 -0500 marked best answer Management tool for openstack

Hi all,

Is there any vCenter-like tool for OpenStack?

I have deployed multiple openstack clouds in datacenters located in different geographical locations.

I need a management tool to handle separate OpenStack private clouds on a single platform. Kindly reply if anyone knows of one.

Update:

I have heard about management platforms like Scalr and Fuel, but I don't know how they would help with OpenStack deployments.

For anyone running OpenStack in production: how is the management part done?

2014-09-25 04:06:03 -0500 commented answer In Icehouse Access

Hello, are you able to see the dashboard like this while logged in as admin: http://imgur.com/VBxwE3x ? Also, how about creating a new tenant and user and logging in as non-admin?

2014-09-25 02:09:53 -0500 answered a question In Icehouse Access

This is common for all users. If you want to see Access & Security, go to the Project tab. To launch an instance using the dashboard, go to the Compute tab under Project, click either Instances or Images, and click Launch.

2014-09-16 03:55:59 -0500 edited question openstack glance db sync issue

Hi, I am installing Icehouse on CentOS 6.5 with a MySQL DB, and during the Glance database sync I receive this error. I am following the OpenStack installation guide.

su -s /bin/sh -c "glance-manage db_sync" glance
Traceback (most recent call last):
  File "/usr/bin/glance-manage", line 6, in <module>
    from glance.cmd.manage import main
  File "/usr/lib/python2.6/site-packages/glance/cmd/manage.py", line 45, in <module>
    from glance.db import migration as db_migration
  File "/usr/lib/python2.6/site-packages/glance/db/__init__.py", line 21, in <module>
    from glance.common import crypt
  File "/usr/lib/python2.6/site-packages/glance/common/crypt.py", line 23, in <module>
    from Crypto.Cipher import AES
  File "/usr/lib64/python2.6/site-packages/Crypto/Cipher/AES.py", line 50, in <module>
    from Crypto.Cipher import _AES
ImportError: /usr/lib64/python2.6/site-packages/Crypto/Cipher/_AES.so: undefined symbol: rpl_malloc

2014-09-16 03:37:06 -0500 edited question conflict occurred attempting to store project

Hi all,

I got some error when configuring keystone, after I executed :

ADMIN_PASSWORD=$OS_PASSWORD SERVICE_PASSWORD=servicepass openstack-keystone-sample-data

The error messages are below:

Conflict occurred attempting to store project. (1062, "Duplicate entry 'default-demo' for key 'domain_id'") (HTTP 409)
Conflict occurred attempting to store user. (1062, "Duplicate entry 'default-admin' for key 'domain_id'") (HTTP 409)
Conflict occurred attempting to store role. (1062, "Duplicate entry 'admin' for key 'name'") (HTTP 409)
usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]
keystone user-role-add: error: argument --user/--user-id/--user_id: expected one argument
Conflict occurred attempting to store project. (1062, "Duplicate entry 'default-service' for key 'domain_id'") (HTTP 409)
Conflict occurred attempting to store user. (1062, "Duplicate entry 'default-glance' for key 'domain_id'") (HTTP 409)
usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]
keystone user-role-add: error: argument --user/--user-id/--user_id: expected one argument
usage: keystone user-create --name <user-name> [--tenant <tenant>]
                            [--pass [<pass>]] [--email <email>]
                            [--enabled <true|false>]
keystone user-create: error: argument --tenant/--tenant-id: expected one argument
usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]
keystone user-role-add: error: argument --user/--user-id/--user_id: expected one argument
usage: keystone user-create --name <user-name> [--tenant <tenant>]
                            [--pass [<pass>]] [--email <email>]
                            [--enabled <true|false>]
keystone user-create: error: argument --tenant/--tenant-id: expected one argument
usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]
keystone user-role-add: error: argument --user/--user-id/--user_id: expected one argument
usage: keystone user-create --name <user-name> [--tenant <tenant>]
                            [--pass [<pass>]] [--email <email>]
                            [--enabled <true|false>]
keystone user-create: error: argument --tenant/--tenant-id: expected one argument
usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]
keystone user-role-add: error: argument --user/--user-id/--user_id: expected one argument
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
+-------------+---------------------------------------+
|   Property  |                 Value                 |
+-------------+---------------------------------------+
|   adminurl  |  http://localhost:$(admin_port)s/v2.0 |
|      id     |    0be702858f8a4dcfa6ba9b10e93ec15a   |
| internalurl | http://localhost:$(public_port)s/v2.0 |
|  publicurl  | http://localhost:$(public_port)s/v2.0 |
|    region   |               RegionOne               |
|  service_id |    0ce374f2d00b4ecda148e220191a3c95   |
+-------------+---------------------------------------+
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
+-------------+------------------------------------------------------+
|   Property  |                        Value                         |
+-------------+------------------------------------------------------+
|   adminurl  | http://localhost:$(compute_port)s/v1.1/$(tenant_id)s |
|      id     |           8f1acb964c8d476595ac1b890711c3fb           |
| internalurl | http://localhost:$(compute_port)s/v1.1/$(tenant_id)s |
|  publicurl  | http://localhost:$(compute_port)s/v1.1/$(tenant_id)s |
|    region   |                      RegionOne                       |
|  service_id |           e25bcf6cbe404f8e98256d22f6d1db97           |
+-------------+------------------------------------------------------+
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
+-------------+----------------------------------------+
|   Property  |                 Value                  |
+-------------+----------------------------------------+
|   adminurl  | http://localhost:8776/v1/$(tenant_id)s |
|      id     |    1dffb2553fd44f738d18d190eddc1d0c    |
| internalurl | http://localhost:8776/v1/$(tenant_id)s |
|  publicurl  | http://localhost:8776/v1/$(tenant_id)s |
|    region   |               RegionOne                |
|  service_id |    9dce6a4901ef4df9b046245e130bf9dd    |
+-------------+----------------------------------------+
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://localhost:9292       |
|      id     | d8e682088d2d47eea8ad5341baa96c4c |
| internalurl |      http://localhost:9292       |
|  publicurl  |      http://localhost:9292       |
|    region   |            RegionOne             |
|  service_id | 8d6a0ef419734ea59d1ac39a4cc8198e |
+-------------+----------------------------------+
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
+-------------+--------------------------------------+
|   Property  |                Value                 |
+-------------+--------------------------------------+
|   adminurl  | http://localhost:8773/services/Admin |
|      id     |   981c7de4aa3d43d0b6aeb96fd9584d80   |
| internalurl | http://localhost:8773/services/Cloud |
|  publicurl  | http://localhost:8773/services/Cloud |
|    region   |              RegionOne               |
|  service_id |   0af5d38012164ea1a7c7f721ff6b4790   |
+-------------+--------------------------------------+
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
+-------------+---------------------------------------------+
|   Property  |                    Value                    |
+-------------+---------------------------------------------+
|   adminurl  |           http://localhost:8888/v1          |
|      id     |       70693629f7c84da09a1c78cfd6520c90      |
| internalurl | http://localhost:8888/v1/AUTH_$(tenant_id)s |
|  publicurl  | http://localhost:8888/v1/AUTH_$(tenant_id)s |
|    region   |                  RegionOne                  |
|  service_id ...
2014-09-15 07:34:50 -0500 answered a question Getting 'HTTP 500' while creating image in Glance
2014-09-15 01:05:10 -0500 answered a question Is there any command to check the health of openstack setup and the status of its services?

Hello,

You can use the command $ openstack-status if you are running OpenStack on a RHEL-based Linux distribution.

2014-09-04 08:06:06 -0500 answered a question Migrate Vmware windows instance to openstack
Hello,

Steps to migrate a VMware Windows instance to OpenStack:

  1. Download the latest VirtIO drivers from http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/

  2. Power off the Windows VM, attach virtio-win-0.1-81.iso to the CD-ROM, and power the machine back on.

  3. Install the VirtIO drivers (Red Hat VirtIO SCSI controller and Red Hat VirtIO Ethernet adapter).

  4. Copy the VMDK file and upload it to Glance, then launch the instance.

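The last step can be sketched with the CLI of that era; the image and instance names, the flavor, and the VMDK filename are placeholders:

```shell
# Placeholder names; assumes the copied VMDK is in the current directory.
glance image-create --name "windows-vm" --disk-format vmdk \
    --container-format bare --file windows-vm.vmdk
nova boot --image windows-vm --flavor m1.medium windows-instance-01
```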
2014-09-04 04:02:29 -0500 edited question please help me someone to configue Horizon(OpenStack Havana) in Centos6.5

Every time I try to set it up, it shows the following error.

Error: Package: openstack-nova-volume-2012.2.4-1.el6.noarch (@epel)
           Requires: openstack-nova-common = 2012.2.4-1.el6
           Removing: openstack-nova-common-2012.2.4-1.el6.noarch (@epel)
               openstack-nova-common = 2012.2.4-1.el6
           Updated By: openstack-nova-common-2014.1.2-1.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1.2-1.el6
           Available: openstack-nova-common-2013.2-0.19.b3.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2-0.19.b3.el6
           Available: openstack-nova-common-2013.2-0.23.rc1.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2-0.23.rc1.el6
           Available: openstack-nova-common-2013.2-0.25.rc1.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2-0.25.rc1.el6
           Available: openstack-nova-common-2013.2-0.26.rc2.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2-0.26.rc2.el6
           Available: openstack-nova-common-2013.2-2.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2-2.el6
           Available: openstack-nova-common-2013.2-3.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2-3.el6
           Available: openstack-nova-common-2013.2-5.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2-5.el6
           Available: openstack-nova-common-2013.2.1-1.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2.1-1.el6
           Available: openstack-nova-common-2013.2.1-3.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2.1-3.el6
           Available: openstack-nova-common-2013.2.2-1.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2.2-1.el6
           Available: openstack-nova-common-2013.2.3-1.el6.noarch (openstack-havana)
               openstack-nova-common = 2013.2.3-1.el6
           Available: openstack-nova-common-2014.1-0.5.b1.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1-0.5.b1.el6
           Available: openstack-nova-common-2014.1-0.9.b2.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1-0.9.b2.el6
           Available: openstack-nova-common-2014.1-0.11.b3.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1-0.11.b3.el6
           Available: openstack-nova-common-2014.1-0.12.rc1.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1-0.12.rc1.el6
           Available: openstack-nova-common-2014.1-0.13.rc2.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1-0.13.rc2.el6
           Available: openstack-nova-common-2014.1-1.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1-1.el6
           Available: openstack-nova-common-2014.1-2.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1-2.el6
           Available: openstack-nova-common-2014.1-4.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1-4.el6
           Available: openstack-nova-common-2014.1.1-1.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1.1-1.el6
           Available: openstack-nova-common-2014.1.1-2.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1.1-2.el6
           Available: openstack-nova-common-2014.1.1-3.el6.noarch (openstack-icehouse)
               openstack-nova-common = 2014.1.1-3.el6
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
2014-08-26 03:55:03 -0500 commented question "Host key verification failed" error while resizing instance

Something related to the issue mentioned here?

2014-08-25 02:37:12 -0500 answered a question How to create network using nova-network ?

Hello,

Yes, you can create a network that virtual machines can use. Do this once for the entire installation, not on each compute node: run the nova network-create command on the controller.

You can refer to this link for how to create floating IPs to access these instances from other machines on your network.
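A minimal sketch of that command (the label, CIDR, and bridge name are placeholder values):

```shell
# Run once on the controller; creates the fixed-IP network for nova-network.
nova network-create vmnet --fixed-range-v4 10.0.0.0/24 --bridge br100
```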

2014-08-25 00:16:30 -0500 answered a question openstack-nova-api dead but pid file exists

Hello, check whether your openstack-nova-metadata-api is running. If that service is running, try stopping the openstack-nova-metadata-api service and starting the openstack-nova-api service.

I think by default Packstack puts an entry like this in your /etc/nova/nova.conf file:

enabled_apis=ec2,osapi_compute,metadata

If metadata is listed in enabled_apis, there is no need to start openstack-nova-metadata-api. Hope it helps.

2014-08-18 23:40:14 -0500 answered a question CentOS 6.5 with packstack ? odds of working ?

Hello,

This error occurs because you have run the packstack command multiple times, so your Keystone service is already running. Try stopping the Keystone service, check that the generated Packstack answer file matches the existing Keystone setup, and rerun packstack using that answer file.

2014-08-12 01:39:02 -0500 edited question LVM with another driver

I set up the Block Storage service (Cinder) on an Ubuntu machine by following the OpenStack guide for it ( http://docs.openstack.org/icehouse/in... ). As part of the cinder-volume package installation, it asked me to install LVM and set up an additional disk for its use.

apt-get install lvm2

pvcreate /dev/sdb

vgcreate cinder-volumes /dev/sdb

apt-get install cinder-volume


And then I installed another driver, which went through fine. The cinder.conf file is populated correctly with the volume backend name, enabled_backends, etc. I created a volume type and linked it to this driver via volume_backend_name. When I run the volume create command specifying the volume_type id, the cinder-api and cinder-volume logs show that the volume was created successfully and the volume status is available. But the cinder-scheduler log shows that ''the volume service is down or disabled. (host: HOSTNAME)''. Also, there are no lines in the cinder-volume log indicating that it is contacting the backend array to create the volume, and as suspected the volume is not created on the array.

cinder service-list
+------------------+--------------------+------+----------+-------+----------------------------+
|      Binary      |        Host        | Zone |  Status  | State |         Updated_at         |
+------------------+--------------------+------+----------+-------+----------------------------+
| cinder-scheduler |      HOSTNAME      | nova | enabled  |   up  | 2014-08-07T07:17:06.000000 |
|  cinder-volume   |      HOSTNAME      | nova | enabled  |  down | 2014-08-07T06:54:47.000000 |
|  cinder-volume   | HOSTNAME@my_driver | nova | enabled  |   up  | 2014-08-07T07:17:01.000000 |
+------------------+--------------------+------+----------+-------+----------------------------+

From the above output I observed that the first cinder-volume service is down. But I don't understand why the scheduler is picking that service instead of the driver's one. I manually disabled it so that the scheduler might pick the second one.

cinder service-list
+------------------+--------------------+------+----------+-------+----------------------------+
|      Binary      |        Host        | Zone |  Status  | State |         Updated_at         |
+------------------+--------------------+------+----------+-------+----------------------------+
| cinder-scheduler |      HOSTNAME      | nova | enabled  |   up  | 2014-08-07T07:17:06.000000 |
|  cinder-volume   |      HOSTNAME      | nova | disabled |  down | 2014-08-07T06:54:47.000000 |
|  cinder-volume   | HOSTNAME@my_driver | nova | enabled  |   up  | 2014-08-07T07:17:01.000000 |
+------------------+--------------------+------+----------+-------+----------------------------+

But this time I don't see any error at all in the scheduler log, and the volume status is available. It does not look like it is using my driver to create the volume, even though I specified the correct volume_type id. So it is not selecting the proper volume service.

My question is :

Do we actually see two cinder-volume services in this case?

Why is cinder-scheduler not picking the correct one even when the volume_type id is specified?

Please help me resolve this!


cinder.conf file :

[DEFAULT]

rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = password

enabled_backends = my_driver


[database]

connection = mysql://cinder:CINDER_DBPASS@controller/cinder


[keystone_authtoken]

auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = password


[my_driver]

volume_driver = cinder.volume.drivers.my_driver
volume_backend_name = VOLUME_BACKEND_NAME
OTHER CONNECTION INFO NEEDED BY DRIVER
2014-08-12 00:32:36 -0500 commented question Unable to login in openstack dashboard(Havana)

Hello, kindly edit your question and add more details, including the Horizon logs. Also, checking similar questions on Ask will help you solve the issue ASAP.

2014-08-07 01:28:22 -0500 received badge  Nice Answer (source)
2014-08-07 00:09:08 -0500 answered a question Error appeared during Puppet run:IP_nagios.pp
Hello,

This issue might be due to slow internet connectivity. To get around it you can apply the netns timeout fix.

Edit the file /usr/share/openstack-puppet/modules/packstack/manifests/netns.pp and increase the value on the timeout line, like this:

if $::netns_support != "true" {
        exec { "netns_dependecy_install":
            path => "/usr/bin/",
            command => "yum update -y kernel iputils iproute",
            timeout => 1800,
        }
}
2014-08-05 04:10:44 -0500 answered a question dashboard showing error
Hello, 

Solved this issue by editing `/etc/openstack-dashboard/local_settings` file and comment out 

    #SESSION_ENGINE = 'django.core.cache.backends.db.DatabaseCache'

The dashboard internal error occurs when the Django installation does not support the caching mechanism you intend to use (django.core.cache.backends.db.DatabaseCache).

After that a simple httpd restart fixed the issue.
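If a working cache backend is wanted instead of commenting the line out, here is a hedged sketch of a memcached-backed session configuration for local_settings (the host and port are placeholder values):

```
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
```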
2014-07-25 00:27:53 -0500 answered a question "message": "Connection to neutron failed: Maximum attempts reached", when trying to launch an instance

Hello,

Is the Neutron server running? Try restarting all Neutron-related services. Check whether your message queue server is also running fine. Is there anything interesting in the Neutron logs?

2014-07-25 00:12:25 -0500 answered a question Can't access VMs via ping or ssh at all

Hi ,

If you aren't able to reach your instances via the floating IP address, make sure the default security group is configured properly to allow ICMP (ping) and SSH (port 22), so that you can reach the instances.
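A minimal sketch of those rules with the nova client, opening ICMP and SSH on the default group:

```shell
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0   # allow ping
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0    # allow SSH
```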

Also, how are you getting three IPs for one instance? What does $ nova network-list give? Where does the 10.0.47.2 IP come from?

It is better to start troubleshooting by looking into the logs. Give more information about your configuration and setup so that we can help you.

Thanks, Anand

2014-07-24 02:14:22 -0500 commented answer nova list shows empty table

Can you give the command $ nova list --all-tenants?