rastogisac's profile - activity

2016-12-16 05:03:08 -0500 received badge  Famous Question (source)
2016-09-07 09:15:51 -0500 received badge  Famous Question (source)
2016-08-21 17:05:52 -0500 received badge  Notable Question (source)
2016-07-18 17:54:44 -0500 received badge  Famous Question (source)
2016-07-14 19:55:35 -0500 received badge  Famous Question (source)
2016-07-10 13:57:41 -0500 received badge  Famous Question (source)
2016-07-09 08:42:11 -0500 received badge  Famous Question (source)
2016-07-08 11:17:31 -0500 received badge  Notable Question (source)
2016-07-08 05:29:42 -0500 received badge  Famous Question (source)
2016-07-05 08:42:14 -0500 received badge  Notable Question (source)
2016-06-30 22:52:45 -0500 received badge  Notable Question (source)
2016-06-28 04:01:16 -0500 received badge  Notable Question (source)
2016-06-28 04:01:16 -0500 received badge  Popular Question (source)
2016-06-27 15:50:40 -0500 received badge  Popular Question (source)
2016-06-21 12:50:43 -0500 received badge  Notable Question (source)
2016-06-15 01:26:25 -0500 received badge  Popular Question (source)
2016-06-14 08:26:47 -0500 commented question Launched instance failed with error - Liberty release

No Open vSwitch, since I am following the Liberty RHEL guide, and only a private network (no public network).
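
For context, a minimal sketch of the Linux bridge agent settings the Liberty RHEL guide prescribes for this kind of private-only (self-service) setup; PROVIDER_INTERFACE_NAME and OVERLAY_INTERFACE_IP_ADDRESS are placeholders, not values from this deployment:

/etc/neutron/plugins/ml2/linuxbridge_agent.ini (sketch)
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver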

2016-06-14 08:26:04 -0500 commented question Launched instance failed with error - Liberty release

Server log:

2016-06-14 18:46:39.119 13724 ERROR neutron.plugins.ml2.managers [req-10de3fd6-67a0-41ce-9377-096a0711e9e7 041240a0cd9a49629f88bc587ac7876a 693e487029a04138abc75f4a8127117f - - -] Failed to bind po

2016-06-14 05:23:35 -0500 asked a question Launched instance failed with error - Liberty release

Dear OpenStack experts, I am using only a private network (self-service network) and did the configuration as per the Liberty guide; I also used kvm as the libvirt virt_type. I created the private network and subnet and am trying to launch an instance with the command-line option nova boot ... While using the command I can see scheduling and spawning, but after that it throws an error.

Following are the logs, from nova-compute.log on the compute node:

2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager Traceback (most recent call last):
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1564, in _allocate_network_async
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager     dhcp_options=dhcp_options)
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 727, in allocate_for_instance
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager     self._delete_ports(neutron, instance, created_port_ids)
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager     six.reraise(self.type_, self.value, self.tb)
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 719, in allocate_for_instance
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager     security_group_ids, available_macs, dhcp_opts)
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 342, in _create_port
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager     raise exception.PortBindingFailed(port_id=port_id)
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager PortBindingFailed: Binding failed for port edd06d07-addf-4132-8310-1ab7c2300e0d, please check neutron logs for more information.
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [req-9830fcd3-d807-416d-8ab9-74bb8b7eb7cd 22ec63ad76594b32a982ab32c6c26816 94513aaa320943d2b9bdb7f681545c00 - - -] [instance: e6832245-d2ad-4ddc-b0e7-404290b20a83] Instance failed to spawn
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [instance: e6832245-d2ad-4ddc-b0e7-404290b20a83] Traceback (most recent call last):
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [instance: e6832245-d2ad-4ddc-b0e7-404290b20a83]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2155, in _build_resources
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [instance: e6832245-d2ad-4ddc-b0e7-404290b20a83]     yield resources
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [instance: e6832245-d2ad-4ddc-b0e7-404290b20a83]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in _build_and_run_instance
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [instance: e6832245-d2ad-4ddc-b0e7-404290b20a83]     block_device_info=block_device_info)
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [instance: e6832245-d2ad-4ddc-b0e7-404290b20a83]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2512, in spawn
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [instance: e6832245-d2ad-4ddc-b0e7-404290b20a83]     write_to_disk=True)
2016-06-14 09:54:45.406 122194 ERROR nova.compute.manager [instance ...
(more)
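
A hedged diagnostic sketch for the PortBindingFailed above, assuming the Linux bridge mechanism driver from the Liberty guide (adjust names to the actual deployment):

# On the controller, confirm the L2 agent on the compute node is registered and alive:
neutron agent-list
# Verify the ML2 mechanism driver matches the agent that is actually running:
grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
# For the Liberty Linux bridge setup this is typically: mechanism_drivers = linuxbridge,l2population
# Then read the binding failure detail in the neutron server log:
grep "Failed to bind port" /var/log/neutron/server.log
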
2016-06-13 09:14:52 -0500 received badge  Popular Question (source)
2016-06-09 21:07:18 -0500 received badge  Popular Question (source)
2016-06-09 07:22:08 -0500 received badge  Popular Question (source)
2016-06-08 03:42:13 -0500 commented answer Cinder-volume service is not able to run directly through systemctl start openstack-cinder-volume.service

2016-06-07 15:53:14.605 87630 DEBUG oslo_concurrency.lockutils [-] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:198
2016-06-07 15:53:14.605 87630 DEBUG oslo_concurrency.lockutils [-] Releasing sem

2016-06-08 03:40:42 -0500 answered a question Cinder-volume service is not able to run directly through systemctl start openstack-cinder-volume.service

RHEL manual, 3-node configuration: Controller, Compute and Storage.

Storage node cinder-volume log:

2016-06-07 15:53:14.607 87630 ERROR cinder Traceback (most recent call last):
2016-06-07 15:53:14.607 87630 ERROR cinder   File "/usr/bin/cinder-volume", line 10, in <module>
2016-06-07 15:53:14.607 87630 ERROR cinder     sys.exit(main())
2016-06-07 15:53:14.607 87630 ERROR cinder   File "/usr/lib/python2.7/site-packages/cinder/cmd/volume.py", line 85, in main
2016-06-07 15:53:14.607 87630 ERROR cinder     launcher.launch_service(server)
2016-06-07 15:53:14.607 87630 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 458, in launch_service
2016-06-07 15:53:14.607 87630 ERROR cinder     self._start_child(wrap)
2016-06-07 15:53:14.607 87630 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 431, in _start_child
2016-06-07 15:53:14.607 87630 ERROR cinder     if not _is_sighup_and_daemon(signo):
2016-06-07 15:53:14.607 87630 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 78, in _is_sighup_and_daemon
2016-06-07 15:53:14.607 87630 ERROR cinder     return _is_daemon()
2016-06-07 15:53:14.607 87630 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 60, in _is_daemon
2016-06-07 15:53:14.607 87630 ERROR cinder     is_daemon = os.getpgrp() != os.tcgetpgrp(sys.stdout.fileno())
2016-06-07 15:53:14.607 87630 ERROR cinder OSError: [Errno 5] Input/output error
2016-06-07 15:53:14.607 87630 ERROR cinder
2016-06-07 15:53:14.609 87555 INFO oslo_service.service [req-bb703add-d42e-43cd-8dd2-70e6973e3bf3 - - - - -] Caught SIGHUP, stopping children
2016-06-07 15:53:14.610 87555 DEBUG oslo_concurrency.lockutils [req-bb703add-d42e-43cd-8dd2-70e6973e3bf3 - - - - -] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:198
2016-06-07 15:53:14.610 87555 DEBUG oslo_concurrency.lockutils [req-bb703add-d42e-43cd-8dd2-70e6973e3bf3 - - - - -] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:211
2016-06-07 15:53:14.611 87555 CRITICAL cinder [req-bb703add-d42e-43cd-8dd2-70e6973e3bf3 - - - - -] OSError: [Errno 5] Input/output error
2016-06-07 15:53:14.611 87555 ERROR cinder Traceback (most recent call last):
2016-06-07 15:53:14.611 87555 ERROR cinder   File "/usr/bin/cinder-volume", line 10, in <module>
2016-06-07 15:53:14.611 87555 ERROR cinder     sys.exit(main())
2016-06-07 15:53:14.611 87555 ERROR cinder   File "/usr/lib/python2.7/site-packages/cinder/cmd/volume.py", line 97, in main
2016-06-07 15:53:14.611 87555 ERROR cinder     launcher.wait()
2016-06-07 15:53:14.611 87555 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 518, in wait
2016-06-07 15:53:14.611 87555 ERROR cinder     if not _is_sighup_and_daemon(self.sigcaught):
2016-06-07 15:53:14.611 87555 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 78, in _is_sighup_and_daemon
2016-06-07 15:53:14.611 87555 ERROR cinder     return _is_daemon()
2016-06-07 15:53:14.611 87555 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 60, in _is_daemon
2016-06-07 15:53:14.611 87555 ERROR cinder     is_daemon = os.getpgrp() != os.tcgetpgrp(sys.stdout.fileno())
2016-06-07 15:53:14.611 87555 ERROR cinder OSError: [Errno 5 ... (more)

2016-06-08 03:35:39 -0500 received badge  Notable Question (source)
2016-06-08 02:19:23 -0500 asked a question Cinder-volume service is not able to run directly through systemctl start openstack-cinder-volume.service

Dear all, I am working on an OpenStack deployment and am facing one critical issue related to the cinder-volume service.

I am not able to run it via: systemctl start openstack-cinder-volume.service target.service

Once I run cinder-volume as a process, however, it works fine: exec /bin/sh -c "exec cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/cinder-volume.log" cinder

Please help on this.
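
One way to narrow this down (a sketch, not a confirmed fix): the traceback posted in the answer above fails inside oslo_service's _is_daemon() check on stdout, so compare the unit's ExecStart with the manual invocation that works:

# Show the unit file actually in effect, including any drop-ins:
systemctl cat openstack-cinder-volume.service
# Inspect recent unit output for the OSError:
journalctl -u openstack-cinder-volume.service -n 50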

2016-06-06 07:47:47 -0500 received badge  Popular Question (source)
2016-06-06 05:19:09 -0500 commented answer Cinder Volume service is down on Controller Node

See the cinder service-list output (truncated):

cinder service-list
+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |             Host            | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------

2016-06-06 01:34:37 -0500 received badge  Editor (source)
2016-06-06 01:34:10 -0500 answered a question Cinder Volume service is down on Controller Node

Some more information :

Disk /dev/sdk: 599.6 GB, 599550590976 bytes, 1170997248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000079c8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdk1            2048   398460927   199229440   83  Linux
/dev/sdk2   *    398460928   608176127   104857600   83  Linux
/dev/sdk3        608176128   796919807    94371840   83  Linux
/dev/sdk4        796919808  1170997247   187038720    5  Extended
/dev/sdk5        796921856   931139583    67108864   82  Linux swap / Solaris
/dev/sdk6        931141632  1119885311    94371840   83  Linux

Disk /dev/sda: 1200.2 GB, 1200243695616 bytes, 2344225968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 1200.2 GB, 1200243695616 bytes, 2344225968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdh: 1200.2 GB, 1200243695616 bytes, 2344225968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@pun01cloud03 ~]# pvdisplay
  Logging initialised at Mon Jun  6 17:32:54 2016
  Using physical volume(s) on command line.
  Found same device /dev/sdb with same pvid 3MsYgn5YJv6IbzUXrcnsdw0qKSZanZaw
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               cinder-volumes
  PV Size               1.09 TiB / not usable 1.59 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              286160
  Free PE               286160
  Allocated PE          0
  PV UUID               3MsYgn-5YJv-6Ibz-UXrc-nsdw-0qKS-ZanZaw

Wiping internal VG cache


[root@pun01cloud03 ~]# vgdisplay
Logging initialised at Mon Jun  6 17:33:41 2016
Using volume group(s) on command line.

--- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.09 TiB
  PE Size               4.00 MiB
  Total PE              286160
  Alloc PE / Size       0 / 0
  Free PE / Size        286160 / 1.09 TiB
  VG UUID               G5KKQy-DdCb-m2JQ-4MHb-Iu6W-71e8-txbE3Q

Wiping internal VG cache

[root@pun01cloud03 ~]# grep filter /etc/lvm/lvm.conf | grep -v "#"
filter = [ "a/sdb/", "r/.*/" ]
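
Side note on the filter above: the Liberty guide's rule is that the filter must accept the Cinder device and, only if the operating-system disk itself uses LVM, that disk as well. The fdisk output above shows plain type-83 partitions on /dev/sdk, so the posted filter is probably fine; if the OS did live on LVM on /dev/sdk, the variant would be:

filter = [ "a/sdb/", "a/sdk/", "r/.*/" ]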

2016-06-06 01:20:59 -0500 commented question Cinder Volume service is down on Controller Node

Thanks Edu. Given below is the output:

[root@pun01cloud03 ~]# vgs cinder-volumes
  Logging initialised at Mon Jun  6 17:21:02 2016
  Using volume group(s) on command line.
  VG             #PV #LV #SN Attr   VSize VFree
  cinder-volumes   1   0   0 wz--n- 1.09t 1.09t
  Wiping internal VG cache

2016-06-06 01:08:21 -0500 asked a question Cinder Volume service is down on Controller Node

Dear OpenStack Cinder Experts,

Need your help solving an issue related to the Cinder service; the scenario is as follows. I am using a three-node configuration to deploy the OpenStack Liberty release on RHEL. I am not able to start the cinder-volume service on the storage node, and while debugging I found that it is "not able to initialize driver" and fails. Given below are the controller and storage logs. Please advise.

Cinder-Scheduler.log (Controller Node)

2016-06-01 20:23:41.760 24572 CRITICAL cinder [req-2b9c0274-a846-4767-9865-b55b337cc60d - - - - -] ProgrammingError: (_mysql_exceptions.ProgrammingError)
 (1146, "Table 'cinder.services' doesn't exist") [SQL: u'SELECT services.created_at AS services_created_at, services.updated_at AS services_updated_at, services.deleted_at AS services_deleted_at, services.deleted AS services_deleted, services.id AS services_id, services.host AS services_host, services.`binary` AS services_binary, services.topic AS services_topic, services.report_count AS services_report_count, services.disabled AS services_disabled, services.availability_zone AS services_availability_zone, services.disabled_reason AS services_disabled_reason, services.modified_at AS services_modified_at, services.rpc_current_version AS services_rpc_current_version, services.rpc_available_version AS services_rpc_available_version, services.object_current_version AS services_object_current_version, services.object_available_version AS services_object_available_version \nFROM services \nWHERE services.deleted = false AND services.topic = %s AND services.disabled = false'] [parameters: ('cinder-volume',)]
2016-06-01 20:23:41.760 24572 ERROR cinder ProgrammingError: (_mysql_exceptions.ProgrammingError) (1146, "Table 'cinder.services' doesn't exist") [SQL: u'SELECT services.created_at AS services_created_at, services.updated_at AS services_updated_at, services.deleted_at AS services_deleted_at, services.deleted AS services_deleted, services.id AS services_id, services.host AS services_host, services.`binary` AS services_binary, services.topic AS services_topic, services.report_count AS services_report_count, services.disabled AS services_disabled, services.availability_zone AS services_availability_zone, services.disabled_reason AS services_disabled_reason, services.modified_at AS services_modified_at, services.rpc_current_version AS services_rpc_current_version, services.rpc_available_version AS services_rpc_available_version, services.object_current_version AS services_object_current_version, services.object_available_version AS services_object_available_version \nFROM services \nWHERE services.deleted = false AND services.topic = %s AND services.disabled = false'] [parameters: ('cinder-volume',)]
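
The ProgrammingError above means the cinder database schema was never populated on the controller. The Liberty guide's step for this is:

su -s /bin/sh -c "cinder-manage db sync" cinder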

Cinder-volume log


... log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2239
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager [req-357b18bf-274c-4a8e-a65e-54f8362cd542 - - - - -] Failed to initialize driver.
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager Traceback (most recent call last):
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in init_host
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager     self.driver.check_for_setup_error()
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager     return f(*args, **kwargs)
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 269, in check_for_setup_error
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager     lvm_conf=lvm_conf_file)
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 86, in __init__
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager     if self._vg_exists() is False:
2016-06-03 23:12:02.876 3171 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line ...
(more)
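
The truncated traceback ends in the LVM driver's volume-group check (_vg_exists), i.e. the driver cannot find its volume group. For reference, a sketch of the [lvm] backend section the Liberty RHEL guide uses; the values are the guide's defaults, not confirmed from this deployment, and volume_group must match the VG name shown by vgs:

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
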
2016-06-06 00:57:52 -0500 received badge  Enthusiast
2016-06-03 09:18:57 -0500 asked a question Failed to initialize driver cinder volume

cinder-volume debug log:

2016-06-03 23:12:02.631 3152 DEBUG cinder.volume.manager [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] Cinder Volume DB check: vol_db_empty=True __init__ /usr/lib/python2.7/site-packages/cinder/volume/manager.py:219
2016-06-03 23:12:02.641 3152 DEBUG cinder.volume.drivers.lvm [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] Attempting to initialize LVM driver with the following target_driver: cinder.volume.targets.lio.LioAdm __init__ /usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py:93
2016-06-03 23:12:02.646 3152 DEBUG oslo_concurrency.processutils [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] Running cmd (subprocess): cinder-rtstool verify execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:230
2016-06-03 23:12:02.711 3152 DEBUG oslo_concurrency.processutils [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] CMD "cinder-rtstool verify" returned: 0 in 0.064s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:260
2016-06-03 23:12:02.712 3152 INFO cinder.volume.manager [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] Image-volume cache disabled for host pun01cloud01.ad.infosys.com.
2016-06-03 23:12:02.713 3152 INFO oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] Starting 1 workers
2016-06-03 23:12:02.714 3152 INFO oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] Started child 3171
2016-06-03 23:12:02.715 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] Full set of CONF: wait /usr/lib/python2.7/site-packages/oslo_service/service.py:505
2016-06-03 23:12:02.715 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] ************************* log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2216
2016-06-03 23:12:02.716 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2217
2016-06-03 23:12:02.716 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] command line args: ['--debug'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2218
2016-06-03 23:12:02.716 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] config files: ['/usr/share/cinder/cinder-dist.conf', '/etc/cinder/cinder.conf'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2219
2016-06-03 23:12:02.716 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] ================================================================================ log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2220
2016-06-03 23:12:02.716 3171 INFO cinder.service [-] Starting cinder-volume node (version 7.0.1)
2016-06-03 23:12:02.717 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] allow_availability_zone_fallback = False log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-03 23:12:02.717 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] allowed_direct_url_schemes = [] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-03 23:12:02.717 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] api_paste_config = api-paste.ini log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-03 23:12:02.717 3171 INFO cinder.volume.manager [req-357b18bf-274c-4a8e-a65e-54f8362cd542 - - - - -] Starting volume driver LVMVolumeDriver (3.0.0)
2016-06-03 23:12:02.717 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] api_rate_limit = True log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-03 23:12:02.718 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] auth_strategy = keystone log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-03 23:12:02.718 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] az_cache_duration = 3600 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-03 23:12:02.718 3152 DEBUG oslo_service.service [req-c74daae4-a459-493f-9315-963ec05decf9 - - - - -] backend_host = None log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2229
2016-06-03 23:12:02.718 3152 DEBUG ... (more)

2016-06-03 06:54:32 -0500 asked a question cinder volume not generated on both Controller and Storage node

Dear Cinder Experts

Please give some inputs. I am not able to see any volume.log file under the cinder folder and am not able to start the cinder-volume service on the storage node. Both servers are synchronized. Following are the outputs.

Controller Node

[root@pun01cloud01 admin]# cinder service-list
+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |             Host            | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | pun01cloud01.ad.infosys.com | nova | enabled |   up  | 2016-06-03T17:18:06.000000 |        -        |
+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+

No cinder-volume entry is generated.

Storage Node

Not able to start the cinder-volume service; I am following the RHEL Liberty guide.

No volume.log is generated on either the Controller or the Storage node.
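
When no volume.log appears at all, one way to surface the startup error is to run the service in the foreground with debug logging as the cinder user (a sketch, assuming the RHEL package paths):

su -s /bin/sh -c "cinder-volume --config-file /etc/cinder/cinder.conf --debug" cinder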

2016-06-03 06:34:33 -0500 asked a question cinder volume not generated

Dear Cinder experts, greetings. I am working on deploying OpenStack on CentOS and facing an interesting issue with my Cinder component. I am using a three-node deployment architecture: Controller, Compute and Storage node. Following are some data.

No volume.log file is generated, so I cannot even check what went wrong; I need immediate help. Controller node: my controller node is working fine, but I am not able to see any data for volume.log in the /var/log/cinder folder. I am able to see api.log, cinder-manage.log and scheduler.log. The controller node's cinder.conf [DEFAULT] section is given below.

From cinder.conf:

Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version. (integer value)
backup_metadata_version = 2

The number of chunks or objects, for which one Ceilometer notification will be sent (integer value)
backup_object_number_per_notification = 10

Interval, in seconds, between two progress notifications reporting the backup status (integer value)
backup_timer_interval = 120

The maximum number of items that a collection resource returns in a single response (integer value)
osapi_max_limit = 1000

Base URL that will be presented to users in links to the OpenStack Volume API (string value)
Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
osapi_volume_base_URL = <none>

Ceph configuration file to use. (string value)
backup_ceph_conf = /etc/ceph/ceph.conf

The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None. (string value)
backup_ceph_user = cinder

The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store. (integer value)
backup_ceph_chunk_size = 134217728

The Ceph pool where volume backups are stored. (string value)
backup_ceph_pool = backups

RBD stripe unit to use when creating a backup image. (integer value)
backup_ceph_stripe_unit = 0

RBD stripe count to use when creating a backup image. (integer value)
backup_ceph_stripe_count = 0

If True, always discard excess bytes when restoring volumes i.e. pad with zeroes. (boolean value)
restore_discard_excess_bytes = true

File with the list of available smbfs shares. (string value)
smbfs_shares_config = /etc/cinder/smbfs_shares

Default format that will be used when creating volumes if no volume format is specified. (string value)
Allowed values: raw, qcow2, vhd, vhdx
smbfs_default_volume_format = qcow2

Create volumes as sparsed files which take no space rather than regular files when using raw format, in which case volume creation takes lot of time. (boolean value)
smbfs_sparsed_volumes = true

Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. (floating point value)
smbfs_used_ratio = 0.95

This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid. (floating point value)
smbfs_oversub_ratio = 1.0

Base dir containing mount points for smbfs shares. (string value)
smbfs_mount_point_base = $state_path/mnt

Mount options passed to the smbfs client. See mount.cifs man page for details. (string value)
smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775

Compression algorithm (None to ... (more)

2016-06-03 06:24:18 -0500 asked a question Cinder Service not running

Dear experts, I am working on deploying OpenStack on CentOS and facing an interesting issue with my Cinder component. I am using a three-node deployment architecture: Controller, Compute and Storage node. Following are some data.

No volume.log file is generated, so I cannot even check what went wrong; I need immediate help. Controller node: my controller node is working fine, but I am not able to see any data for volume.log in the /var/log/cinder folder. I am able to see api.log, cinder-manage.log and scheduler.log, and the cinder.conf [DEFAULT] section is given below.

From cinder.conf:

Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version. (integer value)
backup_metadata_version = 2

The number of chunks or objects, for which one Ceilometer notification will be sent (integer value)
backup_object_number_per_notification = 10

Interval, in seconds, between two progress notifications reporting the backup status (integer value)
backup_timer_interval = 120

The maximum number of items that a collection resource returns in a single response (integer value)
osapi_max_limit = 1000

Base URL that will be presented to users in links to the OpenStack Volume API (string value)
Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
osapi_volume_base_URL = <none>

Ceph configuration file to use. (string value)
backup_ceph_conf = /etc/ceph/ceph.conf

The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None. (string value)
backup_ceph_user = cinder

The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store. (integer value)
backup_ceph_chunk_size = 134217728

The Ceph pool where volume backups are stored. (string value)
backup_ceph_pool = backups

RBD stripe unit to use when creating a backup image. (integer value)
backup_ceph_stripe_unit = 0

RBD stripe count to use when creating a backup image. (integer value)
backup_ceph_stripe_count = 0

If True, always discard excess bytes when restoring volumes i.e. pad with zeroes. (boolean value)
restore_discard_excess_bytes = true

File with the list of available smbfs shares. (string value)
smbfs_shares_config = /etc/cinder/smbfs_shares

Default format that will be used when creating volumes if no volume format is specified. (string value)
Allowed values: raw, qcow2, vhd, vhdx
smbfs_default_volume_format = qcow2

Create volumes as sparsed files which take no space rather than regular files when using raw format, in which case volume creation takes lot of time. (boolean value)
smbfs_sparsed_volumes = true

Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. (floating point value)
smbfs_used_ratio = 0.95

This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid. (floating point value)
smbfs_oversub_ratio = 1.0

Base dir containing mount points for smbfs shares. (string value)
smbfs_mount_point_base = $state_path/mnt

Mount options passed to the smbfs client. See mount.cifs man page for details. (string value)
smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775

Compression algorithm (None to disable) (string value)
backup_compression_algorithm = zlib

Use thin ...

(more)
2016-05-23 02:04:14 -0500 asked a question keystone gives authentication error

openstack service create \
  --name keystone --description "OpenStack Identity" identity

The request you have made requires authentication. (HTTP 401) (Request-ID: req-d652ffa9-ceb6-41c8-87d6-3a8bff5033cf)
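
The HTTP 401 usually means no admin credentials were exported before running openstack service create. During bootstrap, the Liberty guide authenticates with the temporary admin token; ADMIN_TOKEN below is a placeholder for the admin_token value in keystone.conf:

export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3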