championofcyrodiil's profile - activity

2015-08-01 02:24:14 -0500 received badge  Famous Question (source)
2015-07-14 21:40:57 -0500 received badge  Notable Question (source)
2015-06-25 13:09:34 -0500 received badge  Popular Question (source)
2015-06-25 13:09:33 -0500 received badge  Enthusiast
2015-06-23 15:56:30 -0500 commented answer cinder multi backend ceph with block device support

Also, I understand this creates more complexity, since Ceph volumes support live migration easily, while BlockDevice requires physically copying the volume data from one compute node to another. We don't plan to migrate these volumes, but we need the disk I/O performance.

2015-06-23 15:54:10 -0500 commented answer cinder multi backend ceph with block device support

By "choose", I mean that if I choose volume type "local" I get BlockDeviceDriver, and with volume type "ceph" I get RBDDriver. I'm wondering how the FilterScheduler will handle allocating "local" volume types to the cinder-volume service on node-53, while allocating "ceph" volume types to any node that can connect to Ceph.
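
The usual way to wire volume types to backends is through the volume_backend_name extra spec; a sketch (the type names "local" and "ceph" here are assumptions matching the comment above):

```shell
# Create one volume type per backend and map each to the
# volume_backend_name declared in the matching cinder.conf section.
cinder type-create local
cinder type-key local set volume_backend_name=blockdevhosts

cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph

# The FilterScheduler's CapabilitiesFilter then routes
#   cinder create --volume-type local ...
# to the [blockdevhost] backend, and --volume-type ceph to [ceph].
```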

2015-06-23 15:51:29 -0500 commented answer cinder multi backend ceph with block device support

Okay, I understand, and that makes some sense. If I start the openstack-cinder-volume service on the specific compute node and then have two cinder-volume services... will they work properly with the same RabbitMQ and MySQL DB? Ultimately I would like to configure the volume types in Horizon to choose

2015-06-23 09:54:23 -0500 asked a question cinder multi backend ceph with block device support

Added a disk to one of the hypervisors (node-53), formatted into five extended partitions with a Linux FS at 30.00GB each.
e.g. /dev/sde1,/dev/sde2,/dev/sde3,/dev/sde4,/dev/sde5

Then created a new availability zone, and added node-53 to the zone:

nova aggregate-create blockdevhosts blockdevhosts

That aggregate had ID 2, so I then added the host with the block devices:

nova aggregate-add-host 2 node-53.domain.com

Then reconfigured my cinder.conf:

[DEFAULT]
amqp_durable_queues=False
kombu_reconnect_delay=5.0
rabbit_hosts=127.0.0.1:5673,10.10.28.2:5673,10.10.28.4:5673
rabbit_userid=nova
rabbit_password=DR3votEu
rabbit_virtual_host=/
rabbit_ha_queues=True
notification_driver=cinder.openstack.common.notifier.rpc_notifier
rpc_backend=cinder.openstack.common.rpc.impl_kombu
control_exchange=cinder
osapi_volume_listen=10.10.28.9
api_paste_config=/etc/cinder/api-paste.ini
glance_api_servers=10.10.28.6:9292
glance_api_version=2
glance_num_retries=0
glance_api_insecure=False
glance_api_ssl_compression=False
host=rbd:volumes
auth_strategy=keystone
debug=True
verbose=True
log_dir=/var/log/cinder
use_syslog=True
use_syslog_rfc_format=True
syslog_log_facility=LOG_LOCAL3

enabled_backends=ceph,blockdevhost

[ceph]
volume_backend_name=ceph
rbd_pool=volumes
rbd_user=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=False
rbd_secret_uuid=a5d0dd94-57c4-ae55-ffe0-7e3732a24455
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
storage_availability_zone=nova

[blockdevhost]
volume_backend_name=blockdevhosts
volume_driver=cinder.volume.drivers.block_device.BlockDeviceDriver
available_devices='/dev/sde1,/dev/sde2,/dev/sde3,/dev/sde4,/dev/sde5'
storage_availability_zone=blockdevhosts

[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[database]
connection=mysql://cinder:Zyc8kTdz@10.10.28.6/cinder?charset=utf8&read_timeout=60
idle_timeout=3600
max_pool_size=10
max_retries=-1
max_overflow=30
[fc-zone-manager]
[keymgr]
[keystone_authtoken]
auth_host=10.10.28.6
auth_port=35357
auth_protocol=http
admin_user=cinder
admin_password=lTqXiBID
admin_tenant_name=services
signing_dir=/tmp/keystone-signing-cinder
signing_dirname=/tmp/keystone-signing-cinder
[matchmaker_redis]
[matchmaker_ring]
[profiler]
[ssl]

Exception starting cinder-volume service:

2015-06-22 20:14:02.578 31841 INFO cinder.openstack.common.service [-] Started child 8831
2015-06-22 20:14:02.580 8831 INFO cinder.service [-] Starting cinder-volume node (version 2014.2)
2015-06-22 20:14:02.585 8831 INFO cinder.volume.manager [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Starting volume driver BlockDeviceDriver (1.0.0)
2015-06-22 20:14:02.627 8831 DEBUG oslo.db.sqlalchemy.session [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/session.py:482
2015-06-22 20:14:02.755 8831 DEBUG cinder.volume.manager [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Re-exporting 0 volumes init_host /usr/lib/python2.6/site-packages/cinder/volume/manager.py:263
2015-06-22 20:14:02.755 8831 DEBUG cinder.volume.manager [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Resuming any in progress delete operations init_host /usr/lib/python2.6/site-packages/cinder/volume/manager.py:304
2015-06-22 20:14:02.756 8831 INFO cinder.volume.manager [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Updating volume status
2015-06-22 20:14:02.757 8831 DEBUG cinder.openstack.common.processutils [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf blockdev --getsz /dev/sde1 execute /usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py:158
2015-06-22 20:14:02.850 8831 DEBUG cinder.openstack.common.processutils [req-ca70e80b-1c6e-47a0-91c8-903c425ba80c - - - - -] Result was 1 execute /usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py:192
2015-06-22 20:14:02.854 8831 ERROR cinder.openstack.common.threadgroup [-] Unexpected error while running command.
Command ...
(more)
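The failing command from the traceback can be reproduced by hand to narrow down whether the problem is the device itself or the rootwrap path; a sketch, assuming the same device names as above:

```shell
# Does the partition actually exist on this node?
ls -l /dev/sde1

# Run the failing command directly as root.
blockdev --getsz /dev/sde1

# Then run it exactly the way cinder does, through rootwrap.
sudo cinder-rootwrap /etc/cinder/rootwrap.conf blockdev --getsz /dev/sde1
```

If the direct call succeeds but the rootwrap call returns 1, check that the rootwrap filters for cinder allow blockdev. Note also that the BlockDeviceDriver can only work on the node that physically has /dev/sde attached.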
2015-03-25 01:55:32 -0500 received badge  Supporter (source)
2015-03-25 01:52:36 -0500 answered a question Configure OpenStack on a WiFi-only system?

I think the portion of documentation you are looking at assumes you have more than one network connection on a physical machine, and mostly Ethernet devices rather than WiFi.

I would look into DevStack if you want to 'play' with OpenStack. Also consider the type of networking service you need: neutron? VLANs?

I just read the answer above, and it is correct: use a virtual machine to install the OpenStack components for testing and learning.

2015-03-25 01:45:58 -0500 commented question Configure OpenStack on a WiFi-only system?

It looks like bridging wlan0 with eth0 has some issues. I'm not sure why you can't just bind the OpenStack processes to the wlan0 interface in the configs, though.

2015-03-18 23:52:51 -0500 answered a question What causes Metadata service to be very slow?

Probably just a slow crawl of the EC2 metadata service. Not sure why yet... [image]

2015-03-11 13:22:01 -0500 received badge  Teacher (source)
2015-03-11 12:30:27 -0500 answered a question The nova-compute service is not listed in the nova service-list .How to solve that error?

If this is a fresh install and it was not configured for you by default, you need to check the installation logs. Otherwise, add it manually if there was no install error: http://docs.openstack.org/havana/inst...

2015-03-10 22:52:11 -0500 commented question Ways to Create Docker Instance.

Also, you can try nova with LXC for nova instances, but I believe this is still heavily in development and not yet stable enough to run Docker the way you might have intended.

2015-03-10 22:46:02 -0500 commented question Ways to Create Docker Instance.

I run minimal Ubuntu x86_64 with the extra kernel image packages for AUFS, then Docker on that. LXC is fast for starting and stopping processes, and uses cgroups for isolation. I never reboot the VMs, just the Docker instances. http://championofcyrodiil.blogspot.co...

2015-03-10 22:41:23 -0500 commented question The nova-compute service is not listed in the nova service-list .How to solve that error?

Did the nova-compute service show up before and has now disappeared? Or is this a fresh install and the service has never shown up?

Run ps -ef, pipe it to grep nova, and see if the compute service is actually running... if it is, I think there should be a database entry for it. /var/log/nova/compute, maybe?
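
A quick sketch of the checks described above (log path varies by distro, so treat it as an assumption):

```shell
# Is the process running on the compute node?
# The [n] trick keeps grep from matching itself.
ps -ef | grep [n]ova-compute

# Is it registered with the control plane?
nova service-list

# Recent output from the compute service log (path may differ).
tail -n 50 /var/log/nova/compute.log
```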

2015-02-18 22:42:35 -0500 received badge  Editor (source)
2015-02-16 08:49:57 -0500 answered a question How does the bridge network talk to the outside world?

Your route configuration for vmnet0 will route requests like ICMP to a 'gateway' using a routing table. Likely, your gateway is the same IP as eth0 on Node 1. In most shells, you can type 'route' to see these tables.

A 'bridge' is supposed to 'bridge' two circuits into one complete circuit. So vmnet0 and vmnet1 on br0 should be able to 'talk' directly, without going through the routing gateway.

Since br0 and eth0 are on the same host, and the TCP/IP protocol cares less about devices than about IPs, requests to something like 192.168.0.1 are 'routed' to an IP that is bound to a device. This allows "public" requests on 172.16.0.0 to go through (be routed to) eth0 with the public VLAN tag, while requests for 192.168.0.0 might go through the "management" VLAN on eth0.
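
A sketch of the commands for inspecting all of this (the interface names match the question and are otherwise assumptions):

```shell
# Kernel routing table, numeric output (no DNS lookups).
route -n            # or: ip route show

# Bridges and the interfaces attached to them.
brctl show          # or: ip link show master br0

# Which IPs are bound to which devices.
ip addr show
```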

I should also add that with VLANs there are multiple networks at play here, and much more going on with the ARP tables and Ethernet frames. The core service for this is neutron, and most of your configuration is defined within the scope of that service.

For example, the Open vSwitch service running on a compute node should be handling the vlans for your public L3 agent traffic's inbound requests from the floating IPs.

This can also be summed up with the image here: [image]

2015-02-15 03:01:07 -0500 answered a question How to customize libvirt.xml for an instance ?

Some settings, such as the hardware video driver (cirrus, vesa, or qxl), can be defined in the glance image metadata. However, this will likely need to be supported by libvirt. I have been able to set qxl and modify video RAM this way in Icehouse.

I also had to modify the flavor's maximum video RAM size.
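
A sketch of the properties involved (image and flavor names are placeholders; values are examples):

```shell
# Ask libvirt for the qxl video driver via image metadata.
glance image-update --property hw_video_model=qxl my-image

# Set the video RAM for the image (in MB)...
glance image-update --property hw_video_ram=64 my-image

# ...and raise the flavor's maximum video RAM to match,
# since hw_video_ram must not exceed the flavor's cap.
nova flavor-key my-flavor set hw_video:ram_max_mb=64
```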