
gfidente's profile - activity

2015-03-12 02:42:42 -0500 received badge  Great Answer (source)
2015-03-12 02:42:42 -0500 received badge  Guru (source)
2014-10-20 00:14:06 -0500 received badge  Guru (source)
2014-10-20 00:14:06 -0500 received badge  Great Answer (source)
2014-01-07 03:09:42 -0500 received badge  Good Answer (source)
2013-11-12 15:04:30 -0500 received badge  Nice Answer (source)
2013-11-01 20:56:34 -0500 received badge  Good Answer (source)
2013-11-01 20:56:11 -0500 received badge  Good Answer (source)
2013-08-16 04:34:34 -0500 commented question When creating a volume from an image, openstack always uses root disk space?

Which backend driver are you using, and what exactly do you mean by "my root volume"?

2013-08-16 04:33:16 -0500 commented question F16 cow images from FedoraPeople/Rackspace broken?

The updated Fedora cloud images are now available via an official page on the Fedora website; I recommend you try those: http://fedoraproject.org/en/get-fedora-options#clouds

2013-08-16 04:19:16 -0500 received badge  Critic (source)
2013-08-16 04:15:22 -0500 commented answer Get a list of virtual machines from stack using heat API

Given that Heat implements the AWS CloudFormation API (https://wiki.openstack.org/wiki/Heat), you can probably learn more about the API calls from the AWS docs (http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/Welcome.html?r=7078)

2013-07-23 01:39:03 -0500 received badge  Enlightened (source)
2013-07-23 01:39:03 -0500 received badge  Good Answer (source)
2013-07-06 02:33:25 -0500 received badge  Nice Answer (source)
2013-06-26 01:16:04 -0500 received badge  Nice Answer (source)
2013-06-22 07:40:42 -0500 received badge  Nice Answer (source)
2013-06-12 09:52:58 -0500 commented answer cannot remove volume

@polslinux I'm not sure how you got into such a situation, but still: ensure the volume target definition files have been deleted and are no longer imported into your tgtd config, then delete the volumes from the database ... also remove the volume's rows from the foreign tables

2013-06-12 09:49:25 -0500 answered a question Get a list of virtual machines from stack using heat API

I suppose you meant to ask how to get a list of instances composing a stack.

For that I'd try the following, assuming you have some wordpress stack running:

# heat resource-list wordpress
| DatabaseTemplate | AWS::CloudFormation::Stack | CREATE_COMPLETE | 2013-06-12T13:52:17Z |
| WebServer        | AWS::EC2::Instance         | CREATE_COMPLETE | 2013-06-12T13:52:26Z |
# heat resource-show wordpress WebServer
[...]
| logical_resource_id    | WebServer |
| physical_resource_id   | 181b6265-21a6-468e-b3c9-939956621fc3 |
[...]

That physical_resource_id is the ID of a Nova instance.
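
To double-check, you can feed that ID straight to the nova client; a minimal sketch, reusing the example UUID from the output above (yours will differ):

# nova show 181b6265-21a6-468e-b3c9-939956621fc3
[...]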

2013-06-12 05:58:00 -0500 answered a question heat stack-list returns Invalid OpenStack Identity credentials

It works fine for me; I think this is a misconfiguration issue.

It looks like the password it is passing is 'secrete' (see the X-Auth-Key: secrete header); is that mistyped, maybe?
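
As a quick sanity check, you can re-source your credentials and retry; a minimal sketch, assuming the standard keystone environment variables (all values below are placeholders for your own):

# export OS_USERNAME=admin
# export OS_PASSWORD=yourrealpassword
# export OS_TENANT_NAME=admin
# export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
# heat stack-list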

2013-06-12 05:46:47 -0500 answered a question unable to install glance-api for Grizzly on Redhat

They are part of the openstack-glance package:

# rpm -qf /usr/bin/glance-api 
openstack-glance-2013.1-3.el6ost.noarch
# rpm -qf /usr/bin/glance-registry
openstack-glance-2013.1-3.el6ost.noarch
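
If the package isn't installed yet, pulling it in should give you both binaries; a minimal sketch, assuming the Grizzly repositories are already enabled on your system:

# yum install openstack-glance
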
2013-06-10 09:45:11 -0500 received badge  Editor (source)
2013-05-20 06:06:12 -0500 answered a question cannot remove volume

You should really try to figure out why the delete is failing in the first place. Set debug=True and verbose=True in cinder.conf, restart the services, and make another delete attempt to see if anything useful pops up in the volume log file.

You can also try using the cinder force-delete command, which bypasses some checks.

Last but not least, if you're sure the volume isn't there anymore and there are no references to it left (e.g. in /etc/cinder/volumes), you can hack into the database and delete it from the volume_metadata and volumes tables.
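
A sketch of that last resort, assuming a MySQL backend and the Grizzly-era schema; verify the table names against your own database, substitute the real volume ID, and take a backup first:

# mysql cinder
mysql> -- delete rows referencing the volume before the volume row itself
mysql> DELETE FROM volume_metadata WHERE volume_id = '<volume-id>';
mysql> DELETE FROM volumes WHERE id = '<volume-id>';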

2013-05-10 07:01:33 -0500 commented answer Upload Snapshots to New Openstack Instance

@tim-bell Sure, you could, for example, copy the images manually and then re-create them from the local files, but the question seems to describe a one-time migration need. This also requires manually relaunching the instances in the new environment; it really isn't a rock-solid solution.

2013-05-10 05:51:52 -0500 answered a question Upload Snapshots to New Openstack Instance

The files in /var/lib/nova are the backing disk images of your running instances. You can't migrate the running instances themselves (unless the environment was configured for live migration).

You can instead:

  1. create snapshots of the running instances with "nova image-create"
  2. publish your /var/lib/glance/images directory via HTTP, OR use glance image-download to copy the snapshots somewhere else and publish that directory via HTTP
  3. import those snapshots into the new environment using "glance image-create --copy-from" (a sketch of these steps follows below)

[EDITED after @lloydde comments]
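
A sketch of those three steps, with a hypothetical instance name, image ID, URL and image format (substitute your own):

(on the old environment)
# nova image-create vm1 vm1-snap
# glance image-download --file /var/www/html/images/vm1-snap.img <image-id>

(on the new environment)
# glance image-create --name vm1-snap --disk-format qcow2 --container-format bare --copy-from http://oldcloud.example.com/images/vm1-snap.img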

2013-05-10 05:16:55 -0500 received badge  Supporter (source)
2013-04-16 23:30:43 -0500 received badge  Nice Answer (source)
2013-04-12 09:34:30 -0500 received badge  Teacher (source)
2013-04-12 09:21:45 -0500 answered a question How are dnsmasq DNS entries generated and FQDN then sent to instances ?

dnsmasq explicitly prevents what you're trying from working; see http://www.thekelleys.org.uk/dnsmasq/docs/setup.html :

By default the names must not have a domain part, i.e. they must just be an alphanumeric name, without any dots.

You can leave the dhcp_domain option empty and create additional networks with nova-manage, assigning each a different 'label'.

Each label is used as a 'tag' for dnsmasq's dhcp-range argument. You can then use a custom dnsmasq config file (see the dnsmasq_config_file option) to set different DHCP options (like the domain suffix) for the different tags, as sketched below.
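
A sketch of the wiring, with hypothetical network labels and domains (nova passes each network's label as the tag on the dhcp-range it generates):

in nova.conf:
dhcp_domain=
dnsmasq_config_file=/etc/nova/dnsmasq-nova.conf

in /etc/nova/dnsmasq-nova.conf:
# hand out a different domain suffix (DHCP option 15) per network label
dhcp-option=tag:net-one,option:domain-name,one.example.com
dhcp-option=tag:net-two,option:domain-name,two.example.com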

2013-04-12 08:22:42 -0500 answered a question Using blade servers with attached storage for compute nodes

If I understand the question correctly, you want to use the attached storage available on the compute nodes to host the live VM images (ephemeral storage).

If the attached storage is not shared between the nodes, you can achieve this by either:

  • mounting the local storage under libvirt's storage pool mountpoint, or
  • setting up the local storage as an LVM volume group and then configuring libvirt_images_type=lvm and libvirt_images_volume_group=GROUPNAME (see the sketch below)
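
A sketch of the LVM option, assuming /dev/sdb is the attached disk and using a placeholder volume group name:

# pvcreate /dev/sdb
# vgcreate nova-local /dev/sdb

then, in nova.conf on each compute node:

libvirt_images_type=lvm
libvirt_images_volume_group=nova-local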

If you do share the attached storage between the nodes, the only option I see is:

  • a shared/distributed file system, like Gluster or Ceph, mounted under libvirt's storage pool mountpoint (see the sketch below)
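
A sketch of the shared option with GlusterFS as an example (volume and host names are placeholders); by default the pool path matches nova's instances_path, /var/lib/nova/instances:

# mount -t glusterfs gluster1.example.com:/nova-instances /var/lib/nova/instances

Repeat the mount on every compute node (e.g. via /etc/fstab) so they all see the same backing files; this is also a prerequisite for live migration.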