
job's profile - activity

2017-08-11 04:56:02 -0600 received badge  Famous Question (source)
2017-08-11 04:56:02 -0600 received badge  Notable Question (source)
2016-02-09 15:35:04 -0600 received badge  Famous Question (source)
2016-02-09 15:35:04 -0600 received badge  Notable Question (source)
2016-02-09 15:35:04 -0600 received badge  Popular Question (source)
2015-09-03 17:32:08 -0600 received badge  Famous Question (source)
2015-08-18 14:37:01 -0600 commented question How to autoscale down gracefully with heat

The oldest VM would be deleted when downscaling. Check this

2015-08-13 20:44:52 -0600 received badge  Famous Question (source)
2015-08-11 00:14:31 -0600 received badge  Famous Question (source)
2015-07-03 03:52:45 -0600 received badge  Great Question (source)
2015-06-16 23:25:05 -0600 received badge  Notable Question (source)
2015-06-16 23:25:05 -0600 received badge  Famous Question (source)
2015-06-16 23:25:05 -0600 received badge  Popular Question (source)
2015-06-08 04:30:12 -0600 received badge  Famous Question (source)
2015-05-19 10:11:50 -0600 received badge  Necromancer (source)
2015-05-10 13:25:36 -0600 asked a question How do outputs work for SoftwareDeployments


Do OS::Heat::SoftwareDeployments outputs work the same way as OS::Heat::SoftwareDeployment?

When I use outputs with OS::Heat::SoftwareDeployments, the output comes back empty. However, if I just change the resource type to OS::Heat::SoftwareDeployment, apply it to only one server, and use the same OS::Heat::SoftwareConfig, the value is there. I tested it with:

    myconfig:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        inputs:
          - name: myinput
            description: My input
        outputs:
          - name: result
            description: Output
        config: |
          #!/bin/sh -x
          echo -n "Here it goes" > $heat_outputs_path.result

    deploy:
      # type: OS::Heat::SoftwareDeployment
      type: OS::Heat::SoftwareDeployments
      properties:
        config: {get_resource: myconfig}
        servers:
          wb-server: {get_resource: wb_server}
          # wb-master: { get_attr: [wb_core, master] }
          # wb-backup: { get_attr: [wb_core, backup] }
        input_values:
          myinput: foo

    description: software config output
    value: { get_attr: [deploy, result] }
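
For reference, with the group resource I would expect each declared output to come back as a map keyed by the names used under 'servers', rather than as a single string (this is how the later OS::Heat::SoftwareDeploymentGroup documents its attributes; I have not verified it on this release). A sketch of picking out one member's value, reusing the resource names above:

    wb_server_result:
      description: result for the wb-server member only
      value: { get_attr: [deploy, result, wb-server] }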
2015-05-05 06:07:40 -0600 received badge  Famous Question (source)
2015-05-02 12:10:23 -0600 answered a question Heat AutoScalingGroup attributes current_size, outputs, and outputs_list are always blank

I think you are missing the attribute names for outputs and outputs_list. The docs say:

outputs: A map of resource names to the specified attribute of each individual resource.

outputs_list: A list of the specified attribute of each individual resource. Requires heat_template_version: 2014-10-16 or higher.

Check out this template
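
A minimal sketch of using both attributes, assuming an AutoScalingGroup resource named asg whose members are OS::Nova::Server instances (asg and the first_address attribute are placeholders for your own names):

    outputs:
      member_ip_list:
        description: first_address of each group member, as a list
        value: { get_attr: [asg, outputs_list, first_address] }
      member_ip_map:
        description: map of member resource name to its first_address
        value: { get_attr: [asg, outputs, first_address] }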

2015-05-02 09:41:46 -0600 received badge  Notable Question (source)
2015-05-02 09:41:42 -0600 answered a question How can I assign a specific IP to a VM using HEAT?

It seems your template is wrong. The 'networks' property takes a list of networks. If you want a fixed_ip on a specific network, you can specify it like this:

    - network: {get_param: net_a}
      fixed_ip: {get_param: net_ip}

Answer by Rabi Mishra in Launchpad
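
Put in context, a minimal OS::Nova::Server resource might look like this (the image and flavor parameters are assumed to be defined elsewhere in the template):

    my_server:
      type: OS::Nova::Server
      properties:
        image: { get_param: image }
        flavor: { get_param: flavor }
        networks:
          - network: { get_param: net_a }
            fixed_ip: { get_param: net_ip }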

2015-04-27 16:42:58 -0600 asked a question Credentials for reading STACK outputs from VM inside the stack through HEAT API?

How would you handle Heat authorization within a stack without compromising security?

I would like to read some of a stack's outputs from a VM defined in the Heat template — in particular, the IP list of an autoscaling group, which may vary over the stack's lifecycle. Since I only have access to nova-network, I would like to use this output, along with the scaling webhooks, in a controller VM that gracefully manages autoscaling for the stack. That's why I was thinking of passing the same Keystone credentials I used to create the stack to the controller VM that will do this job. However, I'm also concerned about security: if the VM is compromised, those credentials could give access beyond the stack.

I read that Keystone v3 has domains, but what if I don't have admin credentials to create users?

2015-04-20 20:43:05 -0600 received badge  Popular Question (source)
2015-04-19 19:06:59 -0600 asked a question How can I assign a specific IP to a VM using HEAT?

Using Juno on Ubuntu 14, I tried the following in an OS::Nova::Server resource:

        - fixed_ip: { get_param: specific_ip }
        - network: { get_param: net }

I got the following error:

Stack CREATE FAILED (haproxy): Resource CREATE failed: BadRequest: Bad network format: missing 'uuid'


I tried with "uuid".

        - fixed_ip: { get_param: public_ip }
        - uuid: { get_param: net }

I passed the net parameter with:

    net=$(nova network-list | grep -i -m 1 net | awk '{print $2}')

I got the same error. I should mention I'm using nova-network.

2015-04-14 17:24:30 -0600 received badge  Notable Question (source)
2015-04-14 17:24:30 -0600 received badge  Popular Question (source)
2015-04-14 17:24:29 -0600 answered a question WARNING VMwareVcVmdkDriver -1.3.0 driver is uninitialized

The VMware SDK web service was not running in vCenter because of a hostname resolution problem. That was the reason for the "Service Unavailable" error. I had to add the correct hostname and IP entry to /etc/hosts.

VMware KB
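
The kind of entry involved looks like this (hostname and address here are made up):

    # /etc/hosts on the vCenter host
    192.0.2.10   vcenter.example.local vcenter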

However, I'm still getting an error when trying to create a new volume:

    CinderException: Failed to create api volume flow
2015-04-12 23:23:31 -0600 asked a question WARNING VMwareVcVmdkDriver -1.3.0 driver is uninitialized

I'm using Juno on Ubuntu 14, with vSphere 5.5 and the VMwareVcVmdkDriver. The Cinder services are running, but when I try to create a volume, it fails with (cinder-api.log):

 CinderException: Failed to create api volume flow

When I checked the rest of the log files, there was the following message every few minutes:

WARNING cinder.volume.manager [-] Unable to update volume replication status, VMwareVcVmdkDriver -1.3.0  driver is uninitialized.

And when cinder-volume is restarted, the log shows:

ERROR suds.client
Error encountered during initialization of driver: VMwareVcVmdkDriver
Exception: (503, u'Service Unavailable')

My cinder.conf is:

    rpc_backend = rabbit
    rabbit_host = controller
    rabbit_password = *****
    my_ip =
    verbose = True
    debug = True

    vmware_host_ip =
    vmware_host_username = root
    vmware_host_password = ****
    vmware_wsdl_location = file:///opt/vmware/SDK/vsphere-ws/wsdl/vim25/vimService.wsdl

    rootwrap_config = /etc/cinder/rootwrap.conf
    api_paste_confg = /etc/cinder/api-paste.ini
    iscsi_helper = tgtadm
    volume_name_template = volume-%s
    volume_group = cinder-volumes
    verbose = True
    auth_strategy = keystone
    state_path = /var/lib/cinder
    lock_path = /var/lock/cinder
    volumes_dir = /var/lib/cinder/volumes

    connection = mysql://cinder:*****@controller/cinder

    auth_uri = http://controller:5000/v2.0
    identity_uri = http://controller:35357
    admin_tenant_name = service
    admin_user = cinder
    admin_password = ****

    glance_host = controller
2015-04-12 21:19:57 -0600 commented answer cinder vmware

I have the same error message with Juno on Ubuntu 14. "vmware_wsdl_location" and python-suds are already there in the cinder.conf and the OS.

2015-04-07 11:28:56 -0600 received badge  Taxonomist
2015-04-06 06:35:21 -0600 received badge  Good Question (source)
2015-04-06 06:21:17 -0600 received badge  Famous Question (source)
2015-03-26 00:19:55 -0600 commented question No DHCP lease ml2 linuxbridge vxlan l2population

I'm trying to build a similar lab. There are several config files in a multinode deployment and it is not clear to me where some parameters should be set. You could check this answer

2015-03-17 11:21:48 -0600 answered a question not able to install openstack-nova-compute sysfsutils

For CentOS, the EPEL repo may not be installed:

    yum install epel-release

2015-02-22 04:25:44 -0600 received badge  Notable Question (source)
2015-02-17 11:09:58 -0600 received badge  Popular Question (source)
2015-02-17 01:41:26 -0600 received badge  Nice Question (source)
2015-02-16 12:59:36 -0600 commented question How to autoscale down gracefully with heat

Thanks for the link, but I already have an autoscaling demo deployment with Heat. However, I'm wondering how to avoid disrupting users' connections when downscaling, considering VMs are simply deleted when the policy is triggered.