Heat HOT template: Wait for volume (block_device_mapping) to be created

asked 2015-11-04 04:47:44 -0500 by Jorge Tudela

Hi all,

I have a template which creates a network with two instances. The instances boot from volumes (block_device_mapping) that are created earlier in the same stack. I create the volumes first and then use the "depends_on:" option on the server instances to wait for the volumes, but apparently it does not wait for the volumes to finish creating (they take around 10 minutes), so I get an error that the volumes are not ready yet.

I'm using heat_template_version: 2013-05-23.

Any hint on how to make the instances wait for the volumes to be created?

Here is the template:

heat_template_version: 2013-05-23

description: Test

parameters:
  network_name:
    type: string
    description: 'Name of the Network'
  network_range:
    type: string
    label: Network CIDR
    description: 'Range of the Network'
  network_gw:
    type: string
    description: 'GW of the Network'
  router_name:
    type: string
    description: 'Name of the Router'
  node1_volume_name:
    type: string
    description: 'Name of the Node1 Volume'
  node2_volume_name:
    type: string
    description: 'Name of the Node2 Volume'
  node1_instance_name:
    type: string
    description: 'Name of the node1 instance'
  node2_instance_name:
    type: string
    description: 'Name of the Node2 instance'
resources:
  network_01:
    type: OS::Neutron::Net
    properties:
      admin_state_up: true
      name: { get_param: network_name }
  subnet_01:
    type: OS::Neutron::Subnet
    properties:
      name: { get_param: network_name }
      cidr: { get_param: network_range }
      enable_dhcp: true
      gateway_ip: { get_param: network_gw }
      network_id: { get_resource: network_01 }
  router_01:
    type: OS::Neutron::Router
    properties:
      admin_state_up: true
      name: { get_param: router_name }
  router_01_gw:
    type: OS::Neutron::RouterGateway
    properties:
      network_id: '150912f4-8d5f-4957-811e-6b9dc633831e'
      router_id: { get_resource: router_01 }
  router_int0:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router_01 }
      subnet_id: { get_resource: subnet_01 }
  node1_port0:
    type: OS::Neutron::Port
    properties:
      admin_state_up: true
      network_id: { get_resource: network_01 }
      security_groups:
        - default
        - java-application-servers
        - ssh
    depends_on: network_01
  node2_port0:
    type: OS::Neutron::Port
    properties:
      admin_state_up: true
      network_id: { get_resource: network_01 }
      security_groups:
        - default
        - WindowsServers
    depends_on: network_01
  node1_volume:
    type: OS::Cinder::Volume
    properties:
      description: 'user: Volume for Node1'
      image: 'Ubuntu14LTS'
      name: { get_param: node1_volume_name }
      size: 10
      availability_zone: nova
    depends_on: network_01
  node2_volume:
    type: OS::Cinder::Volume
    properties:
      description: 'user: Volume for Node2'
      image: 'Windows2008R2StandardSysprep'
      name: { get_param: node2_volume_name }
      size: 40
      availability_zone: nova
    depends_on: network_01
  node1_instance:
    type: OS::Nova::Server
    properties:
      name: { get_param: node1_instance_name }
      flavor: m1.xsmall
      block_device_mapping:
        - volume_id: { get_resource: node1_volume }
          device_name: vda
      availability_zone: nova
      networks:
       - port: { get_resource: node1_port0 }
      key_name: jtudela
    depends_on: node1_volume
  node2_instance:
    type: OS::Nova::Server
    properties:
      name: { get_param: node2_instance_name }
      flavor: m1.medium
      block_device_mapping:
       - volume_id: { get_resource: node2_volume }
         device_name: vda
      availability_zone: nova
      networks:
       - port: { get_resource: node2_port0 }
    depends_on: node2_volume
  node1_publicIP:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: '150912f4-8d5f-4957-811e-6b9dc633831e'
      port_id: { get_resource: node1_port0 }
    depends_on: node1_instance
  node2_publicIP:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: '150912f4-8d5f-4957-811e-6b9dc633831e'
      port_id: { get_resource: node2_port0 }
    depends_on: node2_instance
outputs:
  node1_public_ip:
    description: Floating IP address of node1 in public network
    value: { get_attr: [ node1_publicIP, floating_ip_address ] }
  node2_public_ip:
    description: Floating IP address of node2 in public network
    value: { get_attr: [ node2_publicIP, floating_ip_address ] }
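As an aside, later Heat releases also support a block_device_mapping_v2 property on OS::Nova::Server, which can build the boot volume from an image inside the server resource itself, without a separate OS::Cinder::Volume. A hedged sketch for node1 (whether this property and its image_id/boot_index keys are available depends on your Heat release — check with "heat resource-type-show OS::Nova::Server" first):

```yaml
  node1_instance:
    type: OS::Nova::Server
    properties:
      name: { get_param: node1_instance_name }
      flavor: m1.xsmall
      block_device_mapping_v2:
        # Nova builds a 10 GB bootable volume from the image as part
        # of the server create, so no separate volume resource (and
        # no depends_on) is needed.
        - image_id: 'Ubuntu14LTS'
          volume_size: 10
          boot_index: 0
          delete_on_termination: false
      availability_zone: nova
      networks:
       - port: { get_resource: node1_port0 }
      key_name: jtudela
```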

Comments

The depends_on is actually redundant here, because { get_resource: node1_volume } also adds a dependency. The OS::Cinder::Volume resources shouldn't get marked COMPLETE until Cinder reports they are in the "available" state. Please raise a bug if this is not the case.

zaneb ( 2016-02-01 11:53:32 -0500 )
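To illustrate the comment above with a trimmed fragment (resource names shortened here for illustration): the get_resource reference alone is enough to make Heat serialize the two resources, so the explicit depends_on can be dropped.

```yaml
resources:
  boot_volume:
    type: OS::Cinder::Volume
    properties:
      image: 'Ubuntu14LTS'
      size: 10
  server:
    type: OS::Nova::Server
    properties:
      flavor: m1.xsmall
      block_device_mapping:
        # get_resource creates an implicit dependency: Heat will not
        # start creating the server until boot_volume reaches
        # CREATE_COMPLETE, i.e. Cinder reports it as "available".
        - volume_id: { get_resource: boot_volume }
          device_name: vda
    # no depends_on needed here
```

If the server really does start building before the volume is "available", that is a Heat bug (as the comment says), not a dependency problem in the template.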