erivni's profile - activity

2015-08-26 14:14:02 -0500 received badge  Famous Question (source)
2015-04-29 08:10:54 -0500 received badge  Famous Question (source)
2015-02-02 12:00:02 -0500 received badge  Notable Question (source)
2015-01-18 05:49:57 -0500 received badge  Notable Question (source)
2014-12-30 05:21:12 -0500 commented question Access a container from a nova server inside heat

Hi, I'm using a public container. Here's the code:

swift_container:
  type: OS::Swift::Container
  properties:
    name: myContainer

I can't access this container from a user defined inside the template (AWS::IAM::User). I can only get access by passing my os-username, os-tenant and os-password.
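For reference, this is roughly what I'd expect a publicly readable container to look like (a sketch only; I'm assuming the X-Container-Read property is available in this Heat version and that the '.r:*' ACL grants anonymous reads):

swift_container:
  type: OS::Swift::Container
  properties:
    name: myContainer
    # assumption: '.r:*' is the Swift ACL for anonymous read access
    X-Container-Read: '.r:*'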

2014-12-30 04:51:21 -0500 received badge  Popular Question (source)
2014-12-30 01:15:28 -0500 received badge  Supporter (source)
2014-12-30 01:15:10 -0500 commented answer Get outputs from nested stack

Thanks! It's exactly what I need!

2014-12-29 04:55:02 -0500 received badge  Popular Question (source)
2014-12-25 03:58:31 -0500 asked a question Access a container from a nova server inside heat

Hi,

I want to create a template which deploys a Swift container and a Nova server. The user_data inside the server will upload/download files to/from this container.

I tried to create a user with the AWS::IAM::User, AWS::IAM::AccessKey and OS::Heat::AccessPolicy resources, but when I try to access the container I get an authentication failure. Is there a way to access the container without passing the OS username and password credentials as template parameters? A sketch of what I tried is below.
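Roughly what I tried (a sketch from memory; the resource names are mine, and I may be misusing the policy):

access_policy:
  type: OS::Heat::AccessPolicy
  properties:
    # grant the user access to the container resource by logical name
    AllowedResources: [ swift_container ]

stack_user:
  type: AWS::IAM::User
  properties:
    Policies: [ { get_resource: access_policy } ]

user_key:
  type: AWS::IAM::AccessKey
  properties:
    UserName: { get_resource: stack_user }

The server's user_data then gets the key pair via { get_resource: user_key } (the access key ID) and { get_attr: [user_key, SecretAccessKey] }.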

Thanks for your help,

Einav.

2014-12-24 04:14:59 -0500 asked a question Get outputs from nested stack

Hi,

Is there a way in Heat to extract nested template outputs?

Here's a simple example of nested stacks.

I found the Template composition document in the OpenStack docs.

But is there a way to get nested template outputs even if they are not exposed as attributes of nested resources? A minimal sketch of what I mean is below.
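To make it concrete (file, resource and output names are made up):

nested.yaml:

heat_template_version: 2013-05-23
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.small
outputs:
  server_ip:
    description: IP of the nested server
    value: { get_attr: [server, first_address] }

parent.yaml:

heat_template_version: 2013-05-23
resources:
  nested_stack:
    # template resource via provider/environment composition
    type: nested.yaml
outputs:
  nested_ip:
    # nested outputs appear as attributes of the template resource
    value: { get_attr: [nested_stack, server_ip] }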

Thanks!

2014-10-02 01:40:54 -0500 received badge  Scholar (source)
2014-10-02 01:35:47 -0500 received badge  Editor (source)
2014-10-02 01:32:47 -0500 answered a question Create a highly available Auto Scaling Group

Thanks a lot! It solved the bug.

But there's still a problem with my solution: my Ceilometer alarms are always in the 'insufficient data' state. Based on your WordPress post, I guess my problem is with the 'matching_metadata' property.

I'm using the following metadata:

instance metadata: {groupname: 'myGroup'}

alarm matching_metadata: {metadata.metering.groupname: 'myGroup'}

but it's not working; a sketch of the pairing I'm aiming for is below.
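For reference, here's a hedged sketch of the pairing as I understand it from the post (I'm assuming Ceilometer only copies instance metadata keys prefixed with 'metering.' into the samples' user_metadata; values are illustrative):

server:
  type: OS::Nova::Server
  properties:
    # assumption: keys need the 'metering.' prefix to reach the samples
    metadata: { 'metering.groupname': 'myGroup' }

cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    meter_name: cpu_util
    statistic: avg
    period: '60'
    evaluation_periods: '1'
    threshold: '50'
    comparison_operator: gt
    # assumption: prefixed keys show up under metadata.user_metadata.*
    matching_metadata: { 'metadata.user_metadata.groupname': 'myGroup' }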

Also, I'd like to ask whether there's another way to implement an HA scaling group. Ceilometer does not support a heartbeat meter, so when I try to use Ceilometer with cfn-push-stats it fails with KeyError: 'counter_name'. I also couldn't use a LaunchConfiguration resource, because then I wouldn't be able to restart a failed server: HARestarter requires an instance_id property, which I can't get from an AWS ScalingGroup resource.

Thanks again.

2014-09-29 18:53:44 -0500 received badge  Famous Question (source)
2014-09-28 03:58:57 -0500 received badge  Good Question (source)
2014-09-28 03:58:53 -0500 received badge  Notable Question (source)
2014-09-24 09:28:06 -0500 received badge  Popular Question (source)
2014-09-24 08:15:07 -0500 received badge  Nice Question (source)
2014-09-17 15:42:15 -0500 received badge  Nice Question (source)
2014-09-17 15:32:32 -0500 received badge  Famous Question (source)
2014-09-17 04:41:45 -0500 asked a question Create a highly available Auto Scaling Group

I'm trying to deploy an HA scaling group using the AutoScalingGroup and HARestarter resources. I combined these 2 templates:

Here are my heat templates for a single HA scaling group.

I'm using Red Hat Icehouse OpenStack (heat 0.2.9) with a CentOS 6.5 image.

While deploying this stack I get the following error for the autoscaling Ceilometer alarm:

/var/log/heat/engine.log:

2014-09-17 12:21:38.952 11734 INFO heat.engine.watchrule [req-868d212b-3e8b-4a30-98ba-39f6c5b65bf0 None] WATCH: stack:a0f6d7ca-b617-42b3-b545-826944f70021, watch_name:haautoscaling-web_server_group-avw5lve4v6pf-ew4m6jnagz7x-2vs7tvmcaomn-heartbeat_failure_alarm, new_state:NORMAL
2014-09-17 12:21:38.953 11734 INFO heat.engine.watchrule [req-868d212b-3e8b-4a30-98ba-39f6c5b65bf0 None] no action for new state NORMAL
2014-09-17 12:21:38.967 11734 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.56.165.25
2014-09-17 12:21:39.133 11734 INFO heat.engine.watchrule [req-0cc46c90-c7c1-4e54-b6ba-db05307400ad None] WATCH: stack:e17b2e84-ae5a-4dd1-9bda-fbb3c8906cf3, watch_name:haautoscaling-web_server_group-avw5lve4v6pf-4sy5bnthbvos-oipuypiu3kye-heartbeat_failure_alarm, new_state:NORMAL
2014-09-17 12:21:39.133 11734 INFO heat.engine.watchrule [req-0cc46c90-c7c1-4e54-b6ba-db05307400ad None] no action for new state NORMAL
2014-09-17 12:21:39.146 11734 ERROR heat.openstack.common.loopingcall [-] in fixed duration looping call
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall Traceback (most recent call last):
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall   File "/usr/lib/python2.7/site-packages/heat/openstack/common/loopingcall.py", line 76, in _inner
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall     self.f(*self.args, **self.kw)
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall   File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 240, in periodic_watcher_task
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall     self.check_stack_watches(sid)
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall   File "/usr/lib/python2.7/site-packages/heat/engine/service.py", line 229, in check_stack_watches
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall     actions = rule.evaluate()
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall   File "/usr/lib/python2.7/site-packages/heat/engine/watchrule.py", line 234, in evaluate
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall     return self.run_rule()
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall   File "/usr/lib/python2.7/site-packages/heat/engine/watchrule.py", line 241, in run_rule
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall     new_state = self.get_alarm_state()
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall   File "/usr/lib/python2.7/site-packages/heat/engine/watchrule.py", line 224, in get_alarm_state
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall     fn = getattr(self, 'do_%s' % self.rule['Statistic'])
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall KeyError: 'Statistic'
2014-09-17 12:21:39.146 11734 TRACE heat.openstack.common.loopingcall

It seems like heat-engine is trying to parse the Ceilometer cpu_alarm_high/low as a CloudWatch alarm. Using each feature alone (autoscaling with Ceilometer, restart with CloudWatch) worked fine and never produced this error.
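The two alarm types use different property schemas, which would line up with the KeyError: 'Statistic' if the watch-rule code evaluates a Ceilometer alarm. A hedged sketch of the contrast (property values are illustrative):

# CloudWatch-style alarm: capitalised properties, evaluated by heat-engine's watch rules
heartbeat_failure_alarm:
  type: OS::Heat::CWLiteAlarm
  properties:
    MetricName: Heartbeat
    Statistic: SampleCount        # the key the traceback fails to find
    Period: '60'
    EvaluationPeriods: '1'
    ComparisonOperator: LessThanThreshold
    Threshold: '1'

# Ceilometer alarm: lowercase properties, evaluated by the Ceilometer service
cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    meter_name: cpu_util
    statistic: avg                # no 'Statistic' key for watchrule.py to look up
    period: '60'
    evaluation_periods: '1'
    comparison_operator: gt
    threshold: '50'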

Is it possible to combine those resources? Is Ceilometer going to ... (more)

2014-09-16 23:38:30 -0500 received badge  Student (source)
2014-09-16 23:37:02 -0500 received badge  Famous Question (source)
2014-09-13 03:22:25 -0500 received badge  Popular Question (source)
2014-09-13 03:22:25 -0500 received badge  Notable Question (source)
2014-08-04 14:52:01 -0500 received badge  Notable Question (source)
2014-07-29 05:56:30 -0500 received badge  Enthusiast
2014-07-27 21:41:32 -0500 received badge  Popular Question (source)
2014-07-02 04:19:36 -0500 answered a question HARestarter deletes the instance and while creating give Port ID error

I have the same 2 problems. I managed to solve the second one by configuring the port's details inside the OS::Nova::Server itself, as follows:

server:
  type: OS::Nova::Server
  properties:
    name: restart_instance
    flavor: { get_param: flavor }
    image: { get_param: image }
    key_name: { get_param: key_name }
    availability_zone: { get_param: availability_zone }
    networks:
      - network: { get_param: network_id }
    security_groups:
      - { get_param: security_group_id }
    user_data: |
      #!/bin/bash -x
      touch /root/test
      echo "Hello, World!" > /root/test

Unfortunately, I couldn't find a solution to the first problem; my instance is constantly being restarted.

Einav.

2014-07-02 02:56:31 -0500 asked a question HARestarter with HOT

Hi all,

I'm trying to create a template which will restart a server on shutdown. I'm using an OS::Heat::CWLiteAlarm as the alarm notifier and an OS::Heat::HARestarter as the restart policy. My problem is that heat-engine restarts my server even when it is up and running. What am I missing?

Thanks, Einav.

-- My template --

heat_template_version: 2013-05-23

description: >
  Simple template demonstrating the use of the HARestarter resource,
  triggering restart via a pre-signed URL

parameters:

  key_name:
    type: string
    description: Name of an existing key pair to use for the instance
  flavor:
    type: string
    description: flavor for the instance to be created
    constraints:
      - allowed_values: [m1.medium, m1.large]
        description: Value must be one of 'm1.medium' or 'm1.large'
  image:
    type: string
    description: Name of the image to use for the instance
  availability_zone:
    type: string 
    description: The Availability Zone in which to launch the instance.
  network_id:
    type: string
    description: ID of network into which servers get deployed
  security_group_id:
    type: string
    description: ID of security group for this instance port

resources:

  server:
    type: OS::Nova::Server
    properties:
      name: restart_instance
      flavor: { get_param: flavor }
      image: { get_param: image }
      key_name: { get_param: key_name }
      availability_zone: { get_param: availability_zone }
      networks:
        - network: { get_param: network_id }
      security_groups: 
        - { get_param: security_group_id }
      user_data:  |
            #!/bin/bash -x
            touch /root/test
            echo "Hello, World!" > /root/test

  restart_policy:
    type: OS::Heat::HARestarter
    properties:
      InstanceId: {get_resource: server} 

  heartbeat_failure_alarm:
    type: OS::Heat::CWLiteAlarm
    properties:
      AlarmActions:  [{ get_resource: restart_policy}]
      AlarmDescription: Restart the Server if it goes down
      ComparisonOperator: LessThanThreshold
      EvaluationPeriods: '1'
      MetricName: Heartbeat
      Namespace: system/linux
      Period: '60'
      Statistic: SampleCount
      Threshold: '1'

outputs:
  RestartURL:
    description: URL to trigger instance restart
    value: { get_attr: [ restart_policy, AlarmUrl ] }
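Since the CWLiteAlarm above counts Heartbeat samples, I understand the instance itself has to push them; here's a hedged sketch of the user_data change I'd expect to need (assuming heat-cfntools with cfn-push-stats is installed on the image and its credentials are set up):

      user_data:
        str_replace:
          template: |
            #!/bin/bash -x
            # push a heartbeat sample every minute so SampleCount stays >= 1
            echo '* * * * * root /opt/aws/bin/cfn-push-stats --heartbeat --watch alarm_name' > /etc/cron.d/heartbeat
          params:
            alarm_name: { get_resource: heartbeat_failure_alarm }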