2016-11-16 08:49:30 -0600
| received badge | ● Notable Question
|
2016-11-16 08:49:30 -0600
| received badge | ● Popular Question
|
2016-07-07 14:54:36 -0600
| received badge | ● Famous Question
|
2015-10-07 04:39:16 -0600
| received badge | ● Taxonomist
|
2015-04-07 22:23:52 -0600
| received badge | ● Student
|
2015-01-23 01:33:36 -0600
| received badge | ● Famous Question
|
2015-01-02 16:40:13 -0600
| marked best answer | Wrong ScalingPolicy alarm_url I wrote a minimal example for a triggered ScalingGroup, but something is wrong with the alarm_url.
heat_template_version: 2013-05-23
parameters:
  key:
    type: string
    default: demo-key
  flavor:
    type: string
    default: m1.small
  image:
    type: string
    default: cirros-0.3.2-x86_64
  network:
    type: string
    default: int-net1
resources:
  myServerGroup:
    type: OS::Heat::AutoScalingGroup
    properties:
      max_size: 5
      desired_capacity: 1
      min_size: 1
      resource:
        type: OS::Nova::Server
        properties:
          key_name: {get_param: key}
          image: {get_param: image}
          flavor: {get_param: flavor}
          networks: [ {network: {get_param: network}} ]
  scale_UP_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: myServerGroup }
      cooldown: 10
      scaling_adjustment: 1
outputs:
  UPscalingURL:
    value: {get_attr: [scale_UP_policy, alarm_url]}
The output is:
http://controller:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3Aad948ee0c33749508d76b95f33e7e02d%3Astacks%2Ftest%2F9ec989d2-ce9d-4d73-be6c-06ba7702b60a%2Fresources%2Fscale_UP_policy?Timestamp=2014-09-22T09%3A27%3A52Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=ec9371233baf46bd95b3150ff00f79d4&SignatureVersion=2&Signature=p0zVObaO5wMpniREfOrZs4jfFrCH9UIvfcCmHqG35vM%3D
I see that the URL contains percent-encoded characters (%3A = ":", %2F = "/"), but how can I fix this? Even when I replace all the encoded characters with the correct symbols, the link doesn't work.
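A minimal sketch of exercising such a URL, assuming the `requests` library is installed and that the truncated value below is a placeholder for the full alarm_url from the stack outputs; the empty-POST behaviour is how Ceilometer alarm actions invoke these pre-signed URLs:
import urllib.parse
import requests

# Placeholder: paste the complete alarm_url value from the stack outputs here.
alarm_url = "http://controller:8000/v1/signal/arn%3Aopenstack%3Aheat%3A..."

# %3A and %2F are ordinary URL percent-encoding, not corruption; decode only
# to read the ARN, and send the request with the URL exactly as Heat returned it.
print(urllib.parse.unquote(alarm_url))

# The pre-signed URL is meant to receive an empty POST; a plain browser GET
# will not trigger the scaling policy.
resp = requests.post(alarm_url)
print(resp.status_code, resp.text)
|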
2015-01-02 16:40:01 -0600
| marked best answer | Floating Loadbalancing IP So this morning I finished my load-balancing test template. Everything went well and the stack was created completely, but unfortunately something doesn't quite work as expected. 1) I added a floating IP resource for my pool, but the floating IP addresses weren't created.
pool:
  type: OS::Neutron::Pool
  properties:
    protocol: HTTP
    monitors: [{get_resource: monitor}]
    subnet_id: {get_param: subnet}
    lb_method: ROUND_ROBIN
    vip:
      protocol_port: 80
lb:
  type: OS::Neutron::LoadBalancer
  properties:
    protocol_port: 80
    pool_id: {get_resource: pool}
lb_floating:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network_id: {get_param: public}
    port_id: {get_attr: [pool, vip, port_id]}
public = the ID of my external network (10.0.x.x) and subnet = the subnet ID of my internal network (196.0.x.x). All my instances were created in the internal network, but none has an external IP. Do I have to create an extra port resource for the pool?
Update: output of resource-show for the floating IP address:
+------------------------+-----------------------------------------...+
| Property | Value ...|
+------------------------+-----------------------------------------...+
| description | ...|
| links | http://controller:8004/v1/ad948ee0c33749...|
| | http://controller:8004/v1/ad948ee0c33749...|
| logical_resource_id | lb_floating ...|
| physical_resource_id | a3fd80f3-ec10-48fa-9d70-1c9f72bd0dcd ...|
| required_by | ...|
| resource_name | lb_floating ...|
| resource_status | CREATE_COMPLETE ...|
| resource_status_reason | state changed ...|
| resource_type | OS::Neutron::FloatingIP ...|
+------------------------+-----------------------------------------...+
Output of neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 0f4640f6-a307-4d81-8302-d244c26812ff |                  | 10.0.100.16         |                                      |
| 19659c84-0362-4a9c-bd22-5790257d4c82 |                  | 10.0.100.21         |                                      |
| 26d19489-2760-42bb-adfc-d906cf6f7eb1 |                  | 10.0.100.76         |                                      |
| 31749b21-6ce6-4f16-ae73-21382d0a5f69 |                  | 10.0.100.18         |                                      |
| 41bb0da2-f58d-4cb2-8852-94c3d42da255 |                  | 10.0.100.20         |                                      |
| 7c802527-7d26-42bd-ad65-b62a83934876 |                  | 10.0.100.77         |                                      |
| a3fd80f3-ec10-48fa-9d70-1c9f72bd0dcd | 192.168.1.85     | 10.0.100.89         | 1d287d0c-3e7a-4afa-b58a-e80fd61118ed |
| ac101ef7-447f-4db2-bfd4-23e428dd6b01 |                  | 10.0.100.14         |                                      |
| aed67890-321f-4dab-a31b-1517363c4b20 |                  | 10.0.100.22         |                                      |
| be562ec7-6a60-4a80-8ee5-93fa3efbba48 |                  | 10.0.100.19         |                                      |
| c2754b98-3781-4d88-a504-d841cb212efb |                  | 10.0.100.15         |                                      |
| d6c6a48f-3040-48a7-8bfc-56b2721b6efd |                  | 10.0.100.17         |                                      |
| f040c26e-91b9-451c-97e7-07f8818deba8 |                  | 10.0.100.12         |                                      |
| fd3b847e-76f0-4a8a-bbe0-287d70e3c4fb |                  | 10.0.100.13         |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+
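A minimal sketch of checking programmatically which port each floating IP is attached to, via the Neutron REST API; the endpoint URL and token below are placeholders, and only floating IPs with a port_id are actually associated (in the listing above that is the one bound to fixed IP 192.168.1.85):
import requests

# Placeholders: real values depend on the deployment.
NEUTRON_URL = "http://controller:9696"
TOKEN = "PASTE-A-VALID-TOKEN-HERE"  # e.g. obtained with keystone token-get

resp = requests.get(NEUTRON_URL + "/v2.0/floatingips",
                    headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()

# Print each floating IP together with the fixed IP and port it is bound to.
for fip in resp.json()["floatingips"]:
    print(fip["floating_ip_address"], fip["fixed_ip_address"], fip["port_id"])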
|
2015-01-02 16:39:14 -0600
| received badge | ● Notable Question
|
2015-01-02 16:39:14 -0600
| received badge | ● Famous Question
|
2015-01-01 10:04:53 -0600
| received badge | ● Famous Question
|
2014-11-25 16:27:11 -0600
| received badge | ● Famous Question
|
2014-11-14 10:28:01 -0600
| received badge | ● Notable Question
|
2014-11-11 08:21:17 -0600
| received badge | ● Notable Question
|
2014-10-21 07:52:27 -0600
| received badge | ● Famous Question
|
2014-10-19 19:32:52 -0600
| received badge | ● Popular Question
|
2014-10-13 02:31:44 -0600
| commented answer | Alarm Trigger Delay Unfortunately this doesn't seem to solve the problem :/
I changed the interval in /etc/ceilometer/pipeline.yaml and restarted all ceilometer services, but it still takes up to 10 minutes until the alarm triggers. I updated the question with the alarm history. |
2014-10-13 00:59:38 -0600
| received badge | ● Teacher
|
2014-10-13 00:59:38 -0600
| received badge | ● Self-Learner
|
2014-10-09 02:18:21 -0600
| received badge | ● Notable Question
|
2014-10-08 09:11:08 -0600
| received badge | ● Popular Question
|
2014-10-08 02:55:00 -0600
| commented question | no instance data found in start-local I have reset my controller VM from a snapshot, but I haven't changed any passwords manually.
How can I check that the passwords are still correct?
Commands like nova keypair-list are working. |
2014-10-07 07:21:34 -0600
| asked a question | no instance data found in start-local My OpenStack worked completely fine for several days, but suddenly instances no longer get created completely.
I'm watching three logs (nova-api.log, keystone.log & neutron.log) while I create the instance and nothing appears. I have no clue what's wrong, just the error message from the instance console at boot: http://postimg.org/image/wx8wy0hu5/ Something I noticed is that the token TTL is wrong: when I get a token via keystone token-get it has already expired one hour ago. I changed the expiration time to 86400 seconds (24 hours), so now the token expires in 22 hours! So keystone uses a wrong time, but that's not the reason for this error, is it? After 24 minutes the keystone.log shows: WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of Icehouse in favor of support for "application/json" only and may be removed in K.
New line in nova.log when I start another instance: WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token
After one day off I got some messages in the logs when I try to create an instance via nova boot. nova-api.log: WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token
keystone.log: WARNING keystone.common.wsgi [-] Authorization failed. The request you have made requires authentication. from 10.0.9.10
neutron.log: WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token
10.0.9.10 is my controller. So I see that I can't authenticate, I just don't know why. Could there be something wrong with my openstack.rc file? I think I found something in nova-scheduler.log: WARNING nova.scheduler.host_manager [req-02dd9235-49e0-47d3-96e5-d53f31982d1c a9ed4eb9161244bdb9306f589f301e08 ad948ee0c33749508d76b95f33e7e02d] Host has more disk space than database expected (4gb > -33gb)
My dashboard says under hypervisors that 40 GB of 14 GB is used. All 40 GB are on the compute2 node. How can this be possible when the node only has 7 GB of storage? Is something wrong with the database?
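A minimal sketch of the clock check implied by the 22-hour expiry, assuming the expiry string is copied from keystone token-get and that its exact format matches the placeholder below:
from datetime import datetime, timezone

# Placeholder: paste the "expires" value shown by keystone token-get.
token_expires = "2014-10-08T03:55:00Z"

expires = datetime.strptime(token_expires, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
remaining = expires - datetime.now(timezone.utc)

# With an 86400 s TTL a freshly issued token should have about 24 h left;
# roughly 22 h (or a negative value) points to a clock offset between the
# controller and this host.
print("time until token expires:", remaining)
|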
2014-10-02 15:10:09 -0600
| received badge | ● Famous Question
|
2014-10-02 15:10:04 -0600
| received badge | ● Notable Question
|
2014-10-02 15:03:28 -0600
| received badge | ● Notable Question
|
2014-10-02 08:07:20 -0600
| marked best answer | Alarm Trigger Delay I have a complete load-balanced ScalingGroup, but it takes 10 minutes to create or delete a new instance.
Policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: myServer_Group}
    cooldown: 30
    scaling_adjustment: 1
Alarm:
  type: OS::Ceilometer::Alarm
  properties:
    meter_name: cpu_util
    statistic: avg
    period: 30
    evaluation_periods: 1
    threshold: 50
    alarm_actions:
      - {get_attr: [Policy, alarm_url]}
    comparison_operator: gt
So I set the policy cooldown to 30 seconds and the period of the alarm also to 30 seconds. I keep my instance at 90% CPU so that the alarm should trigger within the first (or second) period. Why does it still take 10 minutes to trigger? I hope that someone can explain these events to me, and thanks to everyone who helps me with my questions.
ceilometer alarm-history -a 5507767f-1b31-461d-a6c1-9a855ebbdbb0
+------------------+----------------------------+--------------------------------------------+
| Type | Timestamp | Detail |
+------------------+----------------------------+--------------------------------------------+
| creation | 2014-10-02T12:53:02.986000 | name: Scal-cpu_alarm_HIGH-zjhgpr3ct2qy |
| | | type: threshold |
| | | rule: cpu_util > 50.0 during 1 x 60s |
| state transition | 2014-10-02T13:03:35.944000 | state: alarm |
| state transition | 2014-10-02T13:05:35.942000 | state: insufficient data |
| state transition | 2014-10-02T13:13:36.008000 | state: alarm |
| state transition | 2014-10-02T13:15:36.004000 | state: insufficient data |
+------------------+----------------------------+--------------------------------------------+
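For reference, a back-of-the-envelope sketch of where the 10 minutes can come from; the 600 s sampling interval is an assumption about a stock pipeline.yaml, and the real value should be read from /etc/ceilometer/pipeline.yaml:
# Assumed sampling interval; the alarm values below come from the template.
pipeline_interval = 600     # assumed cpu_util sampling interval from pipeline.yaml
alarm_period = 30           # `period` in the Alarm resource
evaluation_periods = 1
cooldown = 30               # ScalingPolicy cooldown

# The alarm can only change state once a fresh sample exists, so a large
# sampling interval dominates the total delay regardless of the alarm period.
worst_case = pipeline_interval + evaluation_periods * alarm_period + cooldown
print("worst-case seconds until a scale-up:", worst_case)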
|
2014-10-02 08:07:19 -0600
| commented answer | Alarm Trigger Delay Unfortunately this doesn't seem to solve the problem :/
I changed the interval in /etc/ceilometer/pipeline.yaml and restarted all ceilometer services, but it still takes up to 10 minutes until the alarm triggers. I updated the question with the alarm history. |
2014-10-01 08:38:02 -0600
| commented answer | heat template alarm Thank you for the answer and the additional information. |
2014-10-01 06:56:35 -0600
| asked a question | heat template alarm So I wrote a Heat template with a simple alarm that triggers my upscaling policy when the CPU is above 75% for 60 s. It works fine, but I never thought about which CPU the alarm monitors. Is it the CPU of the instance that gets scaled up? (I don't think so, because I haven't defined an instance in the alarm.) Or is it the CPU of all resources in the stack, so that when my main server is fully occupied but the other resources are fine, there will be no new instance?
CPU_Alarm_HIGH:
  type: OS::Ceilometer::Alarm
  properties:
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 75
    alarm_actions:
      - {get_attr: [Scale_UP_Policy, alarm_url]}
    comparison_operator: gt
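A toy illustration of the averaging question, with made-up instance names and values: assuming no query/metadata scoping restricts the alarm, the avg statistic is taken over every matched cpu_util sample, so one busy server can be masked by idle ones:
# Made-up sample values purely for illustration.
samples = {"main-server": 95.0, "worker-1": 5.0, "worker-2": 5.0}

avg = sum(samples.values()) / len(samples)
print("avg cpu_util:", avg, "- the alarm fires only if this exceeds the 75 threshold")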
|
2014-09-30 02:22:25 -0600
| received badge | ● Enthusiast
|