
JeffBannister's profile - activity

2019-10-29 10:39:05 -0600 received badge  Famous Question (source)
2019-10-29 10:39:05 -0600 received badge  Notable Question (source)
2019-10-28 09:04:14 -0600 received badge  Famous Question (source)
2019-10-21 21:46:51 -0600 received badge  Famous Question (source)
2019-05-02 02:14:47 -0600 received badge  Famous Question (source)
2019-04-28 05:38:39 -0600 received badge  Popular Question (source)
2019-04-28 05:38:34 -0600 received badge  Notable Question (source)
2019-04-26 03:05:47 -0600 commented answer Openstack Rocky keypair issues

Fixed! Since Queens, metadata_agent.ini must refer to the metadata server with nova_metadata_host=<IP address or name> rather than the old nova_metadata_ip option: github
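For anyone hitting the same thing, the relevant section of my metadata_agent.ini now looks roughly like this (a sketch - only the nova_metadata_host line is the actual fix; the port shown is the default):

```ini
[DEFAULT]
# Queens and later: use nova_metadata_host (IP address or hostname)
# instead of the deprecated nova_metadata_ip option
nova_metadata_host = 10.0.1.11
nova_metadata_port = 8775
```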

2019-04-26 01:36:02 -0600 commented answer Openstack Rocky keypair issues

Ahh, my network node is requesting the metadata from 127.0.0.1 even though metadata_agent.ini says nova_metadata_ip=10.0.1.11. I can see an error in the metadata log file saying it's trying 127.0.0.1 port 8775, but I don't know why it's doing that.

2019-04-26 01:13:55 -0600 commented answer Openstack Rocky keypair issues

So I've looked at an older deployment that works (Ocata) and I can see the network node issuing the GET to the controller, but my Rocky node doesn't.

2019-04-26 00:26:03 -0600 answered a question Openstack Rocky keypair issues

Checked the router and the NAT table is fine - same as yours. My setup has 3 nodes: controller, compute & network. I did a Wireshark trace on the network node as follows:

  • capture on a mirror port on br-int, and on the interface between the network node (10.0.1.31) & the controller (10.0.1.11)
  • on the instance (192.168.101.108) I ran curl http://169.254.169.254/, which gives a 500 Internal Error

I can't attach the Wireshark trace file, but I've attached a screen grab of the output (image). After the HTTP GET to 169.254.169.254 there is a set of AMQP messages between the network node and the controller, which I assume is the metadata lookup - they don't appear to be in error. But then 169.254.169.254 returns a 500 error.

2019-04-25 10:35:02 -0600 received badge  Popular Question (source)
2019-04-25 03:08:04 -0600 commented question Openstack Rocky keypair issues

I also tried this and it didn't change the situation: RedHat

2019-04-25 02:24:21 -0600 received badge  Commentator
2019-04-25 02:24:21 -0600 commented question Openstack Rocky keypair issues

Yes, sorry - I realised this after some googling. I'm still getting the failure. Any suggestions on how/where to check the NAT mapping?
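In case it helps frame the question: I believe the mapping lives in the qrouter namespace on the network node, so this is what I'd check (a sketch - the router ID is a placeholder, and I'd expect a REDIRECT to the metadata proxy port, something like):

```console
# ip netns exec qrouter-<router-id> iptables -t nat -S | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
```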

2019-04-25 00:09:03 -0600 commented question Openstack Rocky keypair issues

I'm tracing between the VM/Instance and the meta-data server and I can see that the instance sends:

GET /2009-04-04/meta-data/instance-id HTTP/1.1\r\n

But it's going to 169.254.169.254!

2019-04-24 22:25:30 -0600 commented question Openstack Rocky keypair issues

Will try to troubleshoot the metadata agent. Thanks

2019-04-24 22:24:59 -0600 commented question Openstack Rocky keypair issues

Yes, it does get an IP address:

$ ip route
default via 192.168.101.1 dev eth0 
169.254.169.254 via 192.168.101.1 dev eth0 
192.168.101.0/24 dev eth0  src 192.168.101.106 
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=120 time=34.511 ms

2019-04-24 20:19:04 -0600 commented question Openstack Rocky keypair issues

There's also a fail on the route:

WARN: failed: route add -net "0.0.0.0/0" gw "192.168.101.1"
cirros-ds 'net' up at 13.08
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 13.49. request failed
failed 2/20: up 16.37. request failed

The instance is online

2019-04-24 20:16:54 -0600 commented question Openstack Rocky keypair issues

Thanks Bernd, I looked at the logs for cirros and for an ubuntu instance I created. There are errors around sshd - this is for cirros:

Starting dropbear sshd: failed to get instance-id of datasource
WARN: generating key of type ecdsa failed!

2019-04-24 03:53:41 -0600 commented question Openstack Rocky keypair issues

I've noticed that in the cirros instance created in Rocky there is no .ssh directory, so no authorized_keys file. There is in Ocata.

2019-04-24 03:43:14 -0600 asked a question Openstack Rocky keypair issues

I have installed OpenStack Rocky using Packstack. I cannot connect to instances using keypairs - the instance refuses the connection when I authenticate with the public key. A similar setup with Ocata works fine. I've tried a number of ssh clients and the problem is the same. Could someone provide some guidance as to where I should look to solve this? I've tried verbose ssh on Linux and looked at Wireshark, but can't really see anything that would suggest what's going wrong. TIA
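A first check I'm planning (a sketch - the instance name is a placeholder) is whether the instance reached the metadata service at boot, since that's where the public key is injected from:

```console
$ openstack console log show <instance-name> | grep -i -E 'instance-id|ssh|169.254'
```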

2019-04-17 11:55:32 -0600 received badge  Student (source)
2019-04-16 02:40:58 -0600 received badge  Notable Question (source)
2019-04-16 02:40:58 -0600 received badge  Popular Question (source)
2019-04-05 01:21:17 -0600 received badge  Popular Question (source)
2019-04-03 20:12:12 -0600 asked a question Packstack Sample Answerfile

Has anyone got a sample answer file for Packstack (Rocky) that connects successfully to an external "real" network? I'd appreciate it if you could share it with me. Thanks.
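For context, what I've been trying (a sketch - the bridge mapping values are my assumptions for an eth1 setup, not a verified file) is to generate an answer file and set the external-bridge options before running Packstack:

```console
$ packstack --gen-answer-file=answers.txt
$ # then edit answers.txt, e.g.:
$ #   CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
$ #   CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth1
$ packstack --answer-file=answers.txt
```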

2019-03-28 23:10:20 -0600 asked a question Public Network Cannot Connect Externally

I've installed Packstack Rocky and my bridge is up and running, bridged to my external network via eth1, but I can't communicate externally from any instance. If I create a router on the external network, I can't ping it from outside either. When I do an 'ip netns ls' it shows the qdhcp namespace(s), but there are no qrouters shown even though the routers appear in the GUI network topology. Any ideas what I can check?

2019-03-28 21:48:09 -0600 received badge  Popular Question (source)
2019-03-27 05:19:40 -0600 asked a question Routers missing in namespace

Hi, I've just installed Rocky using Packstack and I can't get my VMs to connect to the Internet (this worked in my previous Ocata deployment). When I create a network and a router and check ip netns ls, I can see the DHCP namespaces but there are no routers shown. Any ideas? I've shown my networks, routers and the namespace output:

[root@controller ~(keystone_admin)]# openstack router list
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+
| ID                                   | Name        | Status | State | Distributed | HA    | Project                          |
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+
| 845c4f5b-9d86-43ef-91e7-ca158d379d95 | test_router | ACTIVE | UP    | False       | False | 58529c2c14b04bb5bf18f8b6e5453585 |
| cbcf916f-e9d0-4475-ac97-f22a73d03576 | router1     | ACTIVE | UP    | False       | False | 812fa5f0ea624d0f8991a4c724e1d87e |
+--------------------------------------+-------------+--------+-------+-------------+-------+----------------------------------+
[root@controller ~(keystone_admin)]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 3d70244a-8ecd-48ad-b322-c012bdc0de6b | test     | 4bbdc8cb-1a15-4a34-93c7-2948d08244d2 |
| 5bf73bb7-421b-4ccb-a772-0c5dbaf58512 | private  | a011b506-3295-41f7-b527-57472100cfa2 |
| b3aad1a8-7d7c-42f6-adac-c6c0ede491ed | public   | 46b11724-8f26-4e91-b807-2b931a92b1be |
+--------------------------------------+----------+--------------------------------------+
[root@network ~]# ip netns ls
qdhcp-3d70244a-8ecd-48ad-b322-c012bdc0de6b (id: 2)
qdhcp-b95479b9-72fc-49e5-84dc-047674daa6d3 (id: 1)
qdhcp-5bf73bb7-421b-4ccb-a772-0c5dbaf58512 (id: 0)
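Since the qrouter namespaces are created by the neutron L3 agent, here's a sketch of what I checked next (standard service and agent names assumed):

```console
[root@network ~]# systemctl status neutron-l3-agent
[root@controller ~(keystone_admin)]# openstack network agent list --agent-type l3
```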

2019-03-13 19:26:29 -0600 received badge  Teacher (source)
2019-03-13 19:26:29 -0600 received badge  Self-Learner (source)
2019-03-13 05:05:50 -0600 answered a question Autoscaling not working

I solved my problem - I discovered that I needed the following line in my server properties:

metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
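In context, the line sits in the OS::Nova::Server properties inside the scaling group - a sketch based on my stack (only the metadata line is the actual change):

```yaml
resource:
  type: OS::Nova::Server
  properties:
    image: 'cirros'
    flavor: m1.tiny
    # the fix: tag each server with the stack id so the alarm's
    # Gnocchi aggregation query {"=": {"server_group": <stack_id>}} matches
    metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
```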

2019-03-13 05:04:11 -0600 asked a question Openstack Gnocchi Metric Granularity

Hi All, how do I change the granularity of my metrics in Gnocchi? I have an alarm set on cpu_util, and I had to set its granularity to 300 to match Gnocchi. If I wanted to set it to 60 instead, where do I edit the configs? Thanks, Jeff
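What I've found so far (a sketch - policy names are the defaults, my setup may differ): the granularity comes from the archive policy attached to the metric, which can be inspected with the Gnocchi CLI:

```console
[root@controller ~(keystone_gnocchi)]# gnocchi archive-policy list
[root@controller ~(keystone_gnocchi)]# gnocchi archive-policy show low
```

My understanding is that changing it for new metrics means pointing Ceilometer's Gnocchi publisher at a different archive policy; metrics that already exist keep the policy they were created with.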

2019-03-05 00:45:24 -0600 received badge  Famous Question (source)
2019-03-04 05:49:43 -0600 received badge  Notable Question (source)
2019-03-03 06:47:32 -0600 received badge  Popular Question (source)
2019-03-02 01:17:20 -0600 asked a question Autoscaling not working

Hi All, I'm trying a very simple autoscaling exercise with my OpenStack (Ocata) using Aodh for alarms. My cluster has 3 nodes (controller, compute and network) and was set up using Packstack on CentOS 7. I've tried multiple combinations and permutations based on advice online, and regardless, my alarm evaluation always comes out as "insufficient data" even though I can see the metric being collected. I'm not sure which log file would show me the problem, as the Aodh & Gnocchi logs look fine. I've included my simple stack & some outputs below. I'm hoping it's something very simple I'm missing, but I've been struggling with this problem for a while. Thanks in advance for your help. Warm regards, Jeff Bannister

heat_template_version: 2016-04-08
description: A simple auto scaling group
resources:
  group:
    type: OS::Heat::AutoScalingGroup 
    properties:
      cooldown: 300 
      desired_capacity: 2 
      max_size: 5 
      min_size: 1 
      resource:
        type: OS::Nova::Server 
        properties: 
          image: 'cirros' 
          flavor: m1.tiny
          networks:
            - network: orbitage

  scaleup_policy: 
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity 
      auto_scaling_group_id: {get_resource: group} 
      cooldown: 300 
      scaling_adjustment: 1 

  cpu_alarm_high: 
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
        metric: cpu_util  
        aggregation_method: mean        
        granularity: 300           
        evaluation_periods: 1 
        threshold: 50  
        resource_type: instance
        comparison_operator: gt      
        alarm_actions: 
            - str_replace:
                template: trust+url
                params:
                    url: {get_attr: [scaleup_policy, signal_url]}      
        query:
            str_replace:
                template: '{"=": {"server_group": "stack_id"}}'
                params:
                    stack_id: {get_param: "OS::stack_id"}
outputs:
    scaleup_policy_signal_url:
        value: {get_attr: [scaleup_policy, signal_url]}
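For clarity, the str_replace in the query is plain text substitution; this sketch shows the query it produces, which matches the alarm's query field in the outputs:

```shell
# Mimic Heat's str_replace: substitute the stack id into the query template
template='{"=": {"server_group": "stack_id"}}'
stack_id='e9e1d007-6a25-442b-b614-abdacbcba0f7'
echo "$template" | sed "s/stack_id/$stack_id/"
# -> {"=": {"server_group": "e9e1d007-6a25-442b-b614-abdacbcba0f7"}}
```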

Outputs:

[root@controller ~(keystone_gnocchi)]# openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| e9e1d007-6a25-442b-b614-abdacbcba0f7 | scaler8    | a0a75e4b57dc47ef8ce0105a8db1b434 | CREATE_COMPLETE | 2019-02-25T02:54:17Z | None         |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
[root@controller ~(keystone_gnocchi)]# openstack server list
+--------------------------------------+------------------------------------------------------+--------+--------------------------+--------+---------+
| ID                                   | Name                                                 | Status | Networks                 | Image  | Flavor  |
+--------------------------------------+------------------------------------------------------+--------+--------------------------+--------+---------+
| 16808f88-92db-488b-baa2-3d4c93cf297e | scaler8-group-wtcwtetsglid-wklxhd6xrzae-ejmaiybxdosd | ACTIVE | orbitage=192.168.101.113 | cirros | m1.tiny |
| 3fe4e62c-ad72-4928-b9c7-a41fc4074a9d | scaler8-group-wtcwtetsglid-plgwojxq2f5i-xvh3oa7yefvc | ACTIVE | orbitage=192.168.101.118 | cirros | m1.tiny |
+--------------------------------------+------------------------------------------------------+--------+--------------------------+--------+---------+
[root@controller ~(keystone_gnocchi)]# openstack alarm list
+--------------------------------------+--------------------------------------------+-------------------------------------+-------------------+----------+---------+
| alarm_id                             | type                                       | name                                | state             | severity | enabled |
+--------------------------------------+--------------------------------------------+-------------------------------------+-------------------+----------+---------+
| 59a0bbff-8960-4df0-8b25-f10f4fa42ff1 | gnocchi_aggregation_by_resources_threshold | scaler8-cpu_alarm_high-okczotx6ck7h | insufficient data | low      | True    |
+--------------------------------------+--------------------------------------------+-------------------------------------+-------------------+----------+---------+
[root@controller ~(keystone_gnocchi)]# gnocchi measures show --resource-id 16808f88-92db-488b-baa2-3d4c93cf297e cpu_util
+---------------------------+-------------+---------------+
| timestamp                 | granularity |        value  |
+---------------------------+-------------+---------------+
| 2019-02-24T22:00:00-05:00 |       300.0 |  0.0667749275 |
| 2019-02-24T22:05:00-05:00 |       300.0 |  0.0699992097 |
| 2019-02-24T22:10:00-05:00 |       300.0 |  0.0633377277 |
| 2019-02-24T22:15:00-05:00 |       300.0 |  0.063312082  |
| 2019-02-24T22:20:00-05:00 |       300.0 |  0.0633542018 |
| 2019-02-24T22:25:00-05:00 |       300.0 | 99.1123780114 |
+---------------------------+-------------+---------------+
[root@controller ~(keystone_gnocchi)]# openstack alarm show scaler8-cpu_alarm_high-okczotx6ck7h
+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                     | Value                                                                                                                                              |
+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
| aggregation_method        | mean                                                                                                                                               |
| alarm_actions             | [u'http://10.0.1.11:8004/v1/a0a75e4b57dc47ef8ce0105a8db1b434/stacks/scaler8/e9e1d007-6a25-442b-b614-abdacbcba0f7/resources/scaleup_policy/signal'] |
| alarm_id                  | 59a0bbff-8960-4df0-8b25-f10f4fa42ff1                                                                                                               |
| comparison_operator       | gt                                                                                                                                                 |
| description               | gnocchi_aggregation_by_resources_threshold alarm rule                                                                                              |
| enabled                   | True                                                                                                                                               |
| evaluation_periods        | 1                                                                                                                                                  |
| granularity               | 300                                                                                                                                                |
| insufficient_data_actions | []                                                                                                                                                 |
| metric                    | cpu_util                                                                                                                                           |
| name                      | scaler8-cpu_alarm_high-okczotx6ck7h                                                                                                                |
| ok_actions                | []                                                                                                                                                 |
| project_id                | a0a75e4b57dc47ef8ce0105a8db1b434                                                                                                                   |
| query                     | {"=": {"server_group": "e9e1d007-6a25-442b-b614-abdacbcba0f7"}}                                                                                    |
| repeat_actions            | True                                                                                                                                               |
| resource_type             | instance                                                                                                                                           |
| severity                  | low                                                                                                                                                |
| state                     | insufficient data                                                                                                                                  |
| state_reason              | Not evaluated yet                                                                                                                                  |
| state_timestamp           | 2019-02-25T02:54:40.271809                                                                                                                         |
| threshold                 | 50.0                                                                                                                                               |
| time_constraints          | []                                                                                                                                                 |
| timestamp                 | 2019-02-25T02:54:40.271809                                                                                                                         |
| type                      | gnocchi_aggregation_by_resources_threshold                                                                                                         |
| user_id                   | c4160a661d854645952d5024aa1ec881                                                                                                                   |
+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+

2019-02-24 21:00:46 -0600 commented question Heat autoscaling is triggering alarms but new instance is not created.

Hi - Did you ever get a solution to this? I have the same issue.

2019-02-21 17:49:44 -0600 received badge  Notable Question (source)
2019-02-21 02:52:25 -0600 commented answer Ocata Aodh alarm cpu_util "insufficient data"

Thanks for the reply. I redid it, and yes, the granularity was different, so I have now set it to be the same; however, it still shows "insufficient data".

2019-02-21 02:49:42 -0600 received badge  Popular Question (source)
2019-02-19 04:09:57 -0600 asked a question Ocata Aodh alarm cpu_util "insufficient data"

Hi all, I'm creating an alarm to be triggered by an autoscaling policy based on cpu_util above 50%; however, the alarm always shows "insufficient data". The stack creates OK and I can access the instances, but it never scales - here's my stack:

heat_template_version: 2016-04-08
description: A simple auto scaling group
resources:
  group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 60
      desired_capacity: 2
      max_size: 5
      min_size: 1
      resource:
        type: OS::Nova::Server
        properties:
          image: 'cirros'
          flavor: m1.tiny
          networks:
            - network: private

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: group}
      cooldown: 60
      scaling_adjustment: 1

  cpu_alarm_high:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
        metric: cpu_util
        aggregation_method: mean
        granularity: 60
        evaluation_periods: 1
        threshold: 50
        resource_type: instance
        comparison_operator: gt
        alarm_actions:
            - str_replace:
                template: trust+url
                params:
                    url: {get_attr: [scaleup_policy, signal_url]}
        query:
            str_replace:
                template: '{"=": {"server_group": "stack_id"}}'
                params:
                    stack_id: {get_param: "OS::stack_id"}
outputs:
    scaleup_policy_signal_url:
        value: {get_attr: [scaleup_policy, signal_url]}