
frippe75's profile - activity

2020-04-17 06:23:33 -0500 received badge  Teacher (source)
2020-02-06 21:22:53 -0500 received badge  Notable Question (source)
2020-02-06 21:22:53 -0500 received badge  Popular Question (source)
2020-02-02 03:57:51 -0500 answered a question instances in provider network cannot reach metadata agent

I'm trying to get the metadata agent to work, following https://docs.openstack.org/ocata/networking-guide/deploy-ovs-provider.html. The instance does pick up the metadata route at boot:

[   11.307113] cloud-init[834]: Cloud-init v. 0.7.9 running 'init' at Sun, 02 Feb 2020 09:22:14 +0000. Up 11.27 seconds.
[   11.368959] cloud-init[834]: ci-info: +++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++
[   11.375669] cloud-init[834]: ci-info: +--------+------+--------------+-------------+-------+-------------------+
[   11.383321] cloud-init[834]: ci-info: | Device |  Up  |   Address    |     Mask    | Scope |     Hw-Address    |
[   11.390863] cloud-init[834]: ci-info: +--------+------+--------------+-------------+-------+-------------------+
[   11.397559] cloud-init[834]: ci-info: |  lo:   | True |  127.0.0.1   |  255.0.0.0  |   .   |         .         |
[   11.406569] cloud-init[834]: ci-info: |  lo:   | True |      .       |      .      |   d   |         .         |
[   11.413297] cloud-init[834]: ci-info: | eth0:  | True | 10.140.0.220 | 255.255.0.0 |   .   | fa:16:3e:cb:56:88 |
[   11.420840] cloud-init[834]: ci-info: | eth0:  | True |      .       |      .      |   d   | fa:16:3e:cb:56:88 |
[   11.428770] cloud-init[834]: ci-info: +--------+------+--------------+-------------+-------+-------------------+
[   11.436676] cloud-init[834]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
[   11.444798] cloud-init[834]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
[   11.453845] cloud-init[834]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
[   11.460579] cloud-init[834]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
[   11.467297] cloud-init[834]: ci-info: |   0   |     0.0.0.0     |  10.140.0.1  |     0.0.0.0     |    eth0   |   UG  |
[   11.473746] cloud-init[834]: ci-info: |   1   |    10.140.0.0   |   0.0.0.0    |   255.255.0.0   |    eth0   |   U   |
[   11.480763] cloud-init[834]: ci-info: |   2   | 169.254.169.254 | 10.140.0.200 | 255.255.255.255 |    eth0   |  UGH  |
[   11.488171] cloud-init[834]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+

But I cannot get traffic to flow... This is my provider network:

[root@os01-ctr01 ~(keystone_admin)]# openstack network show e23de27f-859b-4cb6-9220-7003e5d80ec0
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2020-02-01T10:45:28Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | e23de27f-859b-4cb6-9220-7003e5d80ec0 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | vlan_140_hpecp                       |
| port_security_enabled     | True                                 |
| project_id                | 9e424d09b1334ce18b3cf8e84329dd1e     |
| provider:network_type     | vlan                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | 140                                  |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 848edac1-e859-433d-aa0b-a9f7df805fcd |
+---------------------------+--------------------------------------+

Could it be the router:external = Internal?

I have created the br-provider OVS switch, set up proper mappings of provider=br-provider:140:150 allowing VLAN IDs 140-150, and added bond0 as the port for br-provider... I ran tcpdump to make sure I see VLAN 140 on bond0...

ovs-vsctl show does not show any VLAN-related config... Not sure where that would be done... Could I see it on the qrouter?
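
For anyone hitting the same wall, a minimal diagnostic sketch (the config paths assume a standard RDO/packstack layout; the network ID is taken from the output above):

# The OVS agent implements VLAN translation as OpenFlow rules, not as
# ovs-vsctl database config, so the tag-140 rewrite rules show up here:
ovs-ofctl dump-flows br-provider

# Confirm the mapping and VLAN range the agent/server actually loaded:
grep bridge_mappings /etc/neutron/plugins/ml2/openvswitch_agent.ini
grep network_vlan_ranges /etc/neutron/plugins/ml2/ml2_conf.ini

# The 169.254.169.254 route points at the DHCP port (10.140.0.200), which is the
# isolated-metadata setup from that guide; it needs enable_isolated_metadata = True
# and the proxy should answer inside the qdhcp namespace:
grep enable_isolated_metadata /etc/neutron/dhcp_agent.ini
ip netns exec qdhcp-e23de27f-859b-4cb6-9220-7003e5d80ec0 curl -s http://169.254.169.254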

Any assistance here would be really appreciated!

2020-01-30 07:43:16 -0500 answered a question Change hostname (dns suffix) on a controller host

I guess I'll start by cleaning out the following...

[root@openstack01 neutron(keystone_admin)]# openstack network agent list
+--------------------------------------+----------------------+------------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type           | Host                   | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+----------------------+------------------------+-------------------+-------+-------+---------------------------+
| 084d903e-5351-44fc-9d07-0d48841c3364 | Metering agent       | openstack01.os.lab.net | None              | :-)   | UP    | neutron-metering-agent    |
| 188b41aa-ca76-402a-986d-627b11cbc7e3 | DHCP agent           | openstack01.lab.net    | nova              | XXX   | UP    | neutron-dhcp-agent        |
| 937da23e-607f-4603-bec9-2e5ad50df102 | Metadata agent       | openstack01.lab.net    | None              | XXX   | UP    | neutron-metadata-agent    |
| a860ab21-79b9-4224-a566-141e6ac33c62 | Loadbalancerv2 agent | openstack01.lab.net    | None              | XXX   | UP    | neutron-lbaasv2-agent     |
| b0575a84-f5d2-48c3-abba-6491b916aa67 | Metering agent       | openstack01.lab.net    | None              | XXX   | UP    | neutron-metering-agent    |
| c11a0a1d-5ee8-49bf-a63f-c568de98d3b8 | Loadbalancerv2 agent | openstack01.os.lab.net | None              | :-)   | UP    | neutron-lbaasv2-agent     |
| c20fd389-926e-4f0b-9052-c87d87d21b3e | L3 agent             | openstack01.os.lab.net | nova              | :-)   | UP    | neutron-l3-agent          |
| c88f0bba-4ea9-4e4f-9d58-c17065d37573 | Open vSwitch agent   | openstack02.lab.net    | None              | XXX   | UP    | neutron-openvswitch-agent |
| cdb8955d-2ad5-4567-bcac-6d94d348d9fa | Open vSwitch agent   | openstack01.lab.net    | None              | XXX   | UP    | neutron-openvswitch-agent |
| dcb9c9cd-15c0-43ac-91e7-28868718229b | DHCP agent           | openstack01.os.lab.net | nova              | :-)   | UP    | neutron-dhcp-agent        |
| e9a368ca-6ba5-45cb-a2a0-f3c92a3e5eba | L3 agent             | openstack01.lab.net    | nova              | XXX   | UP    | neutron-l3-agent          |
| ff5b1e4e-4989-4960-80da-03cd63fe8e74 | Metadata agent       | openstack01.os.lab.net | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+----------------------+------------------------+-------------------+-------+-------+---------------------------+
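
The dead agents are the XXX rows still registered under the old hostname; a sketch of removing them in one go, assuming admin credentials are sourced (repeat for openstack02.lab.net):

# Delete every agent whose Host column still shows the old name.
# -f value -c ID makes the client print bare IDs the loop can consume.
for id in $(openstack network agent list --host openstack01.lab.net -f value -c ID); do
    openstack network agent delete "$id"
done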

So, cleaned up:

[root@openstack01 neutron(keystone_admin)]# openstack network agent list
+--------------------------------------+----------------------+------------------------+-------------------+-------+-------+------------------------+
| ID                                   | Agent Type           | Host                   | Availability Zone | Alive | State | Binary                 |
+--------------------------------------+----------------------+------------------------+-------------------+-------+-------+------------------------+
| 084d903e-5351-44fc-9d07-0d48841c3364 | Metering agent       | openstack01.os.lab.net | None              | :-)   | UP    | neutron-metering-agent |
| c11a0a1d-5ee8-49bf-a63f-c568de98d3b8 | Loadbalancerv2 agent | openstack01.os.lab.net | None              | :-)   | UP    | neutron-lbaasv2-agent  |
| c20fd389-926e-4f0b-9052-c87d87d21b3e | L3 agent             | openstack01.os.lab.net | nova              | :-)   | UP    | neutron-l3-agent       |
| dcb9c9cd-15c0-43ac-91e7-28868718229b | DHCP agent           | openstack01.os.lab.net | nova              | :-)   | UP    | neutron-dhcp-agent     |
| ff5b1e4e-4989-4960-80da-03cd63fe8e74 | Metadata agent       | openstack01.os.lab.net | None              | :-)   | UP    | neutron-metadata-agent |
+--------------------------------------+----------------------+------------------------+-------------------+-------+-------+------------------------+

But this could take a lot of time... Is there any way to clean out a host entirely?

[root@openstack01 neutron(keystone_admin)]# openstack orchestration service list
+------------------------+-------------+--------------------------------------+------------------------+--------+----------------------------+--------+
| Hostname               | Binary      | Engine ID                            | Host                   | Topic  | Updated At                 | Status |
+------------------------+-------------+--------------------------------------+------------------------+--------+----------------------------+--------+
| openstack01.os.lab.net | heat-engine | 77162c48-0155-4e7c-adfa-d01db4da3935 | openstack01.os.lab.net | engine | 2020-01-30T13:51:16.000000 | up     |
| openstack01.os.lab.net | heat-engine | 13bb28b3-4b37-457b-ae85-48a109ef66c5 | openstack01.os.lab.net | engine | 2020-01-30T13:51:16.000000 | up     |
| openstack01.lab.net    | heat-engine | d27e4553-bfda-4db0-a5ed-1d105674b666 | openstack01.lab.net    | engine | 2020-01-29T09:58:41.000000 | down   |
| openstack01.lab.net    | heat-engine | 40561077-0e12-4e64-a8b0-f5ce87bd3dcc | openstack01.lab.net    | engine | 2020-01-29T09:58:41.000000 | down   |
| openstack01.lab.net    | heat-engine | 141e84fb-39c2-47b6-902d-a5f0e32e6d50 | openstack01.lab.net    | engine | 2020-01-29T09:58:41.000000 | down   |
| openstack01.os.lab.net | heat-engine | aa646cbe-68f0-490c-a21c-66b37856cdd0 | openstack01.os.lab.net | engine | 2020-01-30T13:51:16.000000 | up     |
| openstack01.lab.net    | heat-engine | 866e66aa-ac79-4ee3-a9e1-37056fd357df | openstack01.lab.net    | engine | 2020-01-29T09:58:41.000000 | down   |
| openstack01.os.lab.net | heat-engine | f433822d-00dc-4f07-9c3b-8de2eaeae49e | openstack01.os.lab.net | engine | 2020-01-30T13:51:16.000000 | up     |
+------------------------+-------------+--------------------------------------+------------------------+--------+----------------------------+--------+
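
For the stale heat-engine rows, a sketch using heat's own manage command, assuming it is run on the controller with access to the heat database:

# Purge engine records that have stopped heartbeating (the "down" rows).
heat-manage service clean

# Verify that only the live engines remain.
openstack orchestration service list
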
2020-01-30 07:39:07 -0500 asked a question Change hostname (dns suffix) on a controller host

Hi,

Renamed (moved to another DNS zone) openstack01.lab.net -> openstack01.os.lab.net

So I basically lost connection to a lot of services.

[root@openstack01 neutron(keystone_admin)]# openstack compute service list
+----+------------------+------------------------+----------+----------+-------+----------------------------+
| ID | Binary           | Host                   | Zone     | Status   | State | Updated At                 |
+----+------------------+------------------------+----------+----------+-------+----------------------------+
| 13 | nova-compute     | openstack01.lab.net    | nova     | disabled | down  | 2020-01-30T10:07:05.000000 |
| 14 | nova-compute     | openstack02.lab.net    | nova     | disabled | down  | 2020-01-30T10:53:53.000000 |
| 15 | nova-scheduler   | openstack01.os.lab.net | internal | enabled  | up    | 2020-01-30T13:31:20.000000 |
| 17 | nova-consoleauth | openstack01.os.lab.net | internal | enabled  | up    | 2020-01-30T13:31:23.000000 |
| 18 | nova-conductor   | openstack01.os.lab.net | internal | enabled  | up    | 2020-01-30T13:31:27.000000 |
| 19 | nova-compute     | openstack01.os.lab.net | nova     | enabled  | up    | 2020-01-30T13:31:20.000000 |
+----+------------------+------------------------+----------+----------+-------+----------------------------+

I tried to disable and delete the old one after adding the compute part via nova-manage... I cannot delete it since old instances still remain on the disabled service. I was hoping to clean away the instances, delete the old service, and continue using the new one.

But I guess this will/might lead to other issues, since there are neutron agents running as well. Not sure how to get them "back in"... It's probably better to wipe and reinstall. All config is done via IP addresses, but I guess the services all get registered by name...
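
For reference, a sketch of the nova-side cleanup, assuming the instances on the old host are deleted or migrated first (nova refuses to remove a service that still hosts instances):

# Find instances still bound to the old host, across all projects.
openstack server list --all-projects --host openstack01.lab.net

# After removing or migrating them, drop the stale service rows by ID
# (13 and 14 in the listing above).
openstack compute service delete 13 14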

2018-11-22 23:22:23 -0500 commented on a question No valid host was found. There are not enough hosts available.

Also having this issue. Please report back. It's easy to ask questions when you need it. :-)

2018-11-22 23:17:04 -0500 answered a question Found 0 unmapped computes in cell: 88fdcc1f-ddea-4505-a6b1-394c10708dee

So interesting! I have been spending hours (soon days) trying to resolve this. I deployed the Rocky release using packstack on CentOS 7.5.1804 in an all-in-one deployment. packstack succeeds, but I cannot spawn an instance. I tried to redeploy after setting SELinux to permissive mode; same result.

[root@openstack nova(keystone_admin)]# /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'default': d26a5f4e-b59f-4e1b-808f-7caa29f9f5b2
Found 0 unmapped computes in cell: d26a5f4e-b59f-4e1b-808f-7caa29f9f5b2

Multiple ways of finding out the same thing...

[root@openstack nova(keystone_admin)]# nova-status upgrade check
+--------------------------------------------------------------------+
| Upgrade Check Results                                              |
+--------------------------------------------------------------------+
| Check: Cells v2                                                    |
| Result: Success                                                    |
| Details: No host mappings or compute nodes were found. Remember to |
|   run command 'nova-manage cell_v2 discover_hosts' when new        |
|   compute hosts are deployed.                                      |
+--------------------------------------------------------------------+
| Check: Placement API                                               |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Resource Providers                                          |
| Result: Success                                                    |
| Details: There are no compute resource providers in the Placement  |
|   service nor are there compute nodes in the database.             |
|   Remember to configure new compute nodes to report into the       |
|   Placement service. See                                           |
|   https://docs.openstack.org/nova/latest/user/placement.html       |
|   for more details.                                                |
+--------------------------------------------------------------------+

From nova-placement-api.log:

2018-11-22 16:25:02.813 28109 INFO nova.scheduler.host_manager [req-624b4df5-30e7-4eb5-bbff-f7d14f50fb13 - - - - -] Host mapping not found for host openstack.frippe.com. Not tracking instance info for this host.
2018-11-22 16:25:02.813 28109 INFO nova.scheduler.host_manager [req-624b4df5-30e7-4eb5-bbff-f7d14f50fb13 - - - - -] Received a sync request from an unknown host 'openstack.xxx.com'. Re-created its InstanceList.
2018-11-22 17:52:12.870 28109 INFO nova.scheduler.manager [req-d7b6724d-7406-4cf6-a747-f428c49aa302 e763008d9b3e49e2803d39ad1d43696e d365b3a65ba6428f86119b6f84fac7e8 - default default] Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up.
2018-11-23 04:59:54.091 28108 INFO nova.scheduler.manager [req-1c63094b-0c63-4b26-ad1f-865777293c8e e763008d9b3e49e2803d39ad1d43696e d365b3a65ba6428f86119b6f84fac7e8 - default default] Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up.

I do get something from service list:

[root@openstack nova(keystone_admin)]# openstack compute service list --service nova-compute
+----+--------------+----------------------+------+---------+-------+----------------------------+
| ID | Binary       | Host                 | Zone | Status  | State | Updated At                 |
+----+--------------+----------------------+------+---------+-------+----------------------------+
| 10 | nova-compute | openstack.xxx.com | nova | enabled | up    | 2018-11-23T04:14:33.000000 |
+----+--------------+----------------------+------+---------+-------+----------------------------+
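
Since the service is up but the upgrade check reports no compute resource providers, the node is probably failing to report in to Placement; a short diagnostic sketch (the log path assumes an RDO/packstack layout):

# Look for placement/resource-provider errors on the compute node first.
grep -i -E 'placement|resource provider' /var/log/nova/nova-compute.log | tail -n 20

# Once the node registers, discovery should find it and create the host mapping.
nova-manage cell_v2 discover_hosts --verbose
nova-manage cell_v2 list_hosts
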
2017-02-24 05:41:53 -0500 received badge  Famous Question (source)
2017-01-16 15:14:31 -0500 received badge  Notable Question (source)
2017-01-08 08:15:43 -0500 answered a question Heat - DELETE_FAILED

Having the same issue, but in the form of a stack. The resource that heat fails to delete is a resource group, so it might be related. I'm on Newton.

(openstack) stack list
+--------------------------------------+------------------+-----------------+----------------------+----------------------+
| ID                                   | Stack Name       | Stack Status    | Creation Time        | Updated Time         |
+--------------------------------------+------------------+-----------------+----------------------+----------------------+
| 1bc74640-265e-4717-a34c-5e5079846ee1 | mongo_with_lbaas | DELETE_FAILED   | 2017-01-08T12:53:55Z | 2017-01-08T13:55:24Z |
| 67534887-221e-4aa0-8528-9b6194efafe7 | mongo            | CREATE_COMPLETE | 2017-01-01T23:45:50Z | None                 |
| 2d966fd7-1668-41d6-ac81-0d0b326635e4 | datacenter       | CREATE_COMPLETE | 2016-12-29T13:13:48Z | None                 |
+--------------------------------------+------------------+-----------------+----------------------+----------------------+

Looking at the resources in more detail, it is the OS::Heat::ResourceGroup that fails:

(openstack) stack resource list 1bc74640-265e-4717-a34c-5e5079846ee1
+---------------+--------------------------------------+----------------------------------+-----------------+----------------------+
| resource_name | physical_resource_id                 | resource_type                    | resource_status | updated_time         |
+---------------+--------------------------------------+----------------------------------+-----------------+----------------------+
| listener      | 2ff954f5-01f4-46c3-b235-1c9786a88512 | OS::Neutron::LBaaS::Listener     | CREATE_COMPLETE | 2017-01-08T12:53:55Z |
| mongo_servers | f722b554-a9b7-41a7-b054-c127a4841d04 | OS::Heat::ResourceGroup          | DELETE_FAILED   | 2017-01-08T12:53:55Z |
| pool          | 137c6df3-7d25-49b3-9ebd-7941d322a2bd | OS::Neutron::LBaaS::Pool         | CREATE_COMPLETE | 2017-01-08T12:53:55Z |
| loadbalancer  | 8ced8df7-2ea9-4ec1-a311-3af11532c8c4 | OS::Neutron::LBaaS::LoadBalancer | CREATE_COMPLETE | 2017-01-08T12:53:56Z |
+---------------+--------------------------------------+----------------------------------+-----------------+----------------------+
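
A resource group is itself a nested stack, so the real failure is usually one level down; a sketch for digging it out (the stack ID and resource name are from the listings above):

# Show why the group failed to delete.
openstack stack resource show 1bc74640-265e-4717-a34c-5e5079846ee1 mongo_servers -c resource_status_reason

# Recurse into nested stacks to find the failing member.
openstack stack resource list -n 2 1bc74640-265e-4717-a34c-5e5079846ee1
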
2016-12-29 17:02:48 -0500 commented on an answer Nested stacks and parameter names and defaults

I realized after reading the environment docs a few times that my question is not really related to the original. But defining my key_name under parameter_defaults made it available in nested yaml files as well. So for parameters: you really do need to keep passing them explicitly.
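
To illustrate, a sketch of an environment file that propagates the key without re-declaring it in every template (first_level.yaml is a hypothetical top-level template name):

# parameter_defaults applies to nested templates too; parameters only feeds the top level.
cat > env.yaml <<'EOF'
parameter_defaults:
  key_name: mykey2
EOF

openstack stack create -t first_level.yaml -e env.yaml mystack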

2016-12-29 16:53:58 -0500 commented on an answer Nested stacks and parameter names and defaults

Just found http://docs.openstack.org/developer/heat/template_guide/environment.html

It seems that there is a difference between the parameters: section and the parameter_defaults: section. And then there are the merge strategies...

2016-12-29 15:57:48 -0500 answered a question Nested stacks and parameter names and defaults

Not sure if I'm out of my depth here, but was that the answer to the question? I'm facing a problem where I think I have the same issue as the original question.

I pass an env file to "openstack stack create ....". The env file contains, among other parameters, key_name=mykey2.

So my first-level yaml file looks like:

parameters:
  key_name:
    type: string

resources:
  my_instance:
    type: single_server.yaml
    properties:
      # key_name: {get_param: key_name}   ## Not passing the key_name here

I.e. key_name gets passed down to single_server.yaml:

parameters:
  key_name:
    type: string
    default: mykey

resources:
  server:
    type: OS::Nova::Server
    properties:
      key_name: {get_param: key_name}

The error is that mykey is missing (I'm using a different OpenStack setup where mykey is not uploaded). I was under the impression parameters had to be passed explicitly down the tree. If this is the case, my templates tend to get quite repetitive. It would be nice to have an option to pass the whole environment down.

Maybe a different question altogether?

2016-12-27 03:35:02 -0500 commented on an answer The allocation pool spans beyond the subnet cidr

Oh, embarrassing, I was dead sure I wrote 10.0.1.0/24 and the range 10.0.1.100-10.0.1.150. Should not work late at night :-).

2016-12-27 03:32:37 -0500 received badge  Fan
2016-12-26 05:23:01 -0500 received badge  Popular Question (source)
2016-12-25 05:54:29 -0500 asked a question The allocation pool spans beyond the subnet cidr

Noticed this "similar" question: https://ask.openstack.org/en/question...

In that case they were outside the span. In my case I'm trying to allocate only part of the subnet for DHCP.

My Error: Resource CREATE failed: BadRequest: resources.subnet_resource: The allocation pool 10.1.0.100-10.1.0.150 spans beyond the subnet cidr 10.0.1.0/24.

Not sure why; using a CIDR of 192.168.0.0/24 with 192.168.0.100-192.168.0.150 works OK. Not sure what is happening here.

The online subnet calculator is on my side: the usable address range for my CIDR is 10.0.1.1 - 10.0.1.254, leaving the network and broadcast addresses out.

B.t.w. I'm on Newton, installed via packstack in an all-in-one setup.

Clues?
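
For what it's worth, the error message itself shows the mismatch: the pool uses 10.1.0.x while the CIDR is 10.0.1.0/24 (transposed octets), which the later comment confirms. A consistent pair works; a sketch with hypothetical network and subnet names:

# Pool and CIDR must agree; here both sit in 10.0.1.0/24.
openstack subnet create --network my_network \
    --subnet-range 10.0.1.0/24 \
    --allocation-pool start=10.0.1.100,end=10.0.1.150 \
    my_subnet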

2015-01-05 03:30:44 -0500 received badge  Editor (source)
2015-01-05 03:29:22 -0500 answered a question HTTPConnectionPool(host='192.168.1.12', port=8774): Max retries exceeded with url: /v2/70a04e8259b04717bf1bdd4faaedbc0c/extensions (Caused by <class 'socket.error'>: [Errno 111] Connection refused)

It's most likely due to having too little memory; check the requirements on devstack.org.

I guess it all boils down to Linux's handling of out-of-memory situations: the nova-api process is being killed by the OOM killer. See the console, or grep the messages log for "killed process".

The behaviour can be altered; read up by googling "OOM killer".
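
A quick way to confirm the OOM killer is the culprit (the log path assumes a RHEL/CentOS-style /var/log/messages; on Ubuntu check /var/log/syslog):

grep -i 'killed process' /var/log/messages
dmesg | grep -i 'out of memory'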

2015-01-05 03:13:59 -0500 commented on an answer devstack on digitalocean error HTTPPool

I could have skipped the initial discussion, but I thought that was related (work in progress...) and simply started at the bold text. Sorry!

2015-01-05 03:10:20 -0500 commented on an answer devstack on digitalocean error HTTPPool

The SOLUTION. The problem with this thread is that the poster only included output from the exit cleanup code, so it is really difficult to determine what problem he/she is facing. But when I saw "++ available_flavors=" it was pretty similar to my problem: the nova-api process is probably not running.

2015-01-04 19:18:21 -0500 answered a question devstack on digitalocean error HTTPPool

Having a similar issue, as far as I can determine. One thing that pops into my head is just how stable the stable/juno release really is.

Looking in the stackrc file, a lot of the git clones are from other projects' master branches, like oslo for example, even though they also have a stable/juno branch. I'm in the process of altering those for the OpenStack-related projects to have another go at it... So I'm sneaking that question in to anyone with a better understanding of this subject :-)

GITBRANCH["oslo.concurrency"]=${OSLOCON_BRANCH:-master} -> GITBRANCH["oslo.concurrency"]=${OSLOCON_BRANCH:-stable/juno}
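
Since stackrc uses bash default expansion (${OSLOCON_BRANCH:-master}), the same effect can be had without editing stackrc by overriding the variable in devstack's localrc; a sketch:

# localrc: pin the oslo.concurrency clone to the stable branch
OSLOCON_BRANCH=stable/juno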

When I used the master branch of devstack, nova-api did not even start. I'm running in a VMware VM using Ubuntu 14.04 LTS.

.... another hour passed .... No, this did not help. I tried to run nova flavor-list using --debug.

But I just discovered I'm running out of memory, since the OOM killer has killed the nova-api process. Will increase memory from 1 GB -> 2 GB.

From http://devstack.org under VM requirements... "DevStack should run in any virtual machine running a supported Linux release. It will perform best with 2Gb or more of RAM." :-)

That was it for me! Switched to the master branch as well without any issue... BUT I was monitoring my system during the devstack install process... Quite memory-consuming stuff... The system stabilized around 1.97 GB, with 8 nova-api processes consuming between 70-110 MB per process. Maybe the number could be cut down?
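
The worker count can indeed be cut down; a sketch, assuming a devstack recent enough to honor API_WORKERS (otherwise nova.conf's osapi_compute_workers setting does the same):

# localrc: cap the number of API worker processes to save memory
API_WORKERS=2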

Will fiddle around a little more. Cheers!