
theque42's profile - activity

2020-04-18 04:10:21 -0500 received badge  Famous Question (source)
2020-04-18 04:10:21 -0500 received badge  Popular Question (source)
2020-04-18 04:10:21 -0500 received badge  Notable Question (source)
2020-04-16 06:06:25 -0500 received badge  Popular Question (source)
2020-04-16 06:06:25 -0500 received badge  Famous Question (source)
2020-04-16 06:06:25 -0500 received badge  Notable Question (source)
2020-02-12 23:00:48 -0500 received badge  Famous Question (source)
2020-01-15 04:25:27 -0500 commented question Juggling CentOS 7 with yum/python2 and kolla-ansible install with python3

Further attempts have run into the fact that kolla-ansible tries to force-install python2 via yum. Which of course fails if you've switched to dnf with python3.

2020-01-14 04:14:21 -0500 received badge  Notable Question (source)
2020-01-14 04:14:21 -0500 received badge  Popular Question (source)
2020-01-13 14:52:46 -0500 asked a question Juggling CentOS 7 with yum/python2 and kolla-ansible install with python3

I've got a setup for running OpenStack courses where I am using kolla-ansible on CentOS 7.

I am trying to figure out if there is a well-known way of running the install and management with only python3.

Currently I install with all nodes (each cloud has a number of CentOS VMs) configured with their "normal" python2, run kolla-ansible deploy, and then, using update-alternatives, I switch to python3 before I install all the OpenStack CLIs.
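
For reference, the switch itself is roughly this (a sketch from my notes; the exact alternative names depend on how python3 was registered on the node):

# register python3 as an alternative for /usr/bin/python and select it
update-alternatives --install /usr/bin/python python /usr/bin/python3 50
update-alternatives --set python /usr/bin/python3
python --version    # should now report Python 3.x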

But that of course means I can't run the kolla-ansible command anymore....

I tried installing DNF on CentOS, so that I could switch to python3 before deploying with kolla-ansible, but that failed with strange errors (_during a neutron task I don't currently remember_) I couldn't solve, like:

"Aborting, target uses selinux but python bindings (libselinux-python) aren't installed"

even though, as far as I could ascertain, they WERE installed.

So....is there an obvious way I should handle this? CentOS 8? The *oogling and fiddling I've done so far with CentOS 8 gave me the impression there were other issues there with correct versions of qemu, or whatever it was.

2019-12-20 04:05:25 -0500 asked a question cinder backup with swift backend fails with kolla-ansible

I am using kolla-ansible to set up my cloud with cinder-backup using swift.

Now, when testing with the train-release, cinder backup seems to be badly configured by kolla-ansible.

I get:

2019-12-20 11:00:45.049 6 INFO cinder.cmd.backup [-] Backup running with 1 processes.
2019-12-20 11:00:46.016 6 WARNING oslo_config.cfg [req-962f990c-345b-429d-8224-f16c6073edb4 - - - - -] Deprecated: Option "idle_timeout" from group "database" is deprecated. Use option "connection_recycle_time" from group "database".
2019-12-20 11:00:46.510 6 INFO cinder.rpc [req-962f990c-345b-429d-8224-f16c6073edb4 - - - - -] Automatically selected cinder-backup objects version 1.38 as minimum service version.
2019-12-20 11:00:46.529 6 INFO cinder.rpc [req-962f990c-345b-429d-8224-f16c6073edb4 - - - - -] Automatically selected cinder-backup RPC version 2.1 as minimum service version.
2019-12-20 11:00:46.543 6 INFO cinder.rpc [req-962f990c-345b-429d-8224-f16c6073edb4 - - - - -] Automatically selected cinder-volume objects version 1.38 as minimum service version.
2019-12-20 11:00:46.564 6 INFO cinder.rpc [req-962f990c-345b-429d-8224-f16c6073edb4 - - - - -] Automatically selected cinder-volume RPC version 3.16 as minimum service version.
2019-12-20 11:00:46.649 6 INFO oslo_service.service [req-962f990c-345b-429d-8224-f16c6073edb4 - - - - -] Starting 1 workers
2019-12-20 11:00:46.679 29 INFO cinder.service [-] Starting cinder-backup node (version 15.0.0)
2019-12-20 11:00:46.768 29 INFO cinder.backup.manager [req-820d708c-bf49-4d3f-952c-a88041e1b3a7 - - - - -] Cleaning up incomplete backup operations.
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift [-] Can not get Swift capabilities during backup driver initialization.: ClientException: Capabilities GET failed: http://172.16.103.111:8080/info 401 Unauthorized  [first 60 chars of response] {"error": {"message": "The request you have made requires au
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift Traceback (most recent call last):
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift   File "/var/lib/kolla/venv/lib/python2.7/site-packages/cinder/backup/drivers/swift.py", line 393, in check_for_setup_error
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift     conn.get_capabilities()
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift   File "/var/lib/kolla/venv/lib/python2.7/site-packages/swiftclient/client.py", line 1928, in get_capabilities
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift     return get_capabilities((parsed, self.http_conn[1]))
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift   File "/var/lib/kolla/venv/lib/python2.7/site-packages/swiftclient/client.py", line 1565, in get_capabilities
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift     resp, 'Capabilities GET failed', body)
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift ClientException: Capabilities GET failed: http://172.16.103.111:8080/info 401 Unauthorized  [first 60 chars of response] {"error": {"message": "The request you have made requires au
2019-12-20 11:00:46.826 29 ERROR cinder.backup.drivers.swift
2019-12-20 11:00:46.830 29 ERROR oslo.service.loopingcall [-] Fixed interval looping call 'cinder.backup.manager.BackupManager._setup_backup_driver' failed: ClientException: Capabilities GET failed: http://172.16.103.111:8080/info 401 Unauthorized [first 60 chars of response] {"error": {"message": "The request you have made requires au

It seems the authentication is not properly configured. Here is some of the params ... (more)
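
For anyone hitting the same thing, the knobs to look at seem to be the backup_swift_* auth options in cinder.conf. A minimal sketch of a kolla-ansible override, assuming the standard /etc/kolla/config/ merge path; the auth URL and option values here are placeholders from my setup, not a verified fix:

# sketch: append a hypothetical override and push it out with kolla-ansible
mkdir -p /etc/kolla/config
cat >> /etc/kolla/config/cinder.conf <<'EOF'
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_auth = per_user
backup_swift_auth_url = http://172.16.103.111:5000/v3
EOF
kolla-ansible -i multinode reconfigure --tags cinder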

2019-12-13 02:44:03 -0500 commented answer kolla-ansible deploy fail for rabbitmq

I'm starting to get the feeling that this is not only related to the hosts file, since if I manually start the epmd server in the kolla_toolbox container, rabbitmq will start fine...?
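
Concretely, the manual workaround looks like this (a sketch; whether epmd lives in the kolla_toolbox container is specific to my deploy):

docker exec kolla_toolbox epmd -daemon    # start the Erlang port mapper by hand
docker restart rabbitmq                   # rabbitmq then comes up cleanly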

2019-12-11 07:05:19 -0500 commented answer kolla-ansible deploy fail for rabbitmq
2019-12-11 07:05:09 -0500 commented answer kolla-ansible deploy fail for rabbitmq

If I understood that bug correctly, it had to do with re-running kolla-ansible, and I hit this issue on the first run/install. Additionally, my hosts file looks quite alright, although it has duplicate (correct) entries. Also worth mentioning: I am running a multinode setup, but with the same error on both controllers.

2019-12-10 14:54:42 -0500 commented answer kolla-ansible deploy fail for rabbitmq

Could you clarify where you made a change? (And it's quite strange to run into this STILL, when I'm using kolla-ansible from git (TRAIN release)...)

2019-12-01 09:25:18 -0500 received badge  Notable Question (source)
2019-12-01 09:25:18 -0500 received badge  Popular Question (source)
2019-12-01 09:25:18 -0500 received badge  Famous Question (source)
2019-07-09 16:33:53 -0500 received badge  Famous Question (source)
2018-11-07 03:38:35 -0500 received badge  Famous Question (source)
2018-10-02 00:53:15 -0500 commented question openstack volume migrate stuck

Have you been able to start your cinder service with debug enabled, to see if there are any low-level complaints? I can't see which version you are on.

2018-09-27 08:53:00 -0500 received badge  Famous Question (source)
2018-09-10 01:27:59 -0500 commented question [SOLVED][Pike][Octavia] Cannot create LB from Horizon

I don't know about the horizon issue, but since the goal (afaik) is to move all CLI logic into the openstack-cli, why do you want to keep using neutron? A brief guess: there are several versions of the LBaaS APIs. Is neutron/horizon using the wrong one?

2018-08-30 17:24:26 -0500 received badge  Notable Question (source)
2018-08-30 17:24:26 -0500 received badge  Famous Question (source)
2018-08-16 07:39:00 -0500 received badge  Popular Question (source)
2018-08-16 07:39:00 -0500 received badge  Notable Question (source)
2018-08-16 06:38:51 -0500 received badge  Famous Question (source)
2018-06-25 02:50:28 -0500 answered a question instance launched with flavour doesn't add required Hard disk

Afaik: assuming you're using a cloud-enabled version of Linux, the disk configuration should be done by cloud-init on first boot.

https://cloudinit.readthedocs.io/en/l...
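
A quick way to check whether cloud-init actually did the disk setup on first boot (a sketch, assuming a guest with cloud-init installed and its default log location):

cloud-init status                                      # did cloud-init finish without errors?
lsblk                                                  # compare partition size with the flavor's disk
grep -iE 'growpart|resizefs' /var/log/cloud-init.log   # did the resize modules run?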

2018-06-11 07:05:00 -0500 commented answer openstack kolla - HA databases unable to start

Since you (well deservedly) set me up for an RTFM last time, I did try harder this time, but I can't find any mention of a "kolla-ansible STOP" command in the list at:

https://docs.openstack.org/kolla-ansi...

A million thanks for your help

2018-06-11 01:59:52 -0500 asked a question openstack kolla - HA databases unable to start

I've installed an 8-node cloud with OpenStack kolla-ansible, with dual controllers.

It all works fine, until I stop and start it. (Which is related to my earlier question on stopping a cloud)

On boot, the mariadb/galera components don't seem to be able to start/sync.

I started the first controller, and the MariaDB log says (from the previous shutdown):

2018-06-08 17:26:00 140662590556928 [Note] InnoDB: Online DDL : Completed
2018-06-08 20:12:08 140664414825216 [Note] /usr/libexec/mysqld: Normal shutdown

2018-06-08 20:12:08 140664414825216 [Note] WSREP: Stop replication
2018-06-08 20:12:08 140664414825216 [Note] WSREP: Closing send monitor...
2018-06-08 20:12:08 140664414825216 [Note] WSREP: Closed send monitor.
2018-06-08 20:12:11 140664414825216 [Note] WSREP: gcomm: terminating thread
2018-06-08 20:12:11 140664414825216 [Note] WSREP: gcomm: joining thread
2018-06-08 20:12:11 140664414825216 [Note] WSREP: gcomm: closing backend
2018-06-08 20:12:15 140664414825216 [Note] WSREP: (958d6a22, 'tcp://172.16.101.100:4567') turning message relay requesting on, nonlive peers: tcp://172.16.101.109:4567
2018-06-08 20:12:16 140664414825216 [Note] WSREP: (958d6a22, 'tcp://172.16.101.100:4567') reconnecting to 8361901d (tcp://172.16.101.109:4567), attempt 0
180611 08:44:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql/
180611 08:44:29 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql//wsrep_recovery.zFLp3L' --pid-file='/var/lib/mysql//ctrl1.lab1.stack-recover.pid'
2018-06-11  8:44:29 139883731843264 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 184 ...
180611 08:44:51 mysqld_safe WSREP: Recovered position 6f86c600-6b10-11e8-97f6-9b647609adad:43931
2018-06-11  8:44:52 140457847679168 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 220 ...
2018-06-11  8:44:52 140457847679168 [Note] WSREP: Read nil XID from storage engines, skipping position init
2018-06-11  8:44:52 140457847679168 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
2018-06-11  8:44:52 140457847679168 [Note] WSREP: wsrep_load(): Galera 3.16(r5c765eb) by Codership Oy <info@codership.com> loaded successfully.
2018-06-11  8:44:52 140457847679168 [Note] WSREP: CRC-32C: using "slicing-by-8" algorithm.
2018-06-11  8:44:52 140457847679168 [Note] WSREP: Found saved state: 6f86c600-6b10-11e8-97f6-9b647609adad:-1
2018-06-11  8:44:52 140457847679168 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 172.16.101.100; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.listen_addr = tcp://172.16.101.100:4567; gmcast.segment = 0; gmcast.version = 0; ist.recv_addr = 172.16.1
2018-06-11  8:44:52 140457752131328 [Note] WSREP: Service thread ...
(more)
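
For what it's worth, kolla-ansible ships a recovery play for exactly this galera bootstrap problem (a sketch; the inventory file name is assumed):

kolla-ansible -i multinode mariadb_recovery   # finds the node with the highest wsrep seqno and bootstraps the cluster from it
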
2018-06-06 05:20:59 -0500 asked a question iptables blocking traffic from VM (kolla ansible multinode)

I've got a number of different clouds set up with multiple nodes, with kolla-ansible, Queens. The ones I'm having problems with (mostly at least) have one node each for controller, neutron, compute, and storage.

I've been troubleshooting the metadata proxy service, since the booting VM couldn't access 169.254.169.254, and I found weird errors in the logs.

It turns out, after a while, that my VMs can't even ping their router on their virtual network. The router, on the other hand, has no issues pinging the VM instance (after security groups are opened, of course).

Tcpdumping on the Linux bridge on the compute nodes shows the ICMP messages being sent but never forwarded towards the OVS interfaces, and being a BSD/PF lover I tried all I could to NOT have to look at the disgusting iptables.

And now that I have, I can see (I think) that the iptables are throwing away my unicast packets towards the router. - WTF???

I checked with my other working cloud, and found a slight difference in the iptables.

On the NON-WORKING node I've got a chain that looks like this:

Chain neutron-openvswi-o1ec43cbb-d (2 references)
 pkts bytes target     prot opt in     out     source               destination
    2   658 RETURN     udp  --  *      *       0.0.0.0              255.255.255.255      udp spt:68 dpt:67 /* Allow DHCP client traffic. */
 3433  286K neutron-openvswi-s1ec43cbb-d  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp spt:68 dpt:67 /* Allow DHCP client traffic. */
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp spt:67 dpt:68 /* Prevent DHCP Spoofing by VM. */
   31  2604 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            state INVALID /* Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack. */
 3402  284K neutron-openvswi-sg-fallback  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* Send unmatched traffic to the fallback chain. */

Whereas on a WORKING node:

Chain neutron-openvswi-o18675d60-8 (2 references)
 pkts bytes target     prot opt in     out     source               destination
    2   658 RETURN     udp  --  *      *       0.0.0.0              255.255.255.255      udp spt:68 dpt:67 /* Allow DHCP client traffic. */
 1073 97955 neutron-openvswi-s18675d60-8  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp spt:68 dpt:67 /* Allow DHCP client traffic. */
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp spt:67 dpt:68 /* Prevent DHCP Spoofing by VM. */
  284 32113 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
  789 65842 RETURN ...
(more)
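
To pin down which rule eats the packets, watching the per-rule counters while pinging from the VM seems the quickest way (a sketch; the per-port chain names above are specific to my VMs):

# the counter that grows on the DROP/fallback line is the rule discarding traffic
watch -n1 'iptables -L neutron-openvswi-o1ec43cbb-d -n -v -x'
iptables -S | grep 1ec43cbb    # dump the full rule set for that port for comparison
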
2018-06-05 06:41:14 -0500 asked a question restarting/stopping multinode openstack kolla cloud

I think I read somewhere that the normal shutdown procedure for openstack nodes should be "compute, storage, network, controller", in that order.

But how does this apply in an OpenStack kolla-ansible environment, where everything is also running in containers?

The kolla-ansible command does not have (please add!!!) a "shutdown" command, which would be very helpful.

Should I maybe even terminate certain docker containers in a certain order, to ensure service/database consistency?

I'm asking since I am running my nodes in KVM VMs (test environment) and I am starting to suspect that I get persistent errors when shutting down (or starting up?) in the "wrong" order...

Any help will be very appreciated.
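
The closest workaround I've found so far is stopping the containers host by host in the order above (a sketch; later kolla-ansible releases reportedly also grew a stop subcommand):

# on each node, in the order compute -> storage -> network -> controller:
docker stop $(docker ps -q)              # stops every kolla container on that host
# on newer releases, reportedly: kolla-ansible -i multinode stop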

2018-05-30 16:05:52 -0500 commented question [Tacker] How to forcefully remove stale vnf's from the list

And now, a year later... this is now a problem for me....

2018-05-24 02:50:38 -0500 commented question glance cannot create multiple volumes simultaneously

I would not have thought that it's "common" to consider running cinder-volume on the control node, where I would try to isolate my api-services. I would attempt to run those on custom storage nodes...?

2018-05-23 03:43:54 -0500 received badge  Famous Question (source)
2018-05-22 03:26:36 -0500 answered a question How does traffic flow in OVS/VXLAN networks

Regarding Q1, the first packet would normally be an ARP broadcast, which would be flooded to all compute nodes, causing MAC learning of the source along the forwarding path. The ARP response would then cause MAC learning for the originally requested destination, and voila! :-)

With l2pop I don't know the actual details, but since the whole idea is to pre-populate the FDB of the OVS, I have simply assumed that OVSDB or the ovs CLI tools are used to populate the FDB of each OVS on each compute node.

I can't find the reference now, but I think I've read somewhere that the OVS tag in bridges is not an 802.1Q tag, meaning it's 24..32 bits or something like that, and doesn't have the limitation of max ~4000 VLANs.
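
To check this yourself on a compute node (a sketch; bridge names are the neutron/OVS defaults):

ovs-appctl fdb/show br-int     # learned MAC table of the integration bridge
ovs-ofctl dump-flows br-tun    # with l2pop, remote MACs show up as pre-installed unicast flows rather than being flood-learned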

2018-05-14 05:41:08 -0500 received badge  Notable Question (source)
2018-05-14 03:41:27 -0500 commented question Enable ceilometer in queens using kolla-ansible

Can you verify that gnocchi is up and running? I've several times run into the issue (at least in multi-node configs) that the gnocchi container is broken, with a missing gnocchi_config in the ~gnocchi/tmp/ directory..
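
What I usually check first (a sketch; container names are the kolla defaults):

docker ps -a | grep gnocchi          # is the container up, or restart-looping?
docker logs --tail 50 gnocchi_api    # look for the missing-config error on startup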

2018-05-14 03:39:14 -0500 received badge  Famous Question (source)
2018-05-13 16:47:37 -0500 received badge  Popular Question (source)
2018-05-12 20:21:04 -0500 received badge  Notable Question (source)
2018-05-09 13:37:22 -0500 asked a question inconsistencies in openstack table columns using CLI

I don't know if I am misunderstanding something, but I find this quite confusing when it comes to the case-sensitivity requirement on column names in CLI output...

[root@compute2 ~(admin)]# oo user list -c Name -c id --project admin
+-------+
| Name  |
+-------+
| admin |
+-------+
[lab0]:admin@admin
[root@compute2 ~(admin)]# oo user list -c Name -c ID --project admin
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 99c5644b334b458e95f3f135f7b426cf | admin |
+----------------------------------+-------+
[lab0]:admin@admin
[root@compute2 ~(admin)]# oo user list -c Name -c id --project admin
+-------+
| Name  |
+-------+
| admin |
+-------+
[lab0]:admin@admin
[root@compute2 ~(admin)]# oo user show admin -c Name -c id
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 99c5644b334b458e95f3f135f7b426cf |
+-------+----------------------------------+
[lab0]:admin@admin
[root@compute2 ~(admin)]# oo user show admin -c Name -c ID
No recognized column names in [u'Name', u'ID']. Recognized columns are (u'domain_id', u'enabled', u'id', u'name', u'options', u'password_expires_at').

Personally I would prefer case-insensitivity, but if not, then should it not at least be consistent?
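
For the record, the consistent invocations appear to be: list commands take the capitalized column headings, while show commands take the lowercase field names (observed behaviour on my install, not a documented guarantee):

oo user list -c ID -c Name --project admin    # list: capitalized headings
oo user show admin -c id -c name              # show: lowercase field names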

2018-05-08 06:04:29 -0500 commented question api-services only bind to INTERNAL api address when not using haproxy with kolla-ansible

The only example I've found (regarding the api-interface) is: "The admin API endpoint allows modifying users and tenants by default, while the public and internal APIs do not allow these operations." But if you read the Identity API, I can't see anything saying "POST v3/projects/" requires you to use the admin interface.

2018-05-08 05:55:08 -0500 commented question api-services only bind to INTERNAL api address when not using haproxy with kolla-ansible

A pointer to the discussion on the major security issue with binding to 0.0.0.0 would be nice as well, since I would be surprised if anyone makes the controller node reachable without passing through firewalls..

2018-05-08 05:31:56 -0500 commented question api-services only bind to INTERNAL api address when not using haproxy with kolla-ansible

It's the "admin only allows admin tasks" that is highly confusing. WHAT/which operation is an admin task? Where is it clearly defined which tasks need to be performed on the admin interface?!? It's not very clear, since I thought the user role assignment was there to handle authorization.