
dendvk's profile - activity

2018-11-09 04:15:24 -0600 received badge  Famous Question (source)
2018-10-30 02:46:39 -0600 received badge  Organizer (source)
2018-10-29 04:28:44 -0600 received badge  Famous Question (source)
2018-10-26 04:47:48 -0600 answered a question unexpected keyword argument 'source_replica'

Hello.

I faced the same behavior on Rocky. Here is what I've found: https://bugs.launchpad.net/python-openstackclient/+bug/1786479, https://bugs.launchpad.net/rally/+bug/1785519

Here is the patch that solves this problem (at least on my stack): https://github.com/openstack/python-openstackclient/commit/3e5a2d227c813937ad7ea23d334f7d23c1f8a2cc
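For an RPM-based install, one rough way to apply that commit in place is something like the following; the site-packages path is from my CentOS 7 box, so adjust it for your system and keep a backup of the original files:

[root@controller ~]# cd /usr/lib/python2.7/site-packages
[root@controller site-packages]# curl -s https://github.com/openstack/python-openstackclient/commit/3e5a2d227c813937ad7ea23d334f7d23c1f8a2cc.patch | patch -p1

A cleaner option, of course, is simply to wait for an updated python2-openstackclient package that already contains the fix.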

P.S. "At least 10 karma points is required to post links", sorry

2018-10-24 06:59:03 -0600 received badge  Popular Question (source)
2018-10-24 06:59:03 -0600 received badge  Notable Question (source)
2018-10-12 17:03:20 -0600 received badge  Notable Question (source)
2018-10-10 04:25:55 -0600 asked a question It is impossible to release IP from instance: 'remove_fixed_ip' string is displayed and nothing happens

Hello.

I'm trying to do a simple thing: add a fixed IP address to an instance and then remove it. Here is how I'm doing this:

[root@controller ~]# openstack server list
...
| 99efe143-322b-4c87-a84f-15ea003f9cda | centos7-instanceCOSBM-hvm  | ACTIVE  | provider=10.30.119.105 | centos7-hvm | centos7-hvm |
[root@controller ~]# openstack server add fixed ip 99efe143-322b-4c87-a84f-15ea003f9cda 2540cb0c-6076-4683-83b3-0ffd2984fa7b
[root@controller ~]# openstack server list
...
| 99efe143-322b-4c87-a84f-15ea003f9cda | centos7-instanceCOSBM-hvm  | ACTIVE  | provider=10.30.119.105, 10.30.119.111 | centos7-hvm | centos7-hvm |
[root@controller ~]# openstack server remove fixed ip 99efe143-322b-4c87-a84f-15ea003f9cda 10.30.119.111
remove_fixed_ip
[root@controller ~]#

And nothing happens. Using the instance name instead of the ID, running the command under admin credentials, or using the deprecated CLI "ip fixed remove" does not help or change the behavior in any way. The nova logs do not show any errors, but the neutron log does:

2018-10-10 05:19:05.135 2007 WARNING neutron.pecan_wsgi.controllers.root [req-c4e5b0d0-a577-430b-804f-680f4e092891 1cf4cb98f535455b8b9ca79ed663b8d1 a8cbae18e3b04f64b9ffcc3b66656a2f - default default] No controller found for: floatingips - returning response code 404: PecanNotFound
2018-10-10 05:19:05.136 2007 INFO neutron.pecan_wsgi.hooks.translation [req-c4e5b0d0-a577-430b-804f-680f4e092891 1cf4cb98f535455b8b9ca79ed663b8d1 a8cbae18e3b04f64b9ffcc3b66656a2f - default default] GET failed (client error): The resource could not be found.

I tried this on both Queens and Rocky with the same result.

Is this a bug or some kind of misconfiguration of my stack?
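As a possible workaround I was also thinking about dropping the address directly from the port, roughly like this (not verified yet; <subnet-id> and <port-id> are placeholders):

[root@controller ~]# openstack port list --server 99efe143-322b-4c87-a84f-15ea003f9cda
[root@controller ~]# openstack port unset --fixed-ip subnet=<subnet-id>,ip-address=10.30.119.111 <port-id>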

Versions:

python2-ncclient-0.4.7-5.el7.noarch
python2-glanceclient-2.12.1-1.el7.noarch
python2-oauth2client-1.5.2-3.el7.1.noarch
python2-openstackclient-3.16.1-1.el7.noarch
python2-keystoneclient-3.17.0-1.el7.noarch
python2-neutronclient-6.9.0-1.el7.noarch
python2-barbicanclient-4.7.0-1.el7.noarch
python-openstackclient-lang-3.16.1-1.el7.noarch
python2-cinderclient-4.0.1-1.el7.noarch
python2-swiftclient-3.6.0-1.el7.noarch
python2-designateclient-2.10.0-1.el7.noarch
python2-google-api-client-1.4.2-4.el7.noarch
python2-os-client-config-1.31.2-1.el7.noarch
python2-novaclient-11.0.0-1.el7.noarch

openstack-nova-scheduler-18.0.0-1.el7.noarch
openstack-cinder-13.0.0-1.el7.noarch
openstack-nova-common-18.0.0-1.el7.noarch
openstack-glance-17.0.0-1.el7.noarch
openstack-nova-novncproxy-18.0.0-1.el7.noarch
openstack-keystone-14.0.0-1.el7.noarch
openstack-neutron-13.0.1-1.el7.noarch
openstack-selinux-0.8.14-1.el7.noarch
openstack-dashboard-14.0.0-1.el7.noarch
openstack-nova-api-18.0.0-1.el7.noarch
openstack-nova-placement-api-18.0.0-1.el7.noarch
openstack-neutron-ml2-13.0.1-1.el7.noarch
centos-release-openstack-rocky-1-1.el7.centos.noarch
python2-openstacksdk-0.17.2-1.el7.noarch
openstack-nova-conductor-18.0.0-1.el7.noarch
openstack-neutron-linuxbridge-13.0.1-1.el7.noarch
openstack-neutron-common-13.0.1-1.el7.noarch
openstack-nova-console-18.0.0-1.el7.noarch
openstack-dashboard-theme-14.0.0-1.el7.noarch

Finally, I posted a bug: https://bugs.launchpad.net/nova/+bug/1800998

2018-09-27 03:48:09 -0600 received badge  Enthusiast
2018-09-25 22:45:56 -0600 received badge  Popular Question (source)
2018-09-25 01:32:31 -0600 received badge  Supporter (source)
2018-09-25 01:17:36 -0600 received badge  Scholar (source)
2018-09-25 01:17:05 -0600 commented answer How to upgrade between releases

Btw, if there are no cumulative release notes for OpenStack as a whole, let's consider this a feature request. But for me, this is a bug. Every RPM has pre- and post-install sections, which are great for upgrade scripts, imho, like this: "# su -s /bin/sh -c "nova-manage api_db sync" nova"...
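Roughly the kind of sequence that has to be run by hand today after a package update (just a sketch; the exact list depends on which services are installed):

# su -s /bin/sh -c "keystone-manage db_sync" keystone
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron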

2018-09-25 01:14:08 -0600 commented answer How to upgrade between releases

Bernd, thank you for your answer.

Yes, indeed, upgrading across one or several releases is the best way to potentially re-deploy your cloud from scratch. Also, I found the per-project release notes, and they were really helpful, e.g. for figuring out the Keystone port changes. To be continued...

2018-09-24 06:28:04 -0600 received badge  Editor (source)
2018-09-24 06:26:15 -0600 asked a question How to upgrade between releases

Hello.

The recent upgrade of my stack from Pike to Queens wasn't smooth, but after a dance with yum (-e centos-release-openstack-pike, -i centos-release-openstack-queens, "missing requires" and so on) it finally happened and even works.

But the upgrade from Queens to Rocky was terrible. Just installing the new repo (# yum install -y centos-release-openstack-rocky) and upgrading the packages (# yum update -y) left me with 1) database inconsistency and 2) nothing listening on keystone port 35357 for the admin/internal endpoints.

After some digging I fixed the DB structure, and after reading the keystone release notes I remapped all endpoints to port 5000 (by a direct UPDATE in MySQL!! - the only way I could think of).
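In hindsight, recreating the identity endpoints through the API would probably have been cleaner than editing MySQL directly; something roughly like this (untested on my stack; the endpoint ID and the "controller" host name are placeholders):

[root@controller ~]# openstack endpoint list --service identity
[root@controller ~]# openstack endpoint delete <old-admin-endpoint-id>
[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://controller:5000/v3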

But something deep inside me is telling me that I just got lucky several times and that I likely did the upgrade completely incorrectly.

My stack consists of one Controller node and several Compute nodes, all of which were installed following the official instructions via yum (i.e. no automated-deployment tools). Digging into the documentation, googling, and asking the Lord has not shed light on the main question: how do I upgrade OpenStack nodes between releases the correct way? Maybe I missed something right under my nose? Please help.

2018-09-03 13:55:28 -0600 asked a question Exact firewall rules for metadata agent on controller node

Hello.

I'm using the Queens OpenStack release, with the controller and compute nodes deployed on the latest CentOS 7 release. The network is configured to use the "provider network" model.

I've run into a problem where a built-in iptables rule on the controller node blocks the metadata responses to instances, so they are unable to retrieve the ssh key and other data during cloud-init on first boot via the http://169.254.169.254:80 URL (which is routed inside the instance to the metadata agent IP). Here it is:

-A FORWARD -j REJECT --reject-with icmp-host-prohibited

Last time I solved a very similar problem, related to DHCP requests, by adding the following rule to the /etc/sysconfig/iptables file before the "REJECT" one:

-A FORWARD -p udp -m udp --sport 67 --dport 68 -j ACCEPT

To resolve the metadata problem, I added the following rules to the "FORWARD" chain on the controller node, and they work:

-A FORWARD -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -p tcp -m tcp --sport 80 -j ACCEPT

But I'm not sure that they are completely correct. Can someone share their experience with this scenario?
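One idea I had was to narrow them down to the metadata address, something like this (I have not verified that it covers every case):

-A FORWARD -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j ACCEPT
-A FORWARD -s 169.254.169.254/32 -p tcp -m tcp --sport 80 -j ACCEPT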

Another question would be: why don't the init and run-time scripts of the OpenStack services on the controller node take care of iptables in all cases? The firewall driver is set correctly in nova/neutron, and some rules do appear in iptables after a restart of the services, in addition to the existing ones, but not the set necessary to avoid intercommunication problems between the controller and compute nodes.

Below is the full list of iptables rules:

[root@controller ~]# iptables-save
# Generated by iptables-save v1.4.21 on Fri Aug 31 03:03:24 2018
*nat
:PREROUTING ACCEPT [40331:4980395]
:INPUT ACCEPT [19152:1206716]
:OUTPUT ACCEPT [3424:208170]
:POSTROUTING ACCEPT [16794:2598368]
COMMIT
# Completed on Fri Aug 31 03:03:24 2018
# Generated by iptables-save v1.4.21 on Fri Aug 31 03:03:24 2018
*mangle
:PREROUTING ACCEPT [13992448:6525771277]
:INPUT ACCEPT [13783684:6505744476]
:FORWARD ACCEPT [206259:20684970]
:OUTPUT ACCEPT [13759465:6212240069]
:POSTROUTING ACCEPT [13965444:6232905496]
COMMIT
# Completed on Fri Aug 31 03:03:24 2018
# Generated by iptables-save v1.4.21 on Fri Aug 31 03:03:24 2018
*raw
:PREROUTING ACCEPT [13988200:6524208181]
:OUTPUT ACCEPT [13755278:6210758332]
:neutron-linuxbri-OUTPUT - [0:0]
:neutron-linuxbri-PREROUTING - [0:0]
-A PREROUTING -j neutron-linuxbri-PREROUTING
-A OUTPUT -j neutron-linuxbri-OUTPUT
COMMIT
# Completed on Fri Aug 31 03:03:24 2018
# Generated by iptables-save v1.4.21 on Fri Aug 31 03:03:24 2018
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [4556611:2056773792]
:neutron-filter-top - [0:0]
:neutron-linuxbri-FORWARD - [0:0]
:neutron-linuxbri-INPUT - [0:0]
:neutron-linuxbri-OUTPUT - [0:0]
:neutron-linuxbri-local - [0:0]
:neutron-linuxbri-sg-chain - [0:0]
:neutron-linuxbri-sg-fallback - [0:0]
-A INPUT -j neutron-linuxbri-INPUT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 123 -j ACCEPT
-A INPUT -p udp -m udp --sport 68 --dport 67 -j ACCEPT
-A INPUT -p udp -m udp --sport 67 --dport 68 -j ACCEPT
-A INPUT -p tcp -m ...