
amkgi's profile - activity

2019-08-22 05:51:36 -0500 received badge  Notable Question (source)
2019-08-22 05:51:36 -0500 received badge  Popular Question (source)
2019-03-28 03:11:31 -0500 asked a question Network issues when adding a new external network

I want to add a new external network. But when I add the new bridge to openvswitch_agent.ini and restart the l3-agent and openvswitch-agent, one of the existing networks stops working. That network uses the same bond in Open vSwitch as the new network.

New bridge:

Bridge "br-ex2"
    Controller "tcp:127.0.0.1:6633"
    fail_mode: secure
    Port "bond1.83"
        Interface "bond1.83"
    Port "phy-br-ex2"
        Interface "phy-br-ex2"
            type: patch
            options: {peer="int-br-ex2"}
    Port "br-ex2"
        Interface "br-ex2"
            type: internal

Old bridge:

Bridge br-ex
    Controller "tcp:127.0.0.1:6633"
    fail_mode: secure
    Port br-ex
        Interface br-ex
            type: internal
    Port phy-br-ex
        Interface phy-br-ex
            type: patch
            options: {peer=int-br-ex}
    Port "bond1.550"
        Interface "bond1.550"

bond1:

NAME=bond1
BONDING_MASTER=yes
MTU=9000
BOOTPROTO=none
BONDING_OPTS="miimon=100 mode=active-backup"
DEVICE=bond1
TYPE=Bond
ONBOOT=yes
NM_CONTROLLED=no

bond1.550:

DEVICE=bond1.550
NAME=bond1.550
BOOTPROTO=none
ONPARENT=yes
VLAN=yes
NM_CONTROLLED=no

bond1.83:

DEVICE=bond1.83
NAME=bond1.83
BOOTPROTO=none
ONPARENT=yes
VLAN=yes
NM_CONTROLLED=no

openvswitch_agent.ini:

[agent]
tunnel_types = gre,vxlan
l2_population = True

[ovs]
bridge_mappings = external:br-ex,dmz:br-dmz,external2:br-ex2
local_ip = 10.10.21.3

[securitygroup]
firewall_driver = iptables_hybrid

l3_agent.ini:

[DEFAULT]
interface_driver = openvswitch
external_network_bridge =

ml2_conf.ini on controllers:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre,vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = external,external2,dmz

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vlan]
network_vlan_ranges = vlan:1000:2999

[ml2_type_vxlan]
vni_ranges = 1001:2999

[securitygroup]
enable_ipset = true

After adding external2 to openvswitch_agent.ini, "external" stops working, but the DMZ network works fine. If I remove external2:br-ex2 from openvswitch_agent.ini, "external" starts working again.

I can't add new network cards to the server, and I can't break up the bond because we need fault tolerance at the interface level. Maybe I missed something in the l3-agent configuration or somewhere else?
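
For completeness, this is roughly how I would create the new flat network on top of the external2 mapping; the network name and subnet range below are only placeholders:

openstack network create ext-net2 --external --provider-network-type flat --provider-physical-network external2
openstack subnet create ext-subnet2 --network ext-net2 --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 --no-dhcp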

2019-03-11 00:50:36 -0500 answered a question "nova-manage cell_v2 discover_hosts" finds non-existing nodes.

I ran su -s /bin/sh -c "nova-manage db archive_deleted_rows --until-complete --verbose" nova, and afterwards I no longer saw 45 nodes, only 20. Next I connected to the nova DB and manually removed osk013/osk014 from the 'compute_nodes' table. Now I see "There are 14 compute resource providers and 18 compute nodes in the deployment", which is correct: I haven't added the "[placement]" section to the nova config on 4 nodes yet.
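
Roughly the full sequence I used; the SQL here is only a sketch, adjust the WHERE clause to your own stale hostnames:

su -s /bin/sh -c "nova-manage db archive_deleted_rows --until-complete --verbose" nova
# then, in the nova database, drop the leftover records, e.g.:
# DELETE FROM compute_nodes WHERE hypervisor_hostname IN ('osk013', 'osk014');
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova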

2019-03-10 23:04:23 -0500 asked a question "nova-manage cell_v2 discover_hosts" finds non-existing nodes.

I upgraded OpenStack from Newton to Ocata and I have some problems with nova-placement: Nova finds non-existent nodes. I tried to remove these hosts from the cell with su -s /bin/sh -c "nova-manage cell_v2 delete_host --cell_uuid 5eccb670-bccc-4816-bd13-806da5649d66 --host osk013(osk014)" nova, but this has no effect; they appear again when I run "nova-manage cell_v2 discover_hosts".

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 5eccb670-bccc-4816-bd13-806da5649d66
Found 20 computes in cell: 5eccb670-bccc-4816-bd13-806da5649d66
Checking host mapping for compute host 'osc006': 14c7f1e4-e74c-42c0-8cd0-3980950ea816
Checking host mapping for compute host 'osk013': e6464929-fe24-44e0-86f6-1c3d10fe7ad9 (!)
Creating host mapping for compute host 'osk013': e6464929-fe24-44e0-86f6-1c3d10fe7ad9 (!)
Checking host mapping for compute host 'osc007': 6d080284-b541-4b93-8681-4afb64b4bf2f
Checking host mapping for compute host 'osk014': 464d24c4-1baf-4d8f-a7e9-da9dbad571dc (!)
Creating host mapping for compute host 'osk014': 464d24c4-1baf-4d8f-a7e9-da9dbad571dc (!)
Checking host mapping for compute host 'osc009': 678d0dd0-9d26-43c0-b444-f5d00dacb464
Checking host mapping for compute host 'osc010': 9aef642b-7051-4b64-a711-8338906b44ce
Checking host mapping for compute host 'osc011': 073823ca-949c-46ce-aa3d-55e51007d4f4
Checking host mapping for compute host 'osc008': 9e622863-4f29-4549-8fda-788de188e705
Checking host mapping for compute host 'osc012': b2531dd4-370a-40bd-91ce-d8ca05b763f3
Checking host mapping for compute host 'osc013': 0416d93c-84e2-4d32-9f10-9b923aa2d9bb
Checking host mapping for compute host 'osc014': 4d1b2b8f-4d7a-421e-a117-fa2bd4067b63
Checking host mapping for compute host 'osc015': 1a6d2657-8ccc-4a26-be7d-1632cdd49a64
Checking host mapping for compute host 'osc016': 0e7b8492-1e5f-4e8c-aab9-92294bcf36a9
Checking host mapping for compute host 'osc017': f6fe97e6-3e1c-403c-9096-4a26f97e4844
Checking host mapping for compute host 'osc018': 55a726d6-1e9e-4229-aa0c-88bcedf1f688
Checking host mapping for compute host 'osc019': b30d9f30-cc1b-4048-b1aa-dedc1798a3c0
Checking host mapping for compute host 'osc020': 9c73f396-f811-492f-8b5a-c653d2762cb2
Checking host mapping for compute host 'osc023': bacc2922-dc2f-48e8-ab5d-33135204c047
Checking host mapping for compute host 'osc021': 2d1ccc0d-98f8-42b9-9703-8cc3e11ab50f
Checking host mapping for compute host 'osc022': cac78822-5278-4276-831b-7fd9e0e655eb

openstack compute service list
+-----+------------------+-----------+----------+---------+-------+----------------------------+
|  ID | Binary           | Host      | Zone     | Status  | State | Updated At                 |
+-----+------------------+-----------+----------+---------+-------+----------------------------+
| 104 | nova-compute     | osc006 | nova     | enabled | up    | 2019-03-11T03:42:23.000000 |
| 107 | nova-compute     | osc007 | nova     | enabled | up    | 2019-03-11T03:42:19.000000 |
| 113 | nova-compute     | osc009 | nova     | enabled | up    | 2019-03-11T03:42:22.000000 |
| 117 | nova-compute     | osc010 | nova     | enabled | up    | 2019-03-11T03:42:24.000000 |
| 120 | nova-compute     | osc011 | nova     | enabled | up    | 2019-03-11T03:42:22.000000 |
| 152 | nova-compute     | osc008 | nova     | enabled | up    | 2019-03-11T03:42:29.000000 |
| 153 | nova-consoleauth | osx001 | internal | enabled | up    | 2019-03-11T03:42:24.000000 |
| 154 | nova-conductor   | osx001 | internal | enabled | up    | 2019-03-11T03:42:27.000000 |
| 155 | nova-scheduler   | osx001 | internal | enabled | up    | 2019-03-11T03:42:27.000000 |
| 156 | nova-scheduler   | osx002 | internal | enabled | up    | 2019-03-11T03:42:26.000000 |
| 157 | nova-conductor   | osx002 | internal | enabled | up    | 2019-03-11T03:42:27.000000 |
| 158 | nova-consoleauth | osx002 | internal | enabled | up    | 2019-03-11T03:42:29.000000 |
| 159 | nova-scheduler   | osx003 | internal | enabled | up    | 2019-03-11T03:42:27.000000 |
| 160 | nova-consoleauth | osx003 | internal | enabled | up    | 2019-03-11T03:42:20.000000 |
| 161 | nova-conductor   | osx003 | internal | enabled | up    | 2019-03-11T03:42:27.000000 |
| 184 | nova-compute     | osc012 | nova     | enabled | up    | 2019-03-11T03:42:19.000000 |
| 190 | nova-compute     | osc013 | nova     | enabled | up    | 2019-03-11T03:42:21.000000 |
| 196 | nova-compute     | osc014 | nova     | enabled | up    | 2019-03-11T03:42:25.000000 |
| 199 | nova-compute     | osc015 | nova     | enabled | up    | 2019-03-11T03:42:24.000000 |
| 202 | nova-compute     | osc016 | nova     | enabled | up    | 2019-03-11T03:42:20.000000 |
| 208 | nova-compute     | osc017 | nova     | enabled | up    | 2019-03-11T03:42:23.000000 |
| 214 | nova-compute     | osc018 | nova     | enabled | up    | 2019-03-11T03:42:20.000000 ...
(more)
2019-03-05 06:59:36 -0500 asked a question Why auto evacuation does not work?

When we tested OpenStack Kilo we checked this feature: we disabled one of the compute nodes and the instances were recreated on another node. Now we use Ocata, and not long ago one of the compute nodes became unavailable, but the instances were not recreated automatically on other nodes. Is this feature no longer available in new releases?
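
For reference, the manual way to recreate instances from a failed node is something like this (host and instance names are placeholders), but I expected it to happen automatically:

openstack compute service set --disable osc006 nova-compute
openstack server list --all-projects --host osc006
nova evacuate <instance-uuid> osc007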

2019-03-02 06:54:18 -0500 received badge  Popular Question (source)
2019-02-25 00:23:13 -0500 answered a question Live migration doesn't work after upgrade controllers from Newton to Ocata

The problem was caused by the change of controller hostnames. I forgot to update the hosts file on the compute nodes.
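
For anyone hitting the same thing, the fix was simply to add the new controller names to /etc/hosts on every compute node, roughly like this ("controller" is the API endpoint name from the error; the IP addresses below are placeholders):

10.10.20.10  controller
10.10.20.11  osx001
10.10.20.12  osx002
10.10.20.13  osx003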

2019-02-22 07:33:55 -0500 asked a question Live migration doesn't work after upgrade controllers from Newton to Ocata

For the upgrade I deployed new controllers on VMs and installed all packages from the Ocata repos. I used new hostnames for the new controllers, but I didn't change the IP addresses. After all the db syncs, I checked all services, but live migration doesn't work. Nova list works fine; glance, cinder, and neutron do too. I can launch instances, delete them, associate IPs and more, but live and cold migration both fail.

openstack server migrate --live compute01 instance01 --debug:

clean_up MigrateServer: Unable to establish connection to http://controller:8774/v2.1/20d6b9dc5e264779a40dfb334197ae67/servers/f6b517e0-b3cf-4a77-8e17-3ec315476830/action: ('Connection aborted.', BadStatusLine("''",))
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 135, in run
    ret_val = super(OpenStackShell, self).run(argv)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 279, in run
    result = self.run_subcommand(remainder)
  File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 180, in run_subcommand
    ret_value = super(OpenStackShell, self).run_subcommand(argv)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 41, in run
    return super(Command, self).run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/command.py", line 90, in run
    return self.take_action(parsed_args) or 0
  File "/usr/lib/python2.7/site-packages/openstackclient/compute/v2/server.py", line 1084, in take_action
    disk_over_commit=parsed_args.disk_overcommit,
  File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 402, in substitution
    return methods[-1].func(obj, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 459, in live_migrate
    disk_over_commit)
  File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 402, in substitution
    return methods[-1].func(obj, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1698, in live_migrate
    'disk_over_commit': disk_over_commit})
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1908, in _action
    info=info, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1919, in _action_return_resp_and_body
    return self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 223, in post
    return self.request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 74, in request
    **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 374, in request
    resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 142, in request
    return self.session.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/osc_lib/session.py", line 40, in request
    resp = super(TimingSession, self).request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
    return wrapped(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 616, in request
    resp = send(**kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 690, in _send_request
    raise exceptions.ConnectFailure(msg)
ConnectFailure: Unable to establish connection to http://controller:8774/v2.1/20d6b9dc5e264779a40dfb334197ae67/servers/f6b517e0-b3cf-4a77-8e17-3ec315476830/action: ('Connection aborted ...
(more)
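
A basic sanity check of the nova-api endpoint from a compute node would be something like this (the URL is the one from the trace above):

getent hosts controller
curl -sS -i http://controller:8774/
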
2019-02-22 04:20:36 -0500 received badge  Supporter (source)
2019-02-19 09:12:14 -0500 received badge  Notable Question (source)
2019-02-19 01:54:12 -0500 received badge  Popular Question (source)
2019-02-18 08:32:01 -0500 asked a question Do I need to have a separate server for cinder-volume if ceph is used?

The previous administrator set up several separate cinder-volume servers on VMs, but I don't understand why this is necessary, and I can't find any recommendation to do this when only Ceph is used. If I configure cinder-volume on the controllers, can it adversely affect the performance of Cinder or of the controllers? I want to remove the cinder-volume VMs if they aren't needed.
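
My understanding is that a Ceph-only backend for cinder-volume is just a small section in cinder.conf, roughly like the sketch below (pool, user and backend names are placeholders); backend_host would let several cinder-volume services on the controllers present themselves as one host:

[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <libvirt secret uuid>
backend_host = rbd:volumes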

2019-01-17 15:16:09 -0500 received badge  Notable Question (source)
2019-01-17 15:16:09 -0500 received badge  Famous Question (source)
2019-01-09 03:11:43 -0500 received badge  Popular Question (source)
2019-01-07 20:45:48 -0500 received badge  Editor (source)
2019-01-07 08:04:45 -0500 asked a question TripleO UI doesn't apply NtpServer in Base resources configuration. Why?

My overcloud has no access to the Internet, so I set specific NTP servers (10.10.0.5, 10.10.0.6) from our local network in the TripleO UI (Edit Configuration -> Parameters -> Base resources configuration). When deploying the overcloud, I get an error message that the controller cannot synchronize time.

{
  "msg": "non-zero return code",
  "start": "2019-01-07 06:39:35.553515",
  "stderr": "Error resolving pool.ntp.org: Name or service not known (-2)\n 7 Jan 06:39:35 ntpdate[17812]: Can't find host pool.ntp.org: Name or service not known (-2)\n 7 Jan 06:39:35 ntpdate[17812]: no servers can be used, exiting",
  "stderr_lines": [
    "Error resolving pool.ntp.org: Name or service not known (-2)",
    " 7 Jan 06:39:35 ntpdate[17812]: Can't find host pool.ntp.org: Name or service not known (-2)",
    " 7 Jan 06:39:35 ntpdate[17812]: no servers can be used, exiting"
  ],
  "stdout_lines": [],
  "stdout": "",
  "_ansible_no_log": false,
  "invocation": {
    "module_args": {
      "creates": null,
      "executable": null,
      "_uses_shell": false,
      "_raw_params": "ntpdate -u pool.ntp.org",
      "removes": null,
      "warn": true,
      "chdir": null,
      "stdin": null
    }
  },
  "rc": 1,
  "changed": true,
  "_ansible_parsed": true,
  "delta": "0:00:00.027991",
  "cmd": [
    "ntpdate",
    "-u",
    "pool.ntp.org"
  ],
  "end": "2019-01-07 06:39:35.581506"
}

Why does it use pool.ntp.org? Why doesn't it use 10.10.0.5/10.10.0.6?

I use the stable TripleO Rocky release.
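
As a workaround I'm thinking of passing the NTP servers in an environment file (passed with -e to openstack overcloud deploy) instead of the UI, roughly like this; the file name is arbitrary:

# ntp.yaml
parameter_defaults:
  NtpServer: ['10.10.0.5', '10.10.0.6']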

2019-01-06 00:18:48 -0500 received badge  Enthusiast
2019-01-05 08:36:56 -0500 answered a question TripleO Overcloud installation failed

I had this problem. My overcloud has compute on bare metal and a controller on a VM, and deployment of the controller failed. I added more CPU, RAM, and HDD resources, and the problem disappeared. Maybe your bare metal nodes have too few resources as well. Take a look at "openstack flavor list".
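
To compare the flavor requirements with what a node actually offers, something like this should work (the flavor name may differ in your setup):

openstack flavor list
openstack flavor show control
openstack baremetal node show <node> --fields properties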

2018-12-31 02:00:29 -0500 received badge  Famous Question (source)
2018-12-28 03:12:45 -0500 received badge  Notable Question (source)
2018-12-28 01:14:09 -0500 answered a question How migrate instances between clouds with the same storage?

I solved it this way. I created a similar instance in OpenStack Rocky, looked up the disk UUIDs in both OpenStack Rocky and Newton, then connected to Ceph and renamed the old disk UUID to the new one (rbd -p vms mv oldUUID_disk newUUID_disk). Before doing this, I deleted the disk that Rocky created, and both instances were shut off at that point. This can be done with a script to automate the process and minimize downtime.
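
Condensed into commands, the procedure was roughly this (instance names and UUIDs are placeholders):

openstack server stop old-instance
openstack server stop new-instance
rbd -p vms rm <new-uuid>_disk       # drop the empty disk Rocky created
rbd -p vms mv <old-uuid>_disk <new-uuid>_disk
openstack server start new-instance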

2018-12-27 21:28:06 -0500 received badge  Popular Question (source)
2018-12-27 09:28:16 -0500 commented question How migrate instances between clouds with the same storage?

I tried to migrate using virsh migrate and it was successful, but now I need to add the necessary information about this instance to the database so that it appears in OpenStack, and I don't quite understand how to do that correctly.

2018-12-27 05:51:12 -0500 asked a question How migrate instances between clouds with the same storage?

I have two OpenStack clouds: the first is Newton, and the second is Rocky. Both use the same Ceph. I found a lot of information on how to do this with snapshots, but that may take a long time. In Newton I have 250 instances, and for many of them downtime is very critical. Is there really no way to do this easily and quickly when a single shared storage is used?