
Wajahat's profile - activity

2019-10-19 22:43:57 -0500 received badge  Notable Question (source)
2019-10-17 15:05:10 -0500 asked a question Networking-SFC chain routing not working

I am trying to use the networking-sfc extension to deploy a test service chain. I have been following the steps from this demo.

I am running the OpenStack Rocky release on CloudLab, and I have enabled the VXLAN data network. I am trying to create a simple routing topology like the one in this figure:

[figure: intended service-chain routing topology]

I create the six ports on the VXLAN network.

neutron port-list | grep p[0-9]
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
| 06526b62-5ef7-4e47-9d1f-b8e24b949c6c | p2   | c99a228f51ff400da0eb33f74e0d0369 | fa:16:3e:32:8a:28 | {"subnet_id": "20297148-820a-4014-aedc-e1dc341fb8d8", "ip_address": "10.253.0.7"}      |
| 526a7e2c-5954-4d69-b1e2-2e8a46b9a472 | p3   | c99a228f51ff400da0eb33f74e0d0369 | fa:16:3e:ef:2a:dd | {"subnet_id": "20297148-820a-4014-aedc-e1dc341fb8d8", "ip_address": "10.253.0.4"}      |
| c5fe9123-716a-4d37-ac14-2be3aba27bd6 | p6   | c99a228f51ff400da0eb33f74e0d0369 | fa:16:3e:fa:53:7b | {"subnet_id": "20297148-820a-4014-aedc-e1dc341fb8d8", "ip_address": "10.253.0.34"}     |
| d4e71ec4-2e40-45f0-953a-12cedbf12a6b | p1   | c99a228f51ff400da0eb33f74e0d0369 | fa:16:3e:a8:0d:bd | {"subnet_id": "20297148-820a-4014-aedc-e1dc341fb8d8", "ip_address": "10.253.0.3"}      |
| ed248660-de3c-4cb6-b59b-2bbfa9063a70 | p4   | c99a228f51ff400da0eb33f74e0d0369 | fa:16:3e:33:ca:cf | {"subnet_id": "20297148-820a-4014-aedc-e1dc341fb8d8", "ip_address": "10.253.0.6"}      |
| fc98926d-59aa-42d5-9b62-33a8bae3ce1f | p5   | c99a228f51ff400da0eb33f74e0d0369 | fa:16:3e:a9:af:4f | {"subnet_id": "20297148-820a-4014-aedc-e1dc341fb8d8", "ip_address": "10.253.0.12"}     |

Then I create port-pairs:

neutron port-pair-create --ingress 06526b62-5ef7-4e47-9d1f-b8e24b949c6c --egress 526a7e2c-5954-4d69-b1e2-2e8a46b9a472 pp1
neutron port-pair-create --ingress ed248660-de3c-4cb6-b59b-2bbfa9063a70 --egress fc98926d-59aa-42d5-9b62-33a8bae3ce1f pp2

and port-pair-groups:

neutron port-pair-group-create --port-pairs pp1 pg1

neutron port-pair-group-create --port-pairs pp2 pg2

Finally, I create a flow-classifier and a port-chain that uses it:

neutron flow-classifier-create --protocol icmp --ethertype IPv4 --logical-source-port p1 --logical-destination-port p6 icmp_class
neutron port-chain-create --port-pair-group pg1 --port-pair-group pg2 --flow-classifier icmp_class --chain-parameters symmetric=true pc1

After this, I ping and traceroute from VM1 to VM4 (both running Ubuntu 14.04) and start tcpdump on VM2 and VM3 (on the eth1 interfaces associated with p2 and p4 respectively). But I don't see any change in the packet path: traffic still goes directly from VM1 to VM4, bypassing VM2 and VM3.

Does anyone know what step I may have missed, or can someone help me debug this?

I have not modified anything inside the VMs (no static routes and no IP forwarding enabled), but if that were the problem, wouldn't the packets simply get dropped rather than bypass the chain?
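For anyone hitting the same thing, this is the kind of first-round check I would run (a sketch: br-int is the default OVS integration bridge name, and the exact flow patterns depend on the SFC driver in use):

```shell
# Inside each service VM (VM2, VM3): without kernel IP forwarding, traffic
# steered in through the ingress port (p2/p4) is never sent back out the
# egress port (p3/p5).
sudo sysctl -w net.ipv4.ip_forward=1

# On the compute node: check whether the networking-sfc OVS driver actually
# installed steering flows on the integration bridge (the OVS driver
# typically uses MPLS labels and OpenFlow groups for chain hops).
sudo ovs-ofctl dump-flows br-int | grep -i -e mpls -e group

# Confirm the chain objects were created as expected.
neutron port-chain-list
neutron port-pair-group-list
```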

2019-10-14 13:52:18 -0500 commented question Unable to create flow-classifier for service function chaining: The resource could not be found

I was able to resolve this by using apt-get install python-networking-sfc instead of the pip install, and then configuring it according to the documentation.
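For completeness, the configuration after the apt-get install looked roughly like this (paraphrased from the networking-sfc documentation; the section and driver names may differ between releases, so verify against the docs for yours):

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
# append to whatever service plugins are already configured
service_plugins = ...,flow_classifier,sfc

[sfc]
drivers = ovs

[flowclassifier]
drivers = ovs
```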

2019-04-23 16:22:25 -0500 commented answer g-api did not start while running stack.sh

I never found a solution to this.

2019-02-27 03:24:16 -0500 received badge  Famous Question (source)
2019-02-05 01:47:37 -0500 commented answer Nova VM instances files change ownership to root after live migration

I am having the same problem, and using the above produces permission-denied errors when trying to live-migrate the instance: Could not access KVM kernel module: Permission denied

2019-02-05 01:47:37 -0500 received badge  Commentator
2018-11-15 15:48:41 -0500 commented answer Creating an instance with a specified name, or renaming an instance name after it got created?

I don't think this works in recent versions; I do not see the option in Pike. Has the command changed, or has it been removed completely?

2018-10-21 17:13:22 -0500 received badge  Famous Question (source)
2018-07-18 18:22:49 -0500 received badge  Notable Question (source)
2018-07-13 09:19:58 -0500 received badge  Popular Question (source)
2018-07-11 02:34:58 -0500 asked a question Unable to ssh into instance, route info failed

I am running an OpenStack Queens instance on CloudLab. Everything seems to be running fine (the installation is automated by the CloudLab OpenStack profile), but when I try to ssh into an instance after booting it, I get a no route to host error.

Here is the console log of the instance:

Cloud-init v. 0.7.5 running 'init' at Wed, 11 Jul 2018 06:19:03 +0000. Up 135.24 seconds.
ci-info: +++++++++++++++++++++++Net device info+++++++++++++++++++++++
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: | Device |  Up  |  Address  |    Mask   |     Hw-Address    |
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: |   lo   | True | 127.0.0.1 | 255.0.0.0 |         .         |
ci-info: |  eth0  | True |     .     |     .     | fa:16:3e:e5:3f:31 |
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
2018-07-11 06:19:03,940 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 101] Network is unreachable)]
2018-07-11 06:19:04,956 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 101] Network is unreachable)]
2018-07-11 06:19:05,968 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 101] Network is unreachable)]
.
.
. (the same warning repeats, with increasing elapsed times, toward the 120-second timeout)
(more)
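The net-device table shows eth0 up but with no address, so DHCP is failing before the metadata lookup is even attempted. A hedged first round of checks on the controller (assuming a standard Neutron deployment):

```shell
# Is a DHCP agent alive for the instance's network?
openstack network agent list
# Does the instance's port have a fixed IP and ACTIVE status?
openstack port list --server <instance-name>
# Is there a qdhcp namespace (with dnsmasq) for the network on the network node?
ip netns list | grep qdhcp
```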
2018-07-09 21:54:01 -0500 received badge  Notable Question (source)
2018-07-03 15:33:28 -0500 commented question Cold migrate with target host specification

When I tried this I got: ERROR (UnsupportedVersion): The specified version isn't supported by server. The valid version range is '2.1' to '2.53'. I think the Pike release does not support it, but Queens does.

2018-07-03 15:20:07 -0500 received badge  Popular Question (source)
2018-07-02 22:11:28 -0500 commented question Cold migrate with target host specification

Where should I set this? In admin-openrc.sh, or just passed as a parameter to the migrate command?

2018-07-02 17:02:25 -0500 asked a question Cold migrate with target host specification

I have a running installation of OpenStack Pike on CloudLab, and I found this patch, which states that a target-host parameter was added to the openstack server migrate command, but I am unable to find any information on how to use it from the openstack CLI. The help output shows no target-host parameter:

usage: openstack server migrate [-h] [--live <hostname>]
                                [--shared-migration | --block-migration]
                                [--disk-overcommit | --no-disk-overcommit]
                                [--wait]
                                <server>

Does anyone know how to make this work?
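For the record, what I later pieced together: cold migration to a named host needs compute API microversion 2.56 or later (Queens-era Nova), and a new enough python-openstackclient then accepts a host argument, roughly like this (hedged; the option spelling may vary by client version):

```shell
# Requires compute API microversion >= 2.56 and a recent openstackclient.
openstack --os-compute-api-version 2.56 server migrate --host <target-host> <server-id>
```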

2018-06-08 03:44:00 -0500 received badge  Famous Question (source)
2018-06-08 03:44:00 -0500 received badge  Notable Question (source)
2018-06-08 03:44:00 -0500 received badge  Popular Question (source)
2018-05-22 15:03:58 -0500 commented question stack.sh stuck on get_random_port

It finally went ahead; it turns out that lsof itself takes a very long time to execute, even when I run it manually. So although the sleep is only 1 second, lsof itself takes several minutes to return.

2018-05-22 15:02:32 -0500 received badge  Popular Question (source)
2018-05-22 01:26:10 -0500 asked a question g-api did not start while running stack.sh

I am trying to set up devstack, and it throws this error:

+functions-common:_run_under_systemd:1481  sudo systemctl enable devstack@g-api.service
+functions-common:_run_under_systemd:1482  sudo systemctl start devstack@g-api.service
+functions-common:run_process:1512         time_stop run_process
+functions-common:time_stop:2254           local name
+functions-common:time_stop:2255           local end_time
+functions-common:time_stop:2256           local elapsed_time
+functions-common:time_stop:2257           local total
+functions-common:time_stop:2258           local start_time
+functions-common:time_stop:2260           name=run_process
+functions-common:time_stop:2261           start_time=1526969688542
+functions-common:time_stop:2263           [[ -z 1526969688542 ]]
++functions-common:time_stop:2266           date +%s%3N
+functions-common:time_stop:2266           end_time=1526969689805
+functions-common:time_stop:2267           elapsed_time=1263
+functions-common:time_stop:2268           total=3482
+functions-common:time_stop:2270           _TIME_START[$name]=
+functions-common:time_stop:2271           _TIME_TOTAL[$name]=4745
+lib/glance:start_glance:350               echo 'Waiting for g-api (135.207.223.61) to start...'
Waiting for g-api (135.207.223.61) to start...
+lib/glance:start_glance:351               wait_for_service 60 http://135.207.223.61/image
+functions:wait_for_service:428            local timeout=60
+functions:wait_for_service:429            local url=http://135.207.223.61/image
+functions:wait_for_service:430            local rval=0
+functions:wait_for_service:431            time_start wait_for_service
+functions-common:time_start:2240          local name=wait_for_service
+functions-common:time_start:2241          local start_time=
+functions-common:time_start:2242          [[ -n '' ]]
++functions-common:time_start:2245          date +%s%3N
+functions-common:time_start:2245          _TIME_START[$name]=1526969689916
+functions:wait_for_service:432            timeout 60 bash -x
++::                                        curl -g -k --noproxy '*' -s -o /dev/null -w '%{http_code}' http://135.207.223.61/image
+::                                        [[ 503 == 503 ]]
+::                                        sleep 1
++::                                        curl -g -k --noproxy '*' -s -o /dev/null -w '%{http_code}' http://135.207.223.61/image
+::                                        [[ 503 == 503 ]]
+::                                        sleep 1
.
.
. (a bunch of same lines and then some)
.

+functions-common:time_stop:2267           elapsed_time=60104
+functions-common:time_stop:2268           total=134
+functions-common:time_stop:2270           _TIME_START[$name]=
+functions-common:time_stop:2271           _TIME_TOTAL[$name]=60238
+functions:wait_for_service:438            return 124
+lib/glance:start_glance:352               die 352 'g-api did not start'
+functions-common:die:187                  local exitcode=0
[Call Trace]
./stack.sh:1240:start_glance
/opt/stack/devstack/lib/glance:352:die
[ERROR] /opt/stack/devstack/lib/glance:352 g-api did not start
Error on exit
nova-compute(1716): Operation not permitted
neutron-openvswitch-agent(1724): Operation not permitted
neutron-openvswitch-agent: no process found
cinder-volume(1693): Operation not permitted
which tells me that the glance-api service is not running. So I take the command from the second line of the log above and execute it manually: sudo systemctl start devstack@g-api.service. It then runs, and I can check its status:

● devstack@g-api.service - Devstack devstack@g-api.service
   Loaded: loaded (/etc/systemd/system/devstack@g-api.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-05-22 01:20:56 EDT; 1h 0min ago
 Main PID: 11240 (uwsgi)
   Status: "uWSGI is ready"
   CGroup: /system.slice/system-devstack.slice/devstack@g-api.service
           ├─11240 glance-api uWSGI master
           ├─11242 glance-api uWSGI worker
           ├─11243 glance-api uWSGI worker
           ├─11244 glance-api uWSGI worker
           └─11245 glance-api uWSGI worker

May 22 01:20:59 agave021 devstack@g-api.service[11240]: DEBUG glance.common.config [-] oslo_policy.remote_ssl_client_key_file = None {{(pid=11243) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2908}}
May 22 01:20:59 agave021 devstack@g-api.service[11240]: DEBUG glance.common.config [-] oslo_policy.remote_ssl_verify_server_crt = False {{(pid=11243) log_opt_values /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py:2908}}
May ...
(more)
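Since the unit starts fine by hand, the next thing I would check is what happened to it during stack.sh itself (a sketch; the IP is the HOST_IP from my config):

```shell
# Why did the unit fail, or never come up, while stack.sh was waiting on it?
sudo journalctl -u devstack@g-api.service --no-pager | tail -n 50
# Confirm the endpoint now answers instead of returning 503:
curl -s -o /dev/null -w '%{http_code}\n' http://135.207.223.61/image
```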
2018-05-21 16:57:45 -0500 marked best answer Invalid Shared Storage error on live-migration with GlusterFS

I have an OpenStack cluster with 1 controller and 2 compute nodes at CloudLab (https://www.cloudlab.us/), and I am trying to enable live-migration with GlusterFS shared storage, following the steps listed here: http://yongluo2013.github.io/openstack-vm-live-migration/. I have set up the GlusterFS volume, and it is mounted at /var/lib/nova/instances.

root@cp-2:/users/Jasim9# gluster volume info

Volume Name: nova-gluster-vol
Type: Replicate
Volume ID: ae2baad8-0363-4391-8f24-86593648f7f9
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: cp-1:/brick1/sdb
Brick2: cp-2:/brick1/sdb
Options Reconfigured:
performance.readdir-ahead: on

as can be seen in the df output on the compute nodes:

root@cp-1:/users/Jasim9# df -hT
Filesystem                                     Type            Size  Used Avail Use% Mounted on
udev                                           devtmpfs        126G     0  126G   0% /dev
tmpfs                                          tmpfs            26G   34M   26G   1% /run
/dev/sda1                                      ext3             16G  2.9G   12G  20% /
tmpfs                                          tmpfs           126G     0  126G   0% /dev/shm
tmpfs                                          tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs                                          tmpfs           126G     0  126G   0% /sys/fs/cgroup
ops.clemson.cloudlab.us:/proj/live-migrate-PG0 nfs             100G  512K  100G   1% /proj/live-migrate-PG0
ops.clemson.cloudlab.us:/share                 nfs              97G  1.2G   88G   2% /share
tmpfs                                          tmpfs            26G     0   26G   0% /run/user/0
cgmfs                                          tmpfs           100K     0  100K   0% /run/cgmanager/fs
/dev/sda4                                      ext4            896G   73M  850G   1% /var/lib/nova
tmpfs                                          tmpfs            26G     0   26G   0% /run/user/20001
/dev/sdb                                       xfs             932G  865M  931G   1% /brick1
cp-1:/nova-gluster-vol                         fuse.glusterfs  932G  865M  931G   1% /var/lib/nova/instances

root@cp-2:/users/Jasim9# df -hT
Filesystem                                     Type            Size  Used Avail Use% Mounted on
udev                                           devtmpfs        126G     0  126G   0% /dev
tmpfs                                          tmpfs            26G   34M   26G   1% /run
/dev/sda1                                      ext3             16G  2.9G   12G  20% /
tmpfs                                          tmpfs           126G     0  126G   0% /dev/shm
tmpfs                                          tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs                                          tmpfs           126G     0  126G   0% /sys/fs/cgroup
ops.clemson.cloudlab.us:/proj/live-migrate-PG0 nfs             100G  512K  100G   1% /proj/live-migrate-PG0
ops.clemson.cloudlab.us:/share                 nfs              97G  1.2G   88G   2% /share
tmpfs                                          tmpfs            26G     0   26G   0% /run/user/0
cgmfs                                          tmpfs           100K     0  100K   0% /run/cgmanager/fs
/dev/sda4                                      ext4            896G   73M  850G   1% /var/lib/nova
tmpfs                                          tmpfs            26G     0   26G   0% /run/user/20001
/dev/sdb                                       xfs             932G  885M  931G   1% /brick1
cp-2:/nova-gluster-vol                         fuse.glusterfs  932G  885M  931G   1% /var/lib/nova/instances

I have added the necessary settings to /etc/libvirt/libvirtd.conf on both compute nodes:

listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

and added

# options passed to libvirtd, add "-l" to listen on tcp
libvirtd_opts="-l"

to /etc/default/libvirt-bin, and this flag in /etc/nova/nova.conf on both compute nodes:

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

I have created two VM instances (test2 on cp-1, test on cp-2):

root@ctl:/users/Jasim9# nova list
+--------------------------------------+-------+--------+------------+-------------+----------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                   |
+--------------------------------------+-------+--------+------------+-------------+----------------------------+
| a9e80a50-f849-44e9-9e04-2a5a89ed7f43 | test  | ACTIVE | -          | Running     | flat-lan-1-net=10.11.10.10 |
| b0de71ab-75df-4c21-9648-e69c47bd43e2 | test2 | ACTIVE | -          | Running     | flat-lan-1-net=10.11.10.11 |
+--------------------------------------+-------+--------+------------+-------------+----------------------------+

but when trying to live-migrate them to the other host ... (more)
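(The truncated part above is the live-migration attempt itself.) For reference, a sketch of that step plus a quick manual shared-storage check; the marker file name is made up:

```shell
# Attempt the live migration with the legacy nova CLI used elsewhere here:
nova live-migration test2 cp-2

# Manual shared-storage sanity check: a file created in one compute node's
# instances directory should be visible on the other if the Gluster mount
# really is shared.
ssh cp-1 'touch /var/lib/nova/instances/.shared-check'
ssh cp-2 'ls /var/lib/nova/instances/.shared-check'
```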

2018-05-21 16:56:52 -0500 received badge  Famous Question (source)
2018-05-21 16:51:26 -0500 asked a question stack.sh stuck on get_random_port

I am trying to install devstack on my Ubuntu 16.04 machine; here is my local.conf file:

[[local|localrc]]
HOST_IP=135.207.223.61
FLAT_INTERFACE=eth0
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=135.207.223.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=labstack
DATABASE_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
PIP_UPGRADE=True

When I run stack.sh it goes through a bunch of stuff but then it seems to get stuck here:

2018-05-21 21:31:40.228 | +lib/glance:configure_glance:229           cp -p /opt/stack/glance/etc/policy.json /etc/glance/policy.json
2018-05-21 21:31:40.237 | +lib/glance:configure_glance:230           cp -p /opt/stack/glance/etc/schema-image.json /etc/glance/schema-image.json
2018-05-21 21:31:40.247 | +lib/glance:configure_glance:232           cp -p /opt/stack/glance/etc/metadefs/cim-processor-allocation-setting-data.json /opt/stack/glance/etc/metadefs/cim-resource-allocation-setting-data.json /opt/stack/glance/etc/metadefs/cim-storage-allocation-setting-data.json /opt/stack/glance/etc/metadefs/cim-virtual-system-setting-data.json /opt/stack/glance/etc/metadefs/compute-aggr-disk-filter.json /opt/stack/glance/etc/metadefs/compute-aggr-iops-filter.json /opt/stack/glance/etc/metadefs/compute-aggr-num-instances.json /opt/stack/glance/etc/metadefs/compute-cpu-pinning.json /opt/stack/glance/etc/metadefs/compute-guest-memory-backing.json /opt/stack/glance/etc/metadefs/compute-guest-shutdown.json /opt/stack/glance/etc/metadefs/compute-host-capabilities.json /opt/stack/glance/etc/metadefs/compute-hypervisor.json /opt/stack/glance/etc/metadefs/compute-instance-data.json /opt/stack/glance/etc/metadefs/compute-libvirt-image.json /opt/stack/glance/etc/metadefs/compute-libvirt.json /opt/stack/glance/etc/metadefs/compute-quota.json /opt/stack/glance/etc/metadefs/compute-randomgen.json /opt/stack/glance/etc/metadefs/compute-trust.json /opt/stack/glance/etc/metadefs/compute-vcputopology.json /opt/stack/glance/etc/metadefs/compute-vmware-flavor.json /opt/stack/glance/etc/metadefs/compute-vmware.json /opt/stack/glance/etc/metadefs/compute-vmware-quota-flavor.json /opt/stack/glance/etc/metadefs/compute-watchdog.json /opt/stack/glance/etc/metadefs/compute-xenapi.json /opt/stack/glance/etc/metadefs/glance-common-image-props.json /opt/stack/glance/etc/metadefs/image-signature-verification.json /opt/stack/glance/etc/metadefs/operating-system.json /opt/stack/glance/etc/metadefs/software-databases.json /opt/stack/glance/etc/metadefs/software-runtimes.json /opt/stack/glance/etc/metadefs/software-webservers.json /opt/stack/glance/etc/metadefs/storage-volume-type.json /etc/glance/metadefs
2018-05-21 21:31:40.259 | +lib/glance:configure_glance:234           is_service_enabled tls-proxy
2018-05-21 21:31:40.302 | +functions-common:is_service_enabled:1911  return 1
2018-05-21 21:31:40.309 | +lib/glance:configure_glance:242           [[ uwsgi == \u\w\s\g\i ]]
2018-05-21 21:31:40.316 | +lib/glance:configure_glance:243           write_local_uwsgi_http_config /etc/glance/glance-uwsgi.ini /usr/local/bin/glance-wsgi-api /image
2018-05-21 21:31:40.323 | +lib/apache:write_local_uwsgi_http_config:301  local file=/etc/glance/glance-uwsgi.ini
2018-05-21 21:31:40.330 | +lib/apache:write_local_uwsgi_http_config:302  local wsgi=/usr/local/bin/glance-wsgi-api
2018-05-21 21:31:40.337 | +lib/apache:write_local_uwsgi_http_config:303  local url=/image
2018-05-21 21:31:40.347 | ++lib/apache:write_local_uwsgi_http_config:304  basename /usr/local/bin/glance-wsgi-api
2018-05-21 21:31:40.354 | +lib/apache:write_local_uwsgi_http_config:304  name=glance-wsgi-api
2018-05-21 21:31:40.361 | +lib/apache:write_local_uwsgi_http_config:310  rm -rf /etc/glance/glance-uwsgi.ini
2018-05-21 21:31:40.370 | +lib/apache:write_local_uwsgi_http_config:311  iniset ...
(more)
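As my later comment on this question notes, the culprit turned out to be lsof itself: get_random_port loops over lsof looking for a free port, so a slow lsof makes each iteration crawl. The hang can be reproduced outside devstack:

```shell
# If this takes minutes rather than milliseconds, stack.sh will appear hung:
time sudo lsof -i -P -n >/dev/null
```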
2018-04-18 07:00:24 -0500 received badge  Famous Question (source)
2018-03-27 18:39:08 -0500 received badge  Scholar (source)
2018-03-27 18:39:06 -0500 received badge  Supporter (source)
2018-03-27 14:55:49 -0500 received badge  Notable Question (source)
2018-03-26 16:00:04 -0500 commented answer What happens if you delete /var/lib/nova/instances/_base/* ?

The compute log contains the entries that your log contained. But what does this mean? I tried this after dumping some data onto the instance, deleting _base, and then doing a cold-migrate to another host; the data was preserved, which I thought would not happen.

2018-03-24 02:15:30 -0500 commented answer What happens if you delete /var/lib/nova/instances/_base/* ?

Interesting. After the example you gave, I did the same thing to try out how it works on my system. I am using the Mitaka release, and when I deleted the _base image and did openstack server reboot, it did not give me an error; instead it just created/copied a new _base image, I assume from Glance.

2018-03-23 23:31:03 -0500 received badge  Popular Question (source)
2018-03-22 21:29:01 -0500 commented answer What happens if you delete /var/lib/nova/instances/_base/* ?

So in OpenStack, if I wanted to disable caching of images for rebuild and migrate, and force it to copy the image from Glance every time, what would I have to do instead of deleting the contents of the _base directory?

2018-03-22 16:25:57 -0500 asked a question What happens if you delete /var/lib/nova/instances/_base/* ?

I am trying to measure the migration-time and rebuild-time differences when images are not cached on the compute node, and I was wondering if it is safe to delete the contents of /var/lib/nova/instances/_base/.

I tried doing it, and it did not mess up the instance that was running on my cluster; I rebooted the VM and it came back up just fine. The image was also not re-downloaded from Glance to the _base folder on the compute node.

Also, when exactly are the images in _base used? (Only for rebuild/migration?)
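For context, nova's periodic image-cache manager is the supported way to control _base, rather than deleting files by hand. Roughly, with Mitaka-era option names (placement and defaults may vary by release, so verify against your release's configuration reference):

```ini
# /etc/nova/nova.conf
[DEFAULT]
# how often (seconds) the cache manager runs on each compute node
image_cache_manager_interval = 2400
# remove cached base images that no longer back any instance
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 0
```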

2018-02-24 05:45:00 -0500 received badge  Student (source)
2018-02-23 01:13:50 -0500 received badge  Famous Question (source)
2018-02-22 15:23:35 -0500 received badge  Famous Question (source)
2018-02-19 23:13:13 -0500 received badge  Notable Question (source)
2018-02-19 17:44:11 -0500 received badge  Popular Question (source)
2018-02-19 15:04:19 -0500 commented answer nova migrate on shared storage

That does not answer my question; I am not dealing with live migration.

2018-02-17 20:29:41 -0500 asked a question nova migrate on shared storage

I am trying to find out if there is a difference in migrate behavior when the underlying instances are on shared storage or not. This link https://openstack.nimeyo.com/91609/openstack-operators-nova-resize-on-shared-storage states that OpenStack should check for shared storage, but in my experiments I have not seen a difference in migration time.

I am using the Mitaka release with GlusterFS for shared storage. Is there a flag or some configuration option that needs to be set in nova.conf that tells migrate to use shared storage?

To be clear, I am not talking about live-migration, I am talking about cold migration (shut-off and restart).

2018-01-09 23:59:20 -0500 received badge  Famous Question (source)
2018-01-09 23:59:20 -0500 received badge  Notable Question (source)
2018-01-09 23:59:20 -0500 received badge  Popular Question (source)
2018-01-05 00:35:00 -0500 edited question Reset power state of instances

I was trying out concurrent live-migrations of instances in OpenStack, and the VMs got stuck in the migrating state, so I restarted nova-compute and libvirt-bin on the compute node they were running on. They then went into NOSTATE, as shown below:

Jasim9@ctl:~/vnf-rehoming$ nova list
+--------------------------------------+------------+--------+------------+-------------+----------------------------------------------------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks                                                       |
+--------------------------------------+------------+--------+------------+-------------+----------------------------------------------------------------+
| f4046097-1132-4bf9-96b1-75fa6716de95 | Firewall   | ACTIVE | -          | NOSTATE     | mnic-2=10.10.2.2; mnic-3=10.10.3.3; flat-lan-1-net=10.11.10.12 |
| 1a2b194c-f43f-46d2-a32c-e50a286bb465 | Firewall_2 | ACTIVE | -          | NOSTATE     | mnic-2=10.10.2.4; mnic-3=10.10.3.4; flat-lan-1-net=10.11.10.14 |
| 048438ba-8e56-4fc4-adfb-37fdb5bd0849 | Firewall_3 | ACTIVE | -          | NOSTATE     | mnic-2=10.10.2.5; mnic-3=10.10.3.5; flat-lan-1-net=10.11.10.15 |
| c01a7da6-9a1f-4255-91a0-c7e034e1dc48 | Firewall_4 | ACTIVE | -          | NOSTATE     | mnic-2=10.10.2.6; mnic-3=10.10.3.6; flat-lan-1-net=10.11.10.16 |
| f4efc768-711d-4c7d-b539-1a641228c51c | Firewall_5 | ACTIVE | -          | NOSTATE     | mnic-2=10.10.2.7; mnic-3=10.10.3.7; flat-lan-1-net=10.11.10.17 |
| 13c5eeac-e180-4af7-bc79-117e638c2ea9 | Router2    | ACTIVE | -          | Running     | mnic-1=10.10.1.2; mnic-2=10.10.2.3; flat-lan-1-net=10.11.10.13 |
| 5f0702ab-cb61-4235-bd82-475becc55e66 | vClient    | ACTIVE | -          | Running     | mnic-1=10.10.1.3; flat-lan-1-net=10.11.10.10                   |
| 3ed2e93d-1e8f-40be-8dbf-7a1a7b3cdd32 | vServer    | ACTIVE | -          | Running     | mnic-3=10.10.3.2; flat-lan-1-net=10.11.10.11                   |
+--------------------------------------+------------+--------+------------+-------------+----------------------------------------------------------------+

I tried reset-state --active and some other commands, but cannot seem to get it to work. The VMs are actually running, and I can ssh into them and everything. I just cannot run any nova commands on them, as I get this error:

ERROR (Conflict): Cannot 'os-migrateLive' instance f4046097-1132-4bf9-96b1-75fa6716de95 while it is in power_state 0 (HTTP 409)

Update: Stopping and starting the instances does solve the problem, but I would prefer an approach where I do not have to disrupt the instances (no shutdown).
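For the record, the least-disruptive sequence I know of (hedged: reset-state only fixes the vm/task state, while power_state is refreshed by nova-compute's periodic sync):

```shell
# Reset the task/vm state back to ACTIVE (does not touch power_state):
nova reset-state --active f4046097-1132-4bf9-96b1-75fa6716de95
# power_state should then be corrected by the periodic power-state sync on
# the compute node; restarting nova-compute there forces a refresh without
# touching the guests.
sudo service nova-compute restart
```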