
yee379's profile - activity

2018-08-02 08:02:48 -0600 received badge  Famous Question (source)
2018-04-24 05:28:30 -0600 received badge  Famous Question (source)
2018-04-20 05:28:11 -0600 received badge  Nice Question (source)
2017-03-29 04:16:18 -0600 received badge  Famous Question (source)
2017-03-29 04:16:18 -0600 received badge  Notable Question (source)
2017-02-06 07:56:57 -0600 received badge  Popular Question (source)
2017-02-06 07:56:57 -0600 received badge  Notable Question (source)
2017-02-06 07:56:57 -0600 received badge  Famous Question (source)
2016-12-14 08:09:06 -0600 received badge  Notable Question (source)
2016-12-08 17:46:32 -0600 received badge  Famous Question (source)
2016-10-28 08:29:56 -0600 received badge  Famous Question (source)
2016-09-29 05:10:01 -0600 received badge  Famous Question (source)
2016-09-17 15:30:23 -0600 received badge  Famous Question (source)
2016-09-16 03:33:05 -0600 received badge  Famous Question (source)
2016-09-15 11:40:11 -0600 received badge  Notable Question (source)
2016-09-15 11:40:11 -0600 received badge  Famous Question (source)
2016-09-13 18:36:08 -0600 received badge  Notable Question (source)
2016-09-12 11:27:41 -0600 received badge  Notable Question (source)
2016-09-09 05:47:39 -0600 received badge  Popular Question (source)
2016-09-08 18:37:22 -0600 asked a question kolla multinode deployment issue with http+docker://localunixsocket/v1.24/images/create

I'm attempting a Kolla v1.1.2 (liberty/centos/binary) multinode deployment; however, I get this:

TASK: [common | Starting heka container] **************************************
failed: [neb0010.slac.stanford.edu] => {"changed": true, "failed": true}
msg: APIError(HTTPError(u'500 Server Error: Internal Server Error for url: http+docker://localunixsocket/v1.24/images/create?tag=1.1.2&fromImage=myregistrydomain.com%3A4000%2FSLAC-OCIO%2Fcentos-binary-heka',),)
failed: [neb0009.slac.stanford.edu] => {"changed": true, "failed": true}
msg: APIError(HTTPError(u'500 Server Error: Internal Server Error for url: http+docker://localunixsocket/v1.24/images/create?tag=1.1.2&fromImage=myregistrydomain.com%3A4000%2FSLAC-OCIO%2Fcentos-binary-heka',),)
failed: [neb0008.slac.stanford.edu] => {"changed": true, "failed": true}
msg: APIError(HTTPError(u'500 Server Error: Internal Server Error for url: http+docker://localunixsocket/v1.24/images/create?tag=1.1.2&fromImage=myregistrydomain.com%3A4000%2FSLAC-OCIO%2Fcentos-binary-heka',),)

FATAL: all hosts have already failed -- aborting

I've set up my registry with a self-signed cert, and I can push images to it with kolla-build; I can also manually pull them with docker after the following setup:

# cat /etc/sysconfig/docker
INSECURE_REGISTRY="--insecure-registry myregistrydomain.com:4000"

# cat /etc/hosts
172.23.67.28 myregistrydomain.com

# copy over certs
scp certs/domain.crt <NODE>:/etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt
update-ca-trust # on nodes
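
In case it's relevant, I believe docker can also trust a registry cert dropped into a per-registry directory (a sketch; I haven't verified this fixes the 500):

# on each node, as an alternative to --insecure-registry
mkdir -p /etc/docker/certs.d/myregistrydomain.com:4000
cp certs/domain.crt /etc/docker/certs.d/myregistrydomain.com:4000/ca.crt
systemctl restart docker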

Any help welcome!

2016-08-31 11:38:06 -0600 received badge  Notable Question (source)
2016-08-31 07:41:37 -0600 received badge  Notable Question (source)
2016-08-31 00:29:58 -0600 received badge  Popular Question (source)
2016-08-31 00:29:23 -0600 received badge  Popular Question (source)
2016-08-30 19:10:43 -0600 asked a question kolla liberty (1.1.2) install fail

I'm attempting an all-in-one Kolla install with openstack_release: "1.1.2".

However, kolla-ansible deploy fails with:

TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
fatal: [localhost]: FAILED! => {"failed": true, "msg": "The conditional check '{{ (keystone_bootstrap.stdout | from_json).changed }}' failed. The error was: No JSON object could be decoded"}
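
To see the raw stdout that failed to parse as JSON, I suppose I could re-run with extra Ansible verbosity (a sketch, assuming kolla-ansible passes extra flags through to ansible-playbook):

kolla-ansible deploy -vvv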

Any suggestions?

2016-08-30 16:33:12 -0600 asked a question kolla all-in-one failed with openvswitch-db container

I'm following the Kolla quickstart guide for an all-in-one deployment. I edited globals.yml to include openstack_release: "2.0.2", and when I attempt kolla-ansible deploy I see:

TASK [neutron : Starting openvswitch-db container] *****************************
fatal: [localhost]: FAILED! => {"changed": true, "failed": true} msg:

Traceback (most recent call last):
  File "/tmp/ansible_jCAtYN/ansible_module_kolla_docker.py", line 737, in main
    result = bool(getattr(dw, module.params.get('action'))())
  File "/tmp/ansible_jCAtYN/ansible_module_kolla_docker.py", line 569, in start_container
    self.dc.start(container=self.params.get('name'))
  File "/usr/lib/python2.7/site-packages/docker/utils/decorators.py", line 21, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/docker/api/container.py", line 363, in start
    self._raise_for_status(res)
  File "/usr/lib/python2.7/site-packages/docker/client.py", line 146, in _raise_for_status
    raise errors.APIError(e, response, explanation=explanation)
APIError: 500 Server Error: Internal Server Error ("{"message":"Container 4d431e8fa6128a72c2854941d30e941e560dea5586b8b85c769b66015d883021 is already active"}")

Looking at docker ps shows:

CONTAINER ID        IMAGE                                             COMMAND                 CREATED             STATUS                      PORTS               NAMES
4d431e8fa612        kolla/centos-binary-openvswitch-db-server:2.0.2   "kolla_start"           20 hours ago        Exited (127) 20 hours ago                       openvswitch_db

And docker logs shows:

INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
INFO:__main__:Validating config file
INFO:__main__:Copying service configuration files
INFO:__main__:Writing out command to execute
Running command: 'start-ovsdb-server 134.79.196.85 '
/usr/local/bin/kolla_start: line 24: exec: start-ovsdb-server: not found
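
Presumably I could remove the exited container and re-run the deploy, though that won't explain the missing start-ovsdb-server script (a sketch):

docker rm -f openvswitch_db
kolla-ansible deploy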

Any ideas?

2016-08-18 12:18:43 -0600 received badge  Notable Question (source)
2016-08-18 12:18:43 -0600 received badge  Popular Question (source)
2016-08-18 12:18:43 -0600 received badge  Famous Question (source)
2016-08-17 09:20:34 -0600 received badge  Popular Question (source)
2016-08-16 01:44:48 -0600 asked a question vm instance stuck in migrating status

I have live block migration working in my RDO Liberty deployment. I do not have shared storage and use the local disks for /var/lib/nova.

As I have some rather large (disk) instances, I set live_migration_progress_timeout to 0.

Unfortunately, this seems to have been a bad idea: I was manually live-migrating a couple of instances off the same hypervisor at the same time. Other instances, migrated one at a time, took about 5 minutes; of these two, one gave up after about 20 minutes, and the other has been stuck in MIGRATING status (per nova) for about 6 hours now.

Is there a way I can just cancel the migration?
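
For reference, what I'm considering trying (a sketch; the libvirt domain name below is hypothetical, and I'd inspect the job before aborting it):

# on the source hypervisor: inspect, then abort, the libvirt migration job
virsh domjobinfo instance-0000abcd
virsh domjobabort instance-0000abcd
# then push the nova record back to ACTIVE from the controller
nova reset-state --active <INSTANCE_UUID>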

2016-08-12 20:05:37 -0600 commented answer live-migration unable to execute QEMU command 'migrate'

I pulled the repo from http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm and did an update.
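
Roughly, that amounted to (a sketch of the commands):

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
yum update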

2016-08-11 16:47:48 -0600 asked a question can i have a new compute node come up as disabled by default?

When I build a new compute node, I would like the new hypervisor to come up with its compute service disabled so that new VMs don't land there; i.e., I want to enable it manually the first time it comes up.
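
I'm imagining something like this in nova.conf on the controllers (a sketch; enable_new_services is the option I have in mind, but I haven't confirmed its behaviour), followed by a manual enable once the node checks out:

[DEFAULT]
# if this works as hoped, newly registered services start out disabled
enable_new_services = False

nova service-enable <HOSTNAME> nova-compute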

Is this possible?

2016-08-02 00:06:08 -0600 received badge  Famous Question (source)
2016-07-27 07:48:43 -0600 received badge  Famous Question (source)
2016-07-10 13:12:03 -0600 received badge  Popular Question (source)
2016-07-10 13:12:03 -0600 received badge  Notable Question (source)
2016-07-04 00:16:41 -0600 received badge  Notable Question (source)
2016-07-03 05:33:58 -0600 received badge  Popular Question (source)
2016-07-03 00:04:32 -0600 received badge  Popular Question (source)
2016-07-01 17:25:52 -0600 asked a question live-migration unable to execute QEMU command 'migrate'

I have set up my CentOS 7 + RDO Liberty environment and everything works fine. I would like to get block live migration working without shared storage, so I run:

nova --insecure live-migration --block-migrate  9cc0eaa5-f14b-4ed1-86d3-cfd2453958ce neb0007

results in... nothing! But looking through the nova-compute logs I see:

Live Migration failure: internal error: unable to execute QEMU command 'migrate': this feature or command is not currently supported

My /etc/nova/nova.conf contains:

[libvirt]
virt_type=kvm
block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_LIVE

My /etc/libvirt/libvirtd.conf contains:

listen_tcp=1
tcp_port="16509"
listen_addr="0.0.0.0"
listen_tls=0
auth_tcp="none"

My /etc/sysconfig/libvirtd contains:

LIBVIRTD_ARGS="--listen"

My libvirt RPMs:

# rpm -qa | grep libvirt
libvirt-client-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-daemon-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64
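
One thing I haven't ruled out (a guess): the stock CentOS qemu-kvm build may be compiled without support for this migration path, unlike the qemu-kvm-ev/qemu-kvm-rhev builds. Checking which build is installed:

rpm -qa | grep qemu-kvm    # a plain qemu-kvm-* here, rather than qemu-kvm-ev-*, would be the tell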

Any pointers?

2016-07-01 15:36:37 -0600 received badge  Notable Question (source)
2016-06-26 02:08:04 -0600 received badge  Famous Question (source)
2016-06-22 00:57:57 -0600 received badge  Commentator
2016-06-22 00:57:57 -0600 commented answer Cannot update service status on host ... since it is not registered

Had the same problem with RDO: the controller had openstack-nova-conductor-12.0.1-1.el7.noarch while new compute installs were using openstack-nova-compute-12.0.4-1.el7.noarch. Just updated the controller and restarted the service, and voilà! All good again :)
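
Roughly (a sketch of the commands; package/service name per RDO Liberty):

yum update openstack-nova-conductor
systemctl restart openstack-nova-conductor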