Ask Your Question's profile - activity

2020-04-05 22:06:26 -0500 received badge  Famous Question (source)
2018-05-16 08:54:33 -0500 received badge  Favorite Question (source)
2018-02-07 15:03:10 -0500 received badge  Popular Question (source)
2018-02-07 15:03:10 -0500 received badge  Famous Question (source)
2018-02-07 15:03:10 -0500 received badge  Notable Question (source)
2017-10-28 23:30:49 -0500 received badge  Notable Question (source)
2017-09-27 15:57:33 -0500 received badge  Student (source)
2017-09-27 15:57:32 -0500 received badge  Popular Question (source)
2017-09-11 15:02:33 -0500 asked a question add ironic to a running openstack (ocata)

Hi guys,

I have an OpenStack Ocata installation running, with one controller node, one compute node, and Neutron and Cinder on the controller node. I'm trying to add Ironic to this running deployment. But when I add Ironic and configure the ironic driver in nova.conf on the controller and compute nodes, in order to integrate Ironic with the Compute service, the running QEMU compute hypervisor is replaced by another hypervisor instance of type "ironic". At this point the baremetal service runs perfectly, but I can no longer deploy VMs on the previously running hypervisor.

I followed this documentation for the Ocata version. Please pay attention to the note: it says that the ironic driver must be configured on both the controller and compute nodes. (Is this true?)

Is there any configuration that lets the QEMU hypervisor type and the ironic hypervisors for the baremetal service run at the same time? Or is there any guide I can follow as an example of the right way to do this?

The goal is to deploy both VMs and baremetal images from a single OpenStack installation with only one controller instance.
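(For what it's worth, a common pattern, sketched here as an assumption rather than taken from the linked docs, is to leave the existing compute node's driver untouched and instead run a second, dedicated nova-compute service just for Ironic. Only that service gets the ironic driver, so the QEMU hypervisor keeps working. Hostnames and credentials below are placeholders.)

```ini
# /etc/nova/nova.conf for a SEPARATE nova-compute service dedicated to Ironic
# (e.g. running on the controller). The original compute node keeps the
# default libvirt/QEMU driver and is not modified.
[DEFAULT]
compute_driver = ironic.IronicDriver

[ironic]
# Keystone credentials for the Ironic service user (placeholder values)
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
username = ironic
password = IRONIC_PASS
```

With this layout, `openstack hypervisor list` should show both the QEMU node and the ironic node(s), and baremetal flavors can be steered to the ironic hosts via host aggregates or resource properties.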

Any suggestion, correction or recommendation is appreciated.

Thanks friends! :)

2017-08-27 00:26:26 -0500 asked a question nova slow api calls

Hi all, I am playing with OpenStack Ocata, and from the first moment the Horizon interface and the API calls have felt very slow (10-20 seconds per call). This OpenStack has only 56 VMs registered (all stopped); it is a small installation. The controller is a physical server with 8 cores and 16 GB of RAM (12 GB free), installed with CentOS 7.3.

I have configured memcached with Keystone, and I have been troubleshooting MariaDB slow queries (no SQL queries take longer than 30 ms).

When I launch the command "time openstack server list --timing" I can see:

| URL                                            |  Seconds |
| GET http://controller:35357/v3/                | 0.019095 |
| POST http://controller:35357/v3/auth/tokens    | 0.224442 |
| POST http://controller:35357/v3/auth/tokens    | 0.225134 |
| GET http://controller:8774/v2.1/servers/detail | 4.503286 |
| Total                                          | 4.971957 |

real    0m14.398s
user    0m8.742s
sys 0m0.461s

Where does the extra time (approx. 10 seconds) that the command takes go? Could someone point me in the right direction to troubleshoot this issue?
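As a sanity check on the numbers above: the gap between the API time reported by `--timing` and the wall-clock `real` time is spent on the client side (CLI startup, plugin loading, output formatting), not on the server. A quick sketch with the reported figures:

```python
# Per-request timings reported by `openstack server list --timing` (table above)
api_calls = [
    0.019095,  # GET  http://controller:35357/v3/
    0.224442,  # POST http://controller:35357/v3/auth/tokens
    0.225134,  # POST http://controller:35357/v3/auth/tokens
    4.503286,  # GET  http://controller:8774/v2.1/servers/detail
]

api_total = sum(api_calls)   # matches the reported Total of 4.971957 s
wall_clock = 14.398          # "real" time from the shell

# Time not spent in HTTP requests is client-side overhead
client_overhead = wall_clock - api_total
print(f"API time:        {api_total:.3f} s")
print(f"Client overhead: {client_overhead:.3f} s")
```

Note that `user` + `sys` in the output above already adds up to about 9.2 s of CPU time, which accounts for most of that gap and points at client-side Python work rather than server latency.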

Warm regards!