
SeanM's profile - activity

2018-03-22 10:42:46 -0600 received badge  Famous Question (source)
2017-10-18 02:26:57 -0600 commented answer Kolla images for Pike (with tag 5.0.0) don't exist on Docker hub registry

...and this pipeline will continue to be used for Queens and subsequent releases, meaning that we will not need to run our own build pipeline?

2017-09-06 05:10:09 -0600 received badge  Notable Question (source)
2017-09-04 02:07:47 -0600 received badge  Enthusiast
2017-09-01 23:54:35 -0600 received badge  Popular Question (source)
2017-09-01 03:56:49 -0600 asked a question Problem bringing up multiple interfaces on VM at launch

We have a problem with bringing up a VM with multiple network interfaces on our Ocata cluster.

The instance is configured with two networks at launch: the idea is that a floating IP would be attached to one, while the other would be an internal network connected to some of the other VMs. At launch time, the floating IP is obviously not yet assigned.
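For reference, a minimal sketch of how such an instance might be launched with two NICs (the flavor, image and network IDs here are placeholders, not our actual values):

# Hypothetical launch command: attach the instance to both networks at boot.
nova boot --flavor m1.small --image ubuntu-16.04 \
    --nic net-id=<external-net-uuid> \
    --nic net-id=<internal-net-uuid> \
    multi-nic-vm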

The problem is that only one of the network interfaces comes up at launch. We see this in the boot logs:

[  OK  ] Started Initial cloud-init job (pre-networking).
[  OK  ] Reached target Network (Pre).
[  OK  ] Started ifup for ens3.
         Starting Raise network interfaces...
[  OK  ] Started Raise network interfaces.
         Starting Initial cloud-init job (metadata service crawler)...
[  OK  ] Reached target Network.
[   25.495249] cloud-init[945]: Cloud-init v. 0.7.9 running 'init' at Thu, 31 Aug 2017 16:28:23 +0000. Up 7.47 seconds.
[   25.496760] cloud-init[945]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
[   25.498215] cloud-init[945]: ci-info: +--------+-------+------------------------------+---------------+-------+-------------------+
[   25.504145] cloud-init[945]: ci-info: | Device |   Up  |           Address            |      Mask     | Scope |     Hw-Address    |
[   25.505605] cloud-init[945]: ci-info: +--------+-------+------------------------------+---------------+-------+-------------------+
[   25.507004] cloud-init[945]: ci-info: |  ens4  | False |              .               |       .       |   .   | fa:16:3e:b2:97:35 |
[   25.512329] cloud-init[945]: ci-info: |  ens3  |  True |          10.0.0.12           | 255.255.255.0 |   .   | fa:16:3e:47:48:60 |
[   25.513748] cloud-init[945]: ci-info: |  ens3  |  True | fe80::f816:3eff:fe47:4860/64 |       .       |  link | fa:16:3e:47:48:60 |
[   25.516165] cloud-init[945]: ci-info: |   lo   |  True |          127.0.0.1           |   255.0.0.0   |   .   |         .         |
[   25.520162] cloud-init[945]: ci-info: |   lo   |  True |           ::1/128            |       .       |  host |         .         |
[   25.521604] cloud-init[945]: ci-info: +--------+-------+------------------------------+---------------+-------+-------------------+
[   25.524145] cloud-init[945]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++

ens4 is somehow visible, but does not come up.

I've checked the taps, veths, Linux bridges and OVS bridges, and everything looks OK. I can log into the VM and bring up the second interface manually by adding a couple of lines to 050-cloud-init.cfg:

auto ens4
iface ens4 inet dhcp

and then running ifup ens4, after which everything works OK.

I don't know whether this is by design, an issue with cloud-init, or an issue with the OpenStack config (e.g. the second network interface does not appear in time). There is also the reasonably obvious question that, if two interfaces are brought up at launch, it's not entirely clear how routing should be configured.

Any idea what I would need to do to ensure both interfaces come up at launch?
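For what it's worth, a minimal sketch of automating the manual workaround above via user-data at launch (an assumption on my part rather than a confirmed fix; it assumes a Debian/Ubuntu guest using ifupdown, and the file name is illustrative):

#cloud-config
# Hypothetical user-data: write an ifupdown stanza for the second NIC and
# bring it up on first boot, mirroring the manual steps described above.
write_files:
  - path: /etc/network/interfaces.d/60-ens4.cfg
    content: |
      auto ens4
      iface ens4 inet dhcp
runcmd:
  - [ ifup, ens4 ]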

2015-07-22 09:22:44 -0600 received badge  Famous Question (source)
2015-07-15 12:16:16 -0600 received badge  Popular Question (source)
2015-07-15 12:16:16 -0600 received badge  Notable Question (source)
2015-07-15 12:16:16 -0600 received badge  Famous Question (source)
2015-05-16 07:19:25 -0600 received badge  Notable Question (source)
2015-04-30 04:56:59 -0600 answered a question Why do I get rabbitmq broken pipe errors in my nova-api logs

Found the explanation for the error here:

https://bugs.launchpad.net/mos/+bug/1...

2015-04-30 03:40:34 -0600 received badge  Popular Question (source)
2015-04-30 03:21:08 -0600 received badge  Scholar (source)
2015-04-29 09:40:14 -0600 received badge  Editor (source)
2015-04-29 06:56:46 -0600 received badge  Student (source)
2015-04-29 06:53:35 -0600 received badge  Supporter (source)
2015-04-29 06:52:58 -0600 asked a question What are typical numbers of nova-api processes on a controller?

We are looking at the behaviour of our system. We have noticed some unexpected behaviour in RabbitMQ, and this has prompted us to look at nova-api.

In our system we see 73 nova-api processes: one parent and 72 children. We believe this number has been slowly but steadily increasing over the space of a couple of months. If we stop the service and restart it, we get the same number of processes.

Surprisingly, osapi_compute_workers in nova.conf is 24 - we would have expected the number of nova-api processes to peak at around that value.

So, my questions are:

  • is 73 nova-api processes common/typical? (yes, a bit of a 'how long is a piece of string' question, I know)
  • should the osapi_compute_workers parameter control the total number of nova-api processes? (see the config sketch below)

We are running Icehouse installed with Mirantis Fuel 5.1.
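One possible explanation worth checking (an assumption, not something confirmed here) is that nova-api starts a separate group of workers for each enabled API (EC2, compute and metadata), each controlled by its own option, so the total child count can be roughly the sum of the three rather than osapi_compute_workers alone. A hypothetical nova.conf sketch with illustrative values:

# nova.conf (Icehouse) -- illustrative values only.
# One parent process plus 24 + 24 + 24 = 72 children would give 73 in total.
[DEFAULT]
enabled_apis = ec2,osapi_compute,metadata
ec2_workers = 24
osapi_compute_workers = 24
metadata_workers = 24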

2015-04-24 11:49:45 -0600 commented question Why do I get rabbitmq broken pipe errors in my nova-api logs

And now I see that at a certain point in time 16 connections are closed (all at the same time), and a few minutes later those 16 connections are opened again; then some time later (usually 1-2 hours) 16 connections are closed again, and a few minutes after that 16 connections are created again.

2015-04-24 11:49:45 -0600 commented question Why do I get rabbitmq broken pipe errors in my nova-api logs

More digging on this shows that the RabbitMQ connections are closed due to heartbeat timeouts. The heartbeat timeout for the RabbitMQ process is configured to the default 580 seconds. However, it is not clear that the connection negotiation actually results in this timeout being applied to the connection.
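A quick way to check what was actually negotiated (a sketch; the available column names can vary slightly between RabbitMQ versions) is to list the heartbeat ("timeout") value per connection on the broker:

# Show the negotiated heartbeat interval (in seconds) for each AMQP connection.
rabbitmqctl list_connections name peer_host peer_port timeout state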

2015-04-24 11:49:45 -0600 asked a question Why do I get rabbitmq broken pipe errors in my nova-api logs

I note that in my nova-api logs I get frequent broken pipe ERRORs when communicating with RabbitMQ. The problem seems to be associated with a heartbeat mechanism and seems to occur roughly every couple of hours.

<179>Apr 23 06:51:55 node-1 nova-api 2015-04-23 08:51:55.203 3598 ERROR oslo.messaging._drivers.impl_rabbit [-] [Errno 32] Broken pipe
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 849, in connection_maintenance
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     conn.send_heartbeat()
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 585, in send_heartbeat
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     self.connection.heartbeat_check()
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 268, in heartbeat_check
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     return self.transport.heartbeat_check(self.connection, rate=rate)
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/transport/pyamqp.py", line 131, in heartbeat_check
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     return connection.heartbeat_tick(rate=rate)
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 903, in heartbeat_tick
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     self.send_heartbeat()
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 880, in send_heartbeat
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     self.transport.write_frame(8, 0, bytes())
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/transport.py", line 177, in write_frame
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     frame_type, channel, size, payload, 0xce,
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 307, in sendall
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     tail = self.send(data, flags)
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 293, in send
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit     total_sent += fd.send(data[total_sent:], flags)
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit error: [Errno 32] Broken pipe
2015-04-23 08:51:55.203 3598 TRACE oslo.messaging._drivers.impl_rabbit

The system appears to be generally working fine, so the error does not appear to be causing any specific problems; however, we want to run a clean ship here ;-). It is an Icehouse install ...
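For reference, the broker-side heartbeat can be set in rabbitmq.config; a hypothetical snippet matching the 580-second default mentioned in the comments (illustrative only, not our actual config):

%% /etc/rabbitmq/rabbitmq.config -- illustrative snippet.
%% Server-proposed AMQP heartbeat interval, in seconds.
[
  {rabbit, [
    {heartbeat, 580}
  ]}
].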