rooter's profile - activity

2014-12-04 14:31:17 -0500 received badge  Famous Question (source)
2014-11-27 03:46:41 -0500 received badge  Famous Question (source)
2014-10-30 15:45:55 -0500 edited answer PXE/TFTP boot VM Instance within OpenStack

As far as I know there's no way to force OpenStack to create a VM which simply PXE boots.

What we do is use a simple, custom OpenStack image which contains a bootloader and an iPXE client. Works like a charm.

-- edit: we PXE boot from neutron's DHCP (so that neutron assigns the IP to the VM and we don't have to worry about IP spoofing), and then via DHCP options we redirect the VMs to our own PXE/TFTP server. (Sorry, I don't have any code at hand to share.)
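
For illustration, the redirection can be done per port with Neutron's extra-dhcp-opt extension; a rough sketch (the port ID and the TFTP server address are placeholders, not our actual setup):

neutron port-update $PORT_ID \
  --extra-dhcp-opt opt_name=tftp-server,opt_value=192.0.2.50 \
  --extra-dhcp-opt opt_name=bootfile-name,opt_value=pxelinux.0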

2014-10-30 15:43:09 -0500 commented answer PXE/TFTP boot VM Instance within OpenStack

Alternatively, you can have your DHCP server on the flat network (as a physical node, not managed by neutron). Then you only have to take care of the IP spoofing protection (because by default the iptables rules for a VM only allow traffic from the IP which Neutron has assigned to it).

2014-10-30 15:41:50 -0500 commented answer PXE/TFTP boot VM Instance within OpenStack

Icehouse blocks rogue DHCP servers. Basically, you cannot start a VM and have it be a DHCP server. First, you have to patch the OpenStack code which creates the iptables rules, so that it no longer blocks DHCP traffic. Then you need to disable OpenStack's IP spoofing protection (or use allowed-address-pairs)
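
For the allowed-address-pairs route, roughly (port ID and CIDR are placeholders):

# allow the VM behind this port to source traffic from the whole range,
# not only from its Neutron-assigned address
neutron port-update $PORT_ID \
  --allowed-address-pairs type=dict list=true ip_address=10.0.0.0/24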

2014-08-31 00:37:23 -0500 received badge  Famous Question
2014-08-29 16:19:05 -0500 received badge  Notable Question
2014-08-29 08:30:53 -0500 received badge  Popular Question
2014-08-29 07:52:21 -0500 received badge  Student
2014-07-31 18:01:17 -0500 commented question neutron agents-list is empty!!

Do you have a single-node or multi-node deployment? Check the logs on the nodes hosting the agents. Is the message service (rabbit/qpid) working? That's how agents announce their presence to the neutron-server.
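
For example, assuming RabbitMQ as the message service, a quick sanity check could look like this (log paths may differ between distributions):

# on the broker host: is RabbitMQ up and responding?
rabbitmqctl status

# on the agent hosts: look for AMQP connection errors
grep -Ei 'amqp|connection' /var/log/neutron/*.log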

2014-07-30 03:15:29 -0500 received badge  Popular Question (source)
2014-07-30 03:15:29 -0500 received badge  Notable Question (source)
2014-07-29 12:21:09 -0500 received badge  Notable Question (source)
2014-07-29 12:15:09 -0500 received badge  Famous Question (source)
2014-07-29 05:34:22 -0500 received badge  Teacher (source)
2014-07-29 05:34:22 -0500 received badge  Necromancer (source)
2014-07-29 03:08:50 -0500 received badge  Notable Question (source)
2014-07-29 02:06:13 -0500 received badge  Popular Question (source)
2014-07-29 00:27:15 -0500 received badge  Popular Question (source)
2014-07-28 17:04:40 -0500 asked a question Running RabbitMQ behind HAProxy

What are the merits of deploying a cluster of RabbitMQ servers (e.g. with mirrored queues) behind HAProxy? It seems using HAProxy in front of RabbitMQ is Red Hat's way of doing things:

http://openstack.redhat.com/RabbitMQ

Does anyone have any experience with such setups? Is it actually worth doing, or should we drop haproxy and stick with using "rabbit_hosts=..." in the OpenStack config files?

Would you recommend setting up haproxy for rabbitmq in 'round-robin' mode, or 'backup'?

One could argue that using haproxy in round-robin mode evenly distributes the load among the RabbitMQ servers, whereas when using rabbit_hosts all traffic tends to hit the first server on the list, and only goes to the second host once the first one drops dead. Is there any way (or need) to influence that?
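
To make the two modes concrete, roughly this is the haproxy setup in question (hostnames and timings are placeholders):

listen rabbitmq
  bind 0.0.0.0:5672
  mode tcp
  balance roundrobin
  server rabbit1 rabbit1:5672 check inter 3s rise 2 fall 3
  server rabbit2 rabbit2:5672 check inter 3s rise 2 fall 3

Appending "backup" to all server lines except the first would give the active/standby ('backup') behaviour instead of round-robin.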

Should we handle rabbitmq-servers which were freshly started and have not yet had their queues synchronized in some special way? (What if a host connects to such a node to consume a message from an unsynchronized queue?)
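
For reference, the mirrored-queue policy I mean is set roughly like this (a sketch; "ha-sync-mode": "automatic" makes fresh nodes synchronize on their own, which may or may not be desirable under load):

rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'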

We had a multi-node rabbitmq-server test deployment running, but could not tweak the timeouts in such a way as to avoid getting "connection reset by peer" (connection to rabbitmq-server) errors in the log files of almost all OpenStack services every now and then.

Ideally we would like to use zeromq (due to its lack of a single point of failure), but it seems it's not yet properly supported in all OpenStack services (e.g. Icehouse Neutron has problems with it...)

2014-07-28 16:47:53 -0500 asked a question Tips for scaling out RabbitMQ

I'm looking for tips on how to maximize the performance and reliability of large OpenStack deployments based on RabbitMQ.

Firstly, how do you maximize the performance of a single rabbitmq-server deployment?

Secondly, how do you best scale large private clouds with multiple rabbitmq-servers running in a cluster (for scale and HA)? Which do you prefer, DRBD or mirrored queues, and why?
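
For context, by "cluster" I mean standard RabbitMQ clustering, formed roughly like this (node names are placeholders):

# on rabbit2: join the cluster formed by rabbit1
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app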

2014-07-28 16:41:41 -0500 received badge  Organizer (source)
2014-07-28 16:29:31 -0500 answered a question nova logs growing indefinitely

Use a tool for periodically rotating your log files, e.g. "logrotate":

http://linux.die.net/man/8/logrotate
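
For example, a minimal logrotate policy for the nova logs could look like this (the path and retention are assumptions; adjust for your distribution):

# /etc/logrotate.d/nova
/var/log/nova/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate rotates the files in place without restarting the nova services, at the cost of possibly losing a few lines during the copy.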

2014-07-28 16:25:20 -0500 answered a question How HAproxy works with Openstack HEAT?

Just start multiple heat-api processes. As many as you want. Then configure keystone with the endpoint pointing at your haproxy host:

[root@openstack1 ~]# keystone endpoint-list | grep 8004
| e41899cd971b437182f1be06ed98a129 | DefaultRegion |  http://haproxyhost:8004/v1/$(tenant_id)s  |         http://haproxyhost:8004/v1         |  http://haproxyhost:8004/v1/$(tenant_id)s  | 7648a4b19fc64cbdb60e23aa42fa369a |
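
If you need to (re)create that endpoint, it would look roughly like this (the service id and URLs are taken from the listing above; the single quotes keep the shell from expanding $(tenant_id)s):

keystone endpoint-create --region DefaultRegion \
  --service-id 7648a4b19fc64cbdb60e23aa42fa369a \
  --publicurl 'http://haproxyhost:8004/v1/$(tenant_id)s' \
  --internalurl 'http://haproxyhost:8004/v1' \
  --adminurl 'http://haproxyhost:8004/v1/$(tenant_id)s'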

Then point the haproxy at those heat-api processes. Here's one example haproxy config, but there are many valid ways of doing it. Tweak the timeouts to your liking.

root@haproxyhost: cat /etc/haproxy/haproxy.cfg

global
  daemon

defaults
  mode http
  log 127.0.0.1:514 local4
  maxconn 10000
  timeout connect 4s
  timeout client 180s
  timeout server 180s
  option redispatch
  retries 3
  balance roundrobin

listen heatAPI
  bind 0.0.0.0:8004
  # check every 3s; mark a server up after 2 successful checks, down after 3 failures
  server heatnode1 heatnode1:8004     check inter 3s rise 2 fall 3
  server heatnode2 heatnode2:8004     check inter 3s rise 2 fall 3

or, run multiple heat-api processes on the same node, under different ports:

listen heatAPI
  bind 0.0.0.0:8004
  server heatnode1a heatnode1:8004      check inter 3s rise 2 fall 3
  server heatnode1b heatnode1:18004     check inter 3s rise 2 fall 3   # second heat-api instance, listening on port 18004

Note: using "mode tcp" instead of "mode http" is also possible, and results in better performance.

2014-07-28 16:12:07 -0500 answered a question Do I need a separate database for each cinder instance?

No, you don't. One cinder database is all you need.

Note, though, that cinder-volume processes do NOT connect to the database directly. Instead, they communicate with the cinder process running on the controller node (I believe the cinder-scheduler), which then talks to the database on their behalf. You typically need only one cinder-api (controller node), one cinder-scheduler (controller node), and multiple cinder-volume (storage node) processes running.
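
For reference, the single shared database is configured once in cinder.conf on the controller, roughly like this (hostname and password are placeholders):

[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder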

2014-07-28 15:59:35 -0500 answered a question keystone auth failure (is what it seems)

When nova-api receives an API request (e.g. server list), it validates the token passed from the client against keystone. But in order to do that, it first has to authenticate against keystone using its own credentials. 401 Unauthorised most likely means that nova-api was unable to authenticate with keystone (and is thus "unauthorised" to access the URL used for validating tokens).

In other words, this is most likely caused by credentials in /etc/nova/nova.conf of the node hosting the nova-api being out of sync with the actual credentials assigned to the nova user.

[root@openstack1 ~]# cat /etc/nova/nova.conf  | grep ^admin_
admin_user=nova
admin_password=zT8SsObhAHdqqgZPc
admin_tenant_name=service

Check those against what you think is set for the nova user in OpenStack.

You can easily confirm this theory. Issuing the following command (with the XXXs replaced with whatever was returned by the command above) should work on a healthy OpenStack deployment. If it doesn't, nova's credentials are off somewhere (either in nova.conf or in the keystone db).

curl -i -X POST http://10.0.25.2:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "User-Agent: python-keystoneclient" \
  -d '{"auth": {"tenantName": "XXXX", "passwordCredentials": {"username": "XXX", "password": "XXXX"}}}'

2014-07-28 12:57:09 -0500 edited question The state of Oslo and ZeroMQ support

Hey guys,

What's up with ZeroMQ support in OpenStack?

It seems that ZeroMQ support in OpenStack is, in general, somewhat lacking. There are several open, unaddressed ZeroMQ bugs. There are also no decent tutorials or documentation on how to get it to work.

Clustered RabbitMQ (mirrored queues) is not as reliable as we would like it to be. It's also not really scalable. RabbitMQ with DRBD just adds more moving parts and doesn't really solve the problem of scalability. Qpid has no clustering support at all. Therefore, ZeroMQ-based messaging (on top of Oslo) seems to be the logical choice for large cloud deployments, due to its inherent scalability, superior speed, and lack of a single point of failure.

Could we, as a community, focus a bit more on ZeroMQ integration, and actually make it production worthy? Are there any technical obstacles preventing us from doing so?

2014-07-28 12:57:09 -0500 received badge  Editor (source)
2014-07-28 11:57:12 -0500 received badge  Supporter (source)