edmv's profile - activity

2014-12-04 10:06:11 -0600 received badge  Good Question (source)
2014-10-07 03:54:40 -0600 received badge  Famous Question (source)
2014-10-07 03:54:40 -0600 received badge  Notable Question (source)
2014-10-07 03:54:40 -0600 received badge  Popular Question (source)
2014-09-27 12:42:00 -0600 answered a question Allow Direct Routing between External Network and Tenant Network

We used to do it, although later we saw no benefit over floating-IP NAT and discarded the use of static routes. What we did was configure a static route on our firewall (route-eth1) with the following: via dev eth1

The 10.x.x.x range is a tenant network; in OpenStack we defined our company's network ( as a public network and interconnected the two using a neutron router. The static route uses the IP address ( of the port the neutron router took from our public network.
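On a RHEL-style firewall host, that kind of static route can live in a route-eth1 file; a minimal sketch with placeholder addresses (the actual tenant network and the neutron router's port IP were site-specific and are not in the original post):

```
# /etc/sysconfig/network-scripts/route-eth1
# Placeholders: substitute your tenant network and the IP the neutron
# router's port took on the public network.
10.0.0.0/24 via 192.168.1.50 dev eth1
```

The file is applied when the interface comes up (or with `ifup-routes eth1`), so the firewall forwards tenant-bound traffic to the neutron router instead of NATing it.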

2014-09-27 12:28:58 -0600 answered a question arp requests from compute nodes bogging network

This happened to me once. I think it happened after reconfiguring some of the neutron agents; I don't remember exactly, but in any case the problem was solved by restarting openvswitch and neutron-openvswitch-agent on the machines generating the traffic.
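For reference, the restart on each affected host looks like this (service names assume a systemd-based RDO install; adjust for your distribution):

```shell
# Restart Open vSwitch first, then the neutron OVS agent so it
# reprograms its flows against the fresh vswitchd.
systemctl restart openvswitch
systemctl restart neutron-openvswitch-agent
```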

2014-09-14 11:43:38 -0600 received badge  Nice Question (source)
2014-02-19 01:10:33 -0600 received badge  Popular Question (source)
2014-02-18 11:51:42 -0600 answered a question Add again a node that failed and had to be reinstalled?

For anyone wondering the same, I was able to test this successfully.

I put my controller and 4 of my nodes in EXCLUDE_SERVERS, put only the IP of the node that had to be reinstalled in CONFIG_NOVA_COMPUTE_HOSTS, ran packstack --answer-file=myfile.cfg, and the host was reconfigured again without issues.
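The relevant answer-file excerpt would look roughly like this (IPs are placeholders, not the poster's actual addresses; hosts in EXCLUDE_SERVERS are skipped entirely by the puppet run):

```
# myfile.cfg excerpt -- placeholder IPs
# Controller plus the four healthy compute nodes, left untouched:
EXCLUDE_SERVERS=192.168.0.10,192.168.0.11,192.168.0.12,192.168.0.13,192.168.0.14
# Only the reinstalled node gets reconfigured:
CONFIG_NOVA_COMPUTE_HOSTS=192.168.0.15
```

Then re-run `packstack --answer-file=myfile.cfg` from the controller.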

Thank you all

2014-02-16 13:13:31 -0600 commented question Add again a node that failed and had to be reinstalled?

Yeah, I was thinking of using it as a failsafe; it won't be until tomorrow (Monday) that I'm going to be able to test. If there is another method, though, do let me know.

2014-02-15 13:59:08 -0600 received badge  Notable Question (source)
2014-02-15 13:59:08 -0600 received badge  Famous Question (source)
2014-02-15 12:43:49 -0600 asked a question Add again a node that failed and had to be reinstalled?

Hi there, I had a 6-server setup with RDO: 1 controller and 5 nodes, the controller also being the network controller.

One compute node failed and I had to reinstall its operating system.

My setup uses gluster for cinder and for nova (for live migration), but that's another issue. What I would like to know is exactly how to "re-add" this failed node.

I'm working on the assumption that if I re-run packstack from the controller, the puppet modules will just notice the missing packages/services on the failed node and reinstall things on that node only, effectively re-adding the node as it was before. Is my assumption correct, or is there another method? A word of warning: the OpenStack deployment we have, although in testing, is currently being used for several things, so I don't want the complete stack going down just because of an incorrect procedure for adding a node.

Thanks in advance!

2013-12-17 00:06:33 -0600 received badge  Popular Question (source)
2013-12-14 08:16:44 -0600 commented answer How is the implementation of native glusterfs in cinder?

Thanks a ton! I guess it makes sense, since creating an instance whose image comes from glance doesn't go through cinder, only volumes and snapshots do. Do you know if this is actually described in any of the docs? I mean, the "only when booting an instance from a volume" part. Thanks again.

2013-12-14 08:08:50 -0600 received badge  Supporter (source)
2013-12-12 23:59:18 -0600 received badge  Nice Question (source)
2013-12-11 23:55:09 -0600 received badge  Student (source)
2013-12-11 21:03:59 -0600 asked a question How is the implementation of native glusterfs in cinder?

Hello there,

I've set up a multi-node OpenStack environment using packstack for the company I work for; the main idea behind it is to test it and see its benefit for our internal IT operations.

The multi-node setup is made up of 4 servers: 1 that acts as the controller, database, message broker, neutron server, image server and storage, and 3 nodes running nova-compute services and openvswitch agents.

All the hosts run up-to-date Fedora 19 because we wanted to test GlusterFS native integration with cinder (libgfapi).

The install/deploy method was packstack, and it went without issues.

Currently there's only one cinder volume, created without any particularities; basically: gluster vol create GlusterCinder0 replica 3 vm1:/gluster/vol0 vm2:/gluster/vol0 vm3:/gluster/vol0

The packstack answer file was configured to use gluster and the gluster volume address

Everything went just fine with one exception: I don't see the native implementation in play.

After googling, I read that I needed to activate the feature by editing qemu_allowed_storage_drivers in nova.conf, which I did (and restarted cinder-volume, the nova services, and even all the hosts).
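For that Havana-era setup, the setting in question would look like the excerpt below (option name as it appeared in nova.conf at the time; verify against your release's documentation):

```
# nova.conf excerpt -- enables libgfapi access for attached volumes
[DEFAULT]
qemu_allowed_storage_drivers=glusterfs
```

This needs to be set on every compute node, followed by a restart of the nova-compute service there.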

I checked the cinder shares file, which seems OK, and currently cinder+glusterfs is working (I can create volumes, attach them to instances, delete volumes, etc.). What does not seem to be working is the "native" implementation; by that I mean: when I execute "mount" on all the servers, all the mount points used by cinder show fuse.glusterfs as the mount type.
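Worth noting for anyone hitting the same confusion: a fuse.glusterfs mount on the cinder host is not by itself a sign that libgfapi is disabled. The cinder GlusterFS driver FUSE-mounts the volume for management operations; libgfapi applies at attach time, on the compute node. A quick check:

```shell
# On the cinder host: the management mount is expected to be FUSE.
mount | grep glusterfs

# On a compute node with a volume attached: if libgfapi is in use, the
# instance's libvirt XML shows a network disk, not a file path.
virsh dumpxml <instance-id> | grep -A2 "disk type"
```

With libgfapi active, the disk element reads `type='network'` with `protocol='gluster'` rather than `type='file'` pointing at the FUSE mount.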

I can create images directly in the gluster volume using qemu-img create with the native gluster backend (e.g., gluster://ip/vol/image size), so I know that on the libvirt side everything seems to work as expected (qemu is version 1.4+, gluster is 3.4.1).
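That sanity check, spelled out with placeholder host/volume names (substitute your own), looks like:

```shell
# Create a qcow2 image over libgfapi, bypassing the FUSE mount entirely;
# success proves qemu was built with gluster block-driver support.
qemu-img create -f qcow2 gluster://gluster-host/GlusterCinder0/test.img 1G
```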

Could you point me to what I could be missing on OpenStack's side? Maybe I'm not looking where I should, or fuse.glusterfs as the mount type is somehow normal; or perhaps there is a better test I could use to verify gluster usage by cinder... anything will be appreciated.

That's all. Thanks for reading, and sorry for the long post; I just wanted to make it clear.