
radclif's profile - activity

2014-11-25 16:34:15 -0500 received badge  Famous Question (source)
2014-09-30 02:45:28 -0500 received badge  Famous Question (source)
2014-06-26 15:37:51 -0500 received badge  Notable Question (source)
2014-06-10 13:30:06 -0500 received badge  Popular Question (source)
2014-06-10 05:04:37 -0500 commented question Removing deleted instances from nova database, is it safe

Thanks Shankar, could you show the SQL you used to achieve this? I am having trouble with foreign key constraints in the block_device_mapping table.

2014-06-10 04:18:30 -0500 asked a question Removing deleted instances from nova database, is it safe

I have been load testing a new OpenStack cluster by creating large numbers of instances, running tests, then deleting them. When I check the nova.instances table I see that all deleted instances are still present. The deleted instances are all marked as deleted, which is fine, but I would like to know if it is possible to delete them from the database records without causing a problem. I presume they are kept in the instances table for auditing or reference, but I have no need for this and would like to keep the table size down.

I just need to know if it is safe to manually delete them, and whether there is a setting to automatically remove instances from the table when they are deleted.
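To make the question concrete, this is the kind of purge I have in mind, assuming a Havana-era nova schema (a sketch only: table names vary between releases, and any child table that references instances has to be purged before instances itself to satisfy the foreign keys; newer releases reportedly ship a nova-manage db archive_deleted_rows command for this):

```sql
-- Purge rows nova has already soft-deleted (deleted <> 0),
-- children first so the foreign keys are satisfied.
DELETE FROM block_device_mapping WHERE deleted <> 0;
DELETE FROM instance_info_caches WHERE deleted <> 0;
DELETE FROM instance_system_metadata WHERE deleted <> 0;
-- ...any other table carrying an instance_uuid foreign key...
DELETE FROM instances WHERE deleted <> 0;
```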


2014-05-12 09:25:29 -0500 received badge  Notable Question (source)
2014-05-08 05:40:08 -0500 received badge  Popular Question (source)
2014-05-08 04:23:17 -0500 commented answer thin provisioned root disks

I understand that an allocation ratio of 1:1 would mean, more or less, that no over-provisioning would take place. However, does a ratio of 1.0 mean unlimited disk allocation? I noticed as well that vCPU allocation defaults to 16.0; could you clear up how this ratio is worked out, or link to an explanation?
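If I have understood the answer, the knobs in question would be set in nova.conf something like this (a sketch of my reading, not a confirmed config; option names as I find them in the Havana docs):

```ini
# /etc/nova/nova.conf - scheduler over-commit ratios (sketch)
# Used by the DiskFilter: 1.0 means no disk over-commit at all,
# values above 1.0 let thin disks over-subscribe real space.
disk_allocation_ratio = 1.0
# Used by the CoreFilter: defaults to 16.0, i.e. up to 16 vCPUs
# may be scheduled per physical core.
cpu_allocation_ratio = 16.0
```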

2014-05-08 04:18:04 -0500 received badge  Scholar (source)
2014-05-07 14:26:18 -0500 asked a question thin provisioned root disks

Hi. In our current setup we provision all VM instance disks thin. We do not use cinder or swift, and all disk images are located under /var/lib/nova/instances.

Whilst this is good as it saves space, I am now starting to worry about the possibility of over-committing disk space on compute nodes.

How is this handled by OpenStack? Does it keep track of the space claimed by each disk, or the space actually used?

Also, will OpenStack stop you from creating new instances if you exceed the theoretical limit (i.e. there may be free disk space on the compute node, but OpenStack will not create an instance because if all VM disks were to fill up there would not be enough space)?
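To show the gap I am worried about, here is how I have been comparing the space claimed by a thin disk with the space actually used (a sketch; the path matches our layout, the UUID is a placeholder):

```shell
# "virtual size" is what the guest may eventually claim;
# "disk size" is what the qcow2 file occupies right now.
qemu-img info /var/lib/nova/instances/<instance-uuid>/disk

# Blocks actually consumed on the host filesystem.
du -sh /var/lib/nova/instances/<instance-uuid>/
```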

2014-04-25 15:29:12 -0500 received badge  Enthusiast
2014-04-21 14:04:01 -0500 received badge  Famous Question (source)
2014-04-19 11:20:36 -0500 commented answer dynamic dns

If I have read that correctly, there is no way in OpenStack to override what neutron writes to the dnsmasq hosts file without hacking the source code? This seems very strange. How do people normally deal with this issue? DNS-as-a-service, as I understand it, is not yet available in a release.

2014-04-17 09:06:46 -0500 received badge  Student (source)
2014-04-17 09:00:53 -0500 received badge  Notable Question (source)
2014-04-17 05:28:34 -0500 received badge  Popular Question (source)
2014-04-16 15:35:38 -0500 asked a question dynamic dns

Hi guys. Running OpenStack Havana in a multi-node testing setup. Instances are being created and allocated IP addresses fine, and host names are being assigned to the VMs based on the instance name given in horizon when creating the instance. Example of the hostname on a VM:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:ae:00:4f brd ff:ff:ff:ff:ff:ff
inet brd scope global eth0
inet6 fe80::f816:3eff:feae:4f/64 scope link tentative dadfailed 
   valid_lft forever preferred_lft forever
[root@vm-5 ~]# hostname

However, I would like to be able to look up these host names dynamically, without having to configure dnsmasq manually. I notice that in my /var/lib/neutron/dhcp/<network-namespace-id>/host file the host names are listed, but they are different from the ones set on the VM:


This network namespace has a dnsmasq instance running in it at an IP address, and I can nslookup the host names based on the IP address by querying it:

Address:   name =

Is there any way I can get dnsmasq to set the host name given to the VM in its hosts file? If I could do this, then I could just point a forwarding DNS server at that address and all VMs would be able to be looked up by their actual host names.
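One workaround I am considering (a sketch only; I have not confirmed neutron tolerates it): the dhcp agent already points dnsmasq at a custom config file via dnsmasq_config_file, and dnsmasq's addn-hosts option can layer an extra, hand-maintained hosts file on top of the one neutron generates:

```ini
# /etc/neutron/dnsmasq.conf - extra options handed to the
# neutron-managed dnsmasq (via dhcp_agent.ini's dnsmasq_config_file).
# addn-hosts: an additional hosts file mapping instance IPs to the
# host names actually set on the VMs (maintained by hand or a script).
addn-hosts=/etc/neutron/openstack-hosts
# Qualify bare names in that file with the dhcp domain.
expand-hosts
domain=openstacklocal
```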

2014-04-08 21:41:24 -0500 received badge  Famous Question (source)
2014-03-31 01:32:17 -0500 received badge  Notable Question (source)
2014-03-31 00:33:29 -0500 received badge  Popular Question (source)
2014-03-23 15:22:49 -0500 asked a question Havana Neutron flat dhcp network

Hi, I'm trying to set up a POC for a private cloud. I currently have OpenStack running on one node (a further two nodes are set aside as compute nodes if I can get OpenStack working on one node). I can bring up VMs and have connectivity between them; however, they are not accessible from the outside world. To elaborate, the controller/compute/network node has the following network setup (the image includes two compute nodes, but they are not in use at the moment; sorry for the basic diagram):

|----------------|        eth0
|  office net    |                 |-------------|
|----------------| ----------------| gateway     | eth1
        |                          | server      |------
        |                          |-------------|     |
        | eth0                         |
|----------------|                              |-------------------|
|  controller    |------------------------------| unmanaged         |
|----------------| eth1 192.168.200./24         | switch            | 
                                                |-------------------| ------------|
                                                        |                         |
                                                        | eth1                    | eth1
vm1                     |-----------------|                     |-----------------|
dhcp                    | compute1        |                     | compute2        |
vm2                     |-----------------|                     |-----------------| 
                                        eth0                      eth0

The VMs are all on the controller node and I can ping between them; however, I cannot ping from the controller node to the VMs. I think I am missing something pretty basic. Also, I cannot ping from the gateway server (just a basic CentOS install, with IP forwarding) to the VMs.

I imagine that the networking should behave in the following manner: br-int is used for the VMs to create interfaces on, br-tun is used to connect between hypervisors, and br-ex is used to provide connectivity from the outside world into the VM network. I assigned eth1 to br-ex as I guessed that all VMs will need to communicate over the unmanaged switch when I have additional compute nodes, and to get to the gateway server.

I really only need one subnet for the testing, and I decided to use GRE networking to do this, as someone suggested it as an easy way to set up a flat provider network. I really don't mind how this is achieved, so if VLAN is better then please say so.

So, configuration-wise, I went with the following. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =
polling_interval = 2
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

To be honest, I didn't know what I should have put for the local_ip option, so this could be one of the problems; I guess it just needs an IP endpoint to set up the tunnel. /etc/neutron/dhcp_agent.ini:

debug = False
resync_interval = 5
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = True
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = False
enable_metadata_network = False
dhcp_domain = openstacklocal
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
dnsmasq_lease_max = 16777216


debug = False
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5


debug = False
verbose = False
state_path = /var/lib/neutron
lock_path = $state_path/lock
log_dir = /var/log/neutron
bind_host =
bind_port = 9696
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
api_paste_config ...