
radclif's profile - activity

2014-11-25 16:34:15 -0500 received badge  Famous Question (source)
2014-09-30 02:45:28 -0500 received badge  Famous Question (source)
2014-06-26 15:37:51 -0500 received badge  Notable Question (source)
2014-06-10 13:30:06 -0500 received badge  Popular Question (source)
2014-06-10 05:04:37 -0500 commented question Removing deleted instances from nova database, is it safe

Thanks Shankar, could you show the SQL you used to achieve this? I am having trouble with foreign key constraints on the block_device_mapping table.
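
For reference, this is roughly the kind of thing I have been attempting (untested outside my lab; the table and column names are from my Havana-era schema, and other tables with a foreign key on instances.uuid would presumably need the same treatment first):

# delete child rows that reference instances.uuid before the instance rows
# themselves; soft-deleted rows have deleted != 0 (credentials omitted)
mysql nova -e "DELETE FROM block_device_mapping
               WHERE instance_uuid IN (SELECT uuid FROM instances WHERE deleted != 0);"
mysql nova -e "DELETE FROM instances WHERE deleted != 0;"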

2014-06-10 04:18:30 -0500 asked a question Removing deleted instances from nova database, is it safe

I have been load testing a new OpenStack cluster by creating large numbers of instances, running tests, then deleting them. When I check the nova.instances table I see that all deleted instances are still present. The deleted instances are all marked as deleted, which is fine, but I would like to know whether it is possible to remove them from the database records without causing a problem. I presume they are kept in the instances table for auditing or reference, but I have no need for this and would like to keep the table size down.

I just need to know whether it is safe to delete them manually, and whether there is a setting to automatically remove instances from the table when they are deleted.

Thanks
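
For what it's worth, the closest built-in option I have found so far is the archive command, which moves soft-deleted rows into shadow tables rather than dropping them; I have not verified it against my release, so treat this as a pointer rather than an answer:

# relocate soft-deleted rows (deleted != 0) into the shadow_* tables,
# up to --max_rows per run
nova-manage db archive_deleted_rows --max_rows 1000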

2014-05-12 09:25:29 -0500 received badge  Notable Question (source)
2014-05-08 05:40:08 -0500 received badge  Popular Question (source)
2014-05-08 04:23:17 -0500 commented answer thin provisioned root disks

I understand that an allocation ratio of 1:1 would mean, more or less, that no over-provisioning takes place; however, does a ratio of 1.0 mean unlimited disk allocation? I noticed as well that vCPU allocation defaults to 16.0. Could you clear up how this ratio is worked out, or link to an explanation?
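
To make my confusion concrete, this is how I currently read the relevant nova.conf settings (the values are the documented defaults, so correct me if the arithmetic is off):

# a compute node with 8 physical cores is offered to the scheduler
# as 8 * 16.0 = 128 vCPUs
cpu_allocation_ratio = 16.0
# 1.0 reads to me as 1:1, i.e. no disk overcommit at all, rather than unlimited
disk_allocation_ratio = 1.0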

2014-05-08 04:18:04 -0500 received badge  Scholar (source)
2014-05-07 14:26:18 -0500 asked a question thin provisioned root disks

Hi. In our current setup we provision all VM instance disks thin. We do not use Cinder or Swift, and all disk images are located under /var/lib/nova/instances.

Whilst this is good, as it saves space, I am now starting to worry about the possibility of over-committing disk space on the compute nodes.

How is this handled by OpenStack? Does it keep track of the space claimed by each disk, or the space actually used?

Also, will OpenStack stop you from creating new instances if you exceed the theoretical limit (i.e. there may be free disk space on the compute node, but OpenStack will not create an instance because, if all VM disks were to fill up, there would not be enough space)?
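
For context, this is the sort of gap I am worried about on a compute node (the path follows our /var/lib/nova/instances layout and the instance UUID is a placeholder):

# "virtual size" is what the flavor asked for; "disk size" is what the thin
# qcow2 file actually occupies on the hypervisor right now
qemu-img info /var/lib/nova/instances/<instance-uuid>/disk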

2014-04-25 15:29:12 -0500 received badge  Enthusiast
2014-04-21 14:04:01 -0500 received badge  Famous Question (source)
2014-04-19 11:20:36 -0500 commented answer dynamic dns

If I have read that correctly, there is no way in OpenStack to override what Neutron writes to the dnsmasq hosts file without hacking the source code? This seems very strange. How do people normally deal with this issue? DNS-as-a-service, as I understand it, is not yet available in a release.
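
The only workaround I can think of (untested, and the file names are ones I made up) is to abuse the dnsmasq_config_file hook in dhcp_agent.ini and hand dnsmasq an extra hosts file containing the names I actually want:

# /etc/neutron/dnsmasq.conf (the file referenced by dnsmasq_config_file)
addn-hosts=/etc/neutron/extra-hosts

# /etc/neutron/extra-hosts, standard hosts-file format
192.168.0.11  vm-5.testvm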

2014-04-17 09:06:46 -0500 received badge  Student (source)
2014-04-17 09:00:53 -0500 received badge  Notable Question (source)
2014-04-17 05:28:34 -0500 received badge  Popular Question (source)
2014-04-16 15:35:38 -0500 asked a question dynamic dns

Hi guys. I'm running OpenStack Havana in a multi-node testing setup. Instances are being created and allocated IP addresses fine, and hostnames are being assigned to them based on the instance name given in Horizon when creating the instance. Example of the hostname on a VM:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:ae:00:4f brd ff:ff:ff:ff:ff:ff
inet 192.168.0.11/24 brd 172.16.214.255 scope global eth0
inet6 fe80::f816:3eff:feae:4f/64 scope link tentative dadfailed 
   valid_lft forever preferred_lft forever
[root@vm-5 ~]# hostname
vm-5

However, I would like to be able to look up these hostnames dynamically without having to configure dnsmasq manually. I notice that in my /var/lib/neutron/dhcp/<network-namespace-id>/host file the hostnames are listed, but they are different from the ones set on the VMs:

<mac>,host-192-168-0-5.testvm,192.168.0.5
<mac>,host-192-168-0-6.testvm,192.168.0.6
<mac>,host-192-168-0-7.testvm,192.168.0.7
<mac>,host-192-168-0-8.testvm,192.168.0.8
<mac>,host-192-168-0-11.testvm,192.168.0.11

This network namespace has a dnsmasq instance running in it at IP 192.168.0.5, and I can nslookup the hostnames by IP address by querying it:

nslookup 192.168.0.11 192.168.0.5
Server:     192.168.0.5
Address:    192.168.0.5#53

11.0.168.192.in-addr.arpa   name = host-192-168-0-11.testvm.my.net.

Is there any way I can get dnsmasq to put the hostname given to the VM into its hosts file? If I could do this, then I could just point a forwarding DNS server at the 192.168.0.5 address and all VMs could be looked up by their actual hostnames.
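
In other words, on the forwarding resolver I would simply delegate the tenant domain (and its reverse zone) to the namespace dnsmasq, something like this in dnsmasq syntax, assuming 192.168.0.5 is reachable from that resolver:

# forward the tenant domain and its reverse zone to the dnsmasq instance
# running inside the network namespace
server=/testvm.my.net/192.168.0.5
server=/0.168.192.in-addr.arpa/192.168.0.5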

2014-04-08 21:41:24 -0500 received badge  Famous Question (source)
2014-03-31 01:32:17 -0500 received badge  Notable Question (source)
2014-03-31 00:33:29 -0500 received badge  Popular Question (source)
2014-03-23 15:22:49 -0500 asked a question Havana Neutron flat dhcp network

Hi, I'm trying to set up a POC for a private cloud. I currently have OpenStack running on one node (a further two nodes are set aside as compute nodes if I can get OpenStack working on one). I can bring up VMs and they have connectivity between themselves; however, they are not accessible from the outside world. To elaborate, the controller/compute/network node has the following network setup (the diagram includes two compute nodes, but they are not in use at the moment; sorry for the basic diagram):

|----------------|        eth0 192.168.100.169
|  office net    |                 |-------------|
|----------------| ----------------| gateway     | eth1 192.168.200.1
        |                          | server      |------
        |                          |-------------|     |
        | eth0 192.168.100.229                         |
|----------------|                              |-------------------|
|  controller    |------------------------------| unmanaged         |
|----------------| eth1 192.168.200./24         | switch            | 
                                                |-------------------| ------------|
                                                        |                         |
                                                        | eth1                    | eth1
vm1 192.168.200.2                     |-----------------|                     |-----------------|
dhcp 192.168.200.3                    | compute1        |                     | compute2        |
vm2 192.168.200.4                     |-----------------|                     |-----------------| 
                                        eth0 192.168.100.30                      eth0 192.168.100.31

The VMs are all on the controller node and I can ping between them; however, I cannot ping from the controller node to the VMs. I think I am missing something pretty basic. I also cannot ping from the gateway server (just a basic CentOS install with IP forwarding) to the VMs. I imagine that the networking should behave in the following manner: br-int is used for the VMs to create interfaces on, br-tun is used to connect between hypervisors, and br-ex is used to provide connectivity from the outside world into the VM network. I assigned eth1 to br-ex, as I guessed all VMs will need to communicate over the unmanaged switch when I have additional compute nodes, and to reach the gateway server.

I really only need one subnet for the testing, and I decided to use GRE networking to do this, as someone suggested it was an easy way to set up a flat provider network. I really don't mind how this is achieved, so if VLAN is better then please say so.
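
For completeness, the bridge and port wiring I described above corresponds to roughly the following (reconstructed from memory, so the exact commands may differ from what I actually ran):

ovs-vsctl add-br br-int          # integration bridge the VM taps plug into
ovs-vsctl add-br br-tun          # tunnel bridge for GRE between hypervisors
ovs-vsctl add-br br-ex           # external bridge
ovs-vsctl add-port br-ex eth1    # hand eth1 over to the external bridge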

So configuration-wise I went with the following.

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.200.254
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

To be honest, I didn't know what I should have put for the local_ip option, so this could be one of the problems. I guessed it just needed an IP endpoint to set up the tunnel on.

/etc/neutron/dhcp_agent.ini:

[DEFAULT]
debug = False
resync_interval = 5
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = True
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = False
enable_metadata_network = False
dhcp_domain = openstacklocal
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
dnsmasq_lease_max = 16777216

l3_agent.ini

[DEFAULT]
debug = False
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5

/etc/neutron/neutron.conf

[DEFAULT]
debug = False
verbose = False
state_path = /var/lib/neutron
lock_path = $state_path/lock
log_dir = /var/log/neutron
bind_host = 192.168.100.229
bind_port = 9696
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
api_paste_config ...