sph's profile - activity

2014-06-16 14:45:39 -0500 received badge  Nice Question (source)
2013-11-26 03:18:50 -0500 received badge  Necromancer (source)
2013-10-23 22:02:35 -0500 received badge  Necromancer (source)
2013-10-18 19:31:43 -0500 received badge  Taxonomist
2013-10-11 00:30:57 -0500 commented answer Grizzly sandbox: VM does not get DHCP IP

Did the same. These are the logs for network - http://pastebin.com/3G5CBDZJ and these for compute - http://pastebin.com/wsJhVEyk. To fix the "cannot create port gre-2" error on the network node, I followed http://goo.gl/2u0YJS and recreated the bridges there, but the situation is still unchanged.

2013-10-08 09:45:30 -0500 received badge  Nice Answer (source)
2013-10-07 00:34:24 -0500 answered a question How do I solve error NotAuthenticated when trying to upload image to Glance

I was facing the same problem. I was using Keystone as the authentication mechanism, but my glance-api.conf and glance-registry.conf contained "flavor = caching". I changed that to "flavor = keystone", restarted the glance-registry and glance-api services, and could then create an image and look up already created images.
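
For reference, a sketch of what the change looks like on disk, assuming the Grizzly-era file locations and that the flavor option lives in the [paste_deploy] section (verify against your own config):

    # in both /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf
    [paste_deploy]
    flavor = keystone

followed by:

    service glance-api restart
    service glance-registry restart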

2013-10-06 23:38:19 -0500 commented answer Grizzly sandbox: VM does not get DHCP IP

Following are the network node logs: http://pastebin.com/CPTprx9K I don't see any errors on the network node, though. Are there any additional debugging steps I can try? Thanks for your time.

2013-09-30 08:35:00 -0500 commented answer Grizzly sandbox: VM does not get DHCP IP

Following is the link for ovs-agent.log, dhcp-agent.log, and l3-agent.log from the compute node: http://pastebin.com/syDf6Xj5 There are errors in ovs-agent.log similar to those mentioned in this link - http://goo.gl/XdlO4 But they don't lead me to an answer, as I haven't worked with OVS before.

2013-09-30 00:20:13 -0500 received badge  Famous Question (source)
2013-09-26 08:47:29 -0500 answered a question Grizzly sandbox: VM does not get DHCP IP

Hi,

Thanks for the update. I ran the command on the controller and got the following output:

+--------------------------------------+--------------------+----------------------+-------+----------------+
| id                                   | agent_type         | host                 | alive | admin_state_up |
+--------------------------------------+--------------------+----------------------+-------+----------------+
| a3f053d0-51a9-4c9f-8551-3d236379158b | DHCP agent         | openstack-network-vm | xxx   | True           |
| 64889f49-7b59-4a90-9caa-99ebf96a5cdf | L3 agent           | openstack-network-vm | xxx   | True           |
| fb0fa712-dda3-40ab-be63-82ba7064f358 | Open vSwitch agent | openstack-network-vm | xxx   | True           |
| a06c2ecd-8756-4d18-a810-3fc78819eeb7 | DHCP agent         | debian-linux-1       | :-)   | True           |
| 625e0be6-a174-47ec-b6e7-ad707caa8df0 | L3 agent           | debian-linux-1       | :-)   | True           |
| d6e75ad6-518b-436c-8bb1-96ec429bb68c | Open vSwitch agent | debian-linux-1       | :-)   | True           |
+--------------------------------------+--------------------+----------------------+-------+----------------+

It is showing xxx in the alive column for the network node, but when I do a ps -ef | grep for the agent services, I can see them running. I even restarted the agents on the network node:

#  ls /etc/init.d/quantum-* | while read svc; do $svc restart; done

Restarting OpenStack Quantum DHCP agent: quantum-dhcp-agent.
Restarting OpenStack Quantum L3 agent: quantum-l3-agent.
Restarting OpenStack Quantum Metadata Agent: quantum-metadata-agent.
Restarting Openstack Quantum OpenVSwitch Plugin Agent: quantum-openvswitch-agent.

which restarted the necessary Quantum services. Yet, the agent-list output still shows xxx for the network node. I don't know why this would happen if the services are running.
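
For reference, the liveness check I mention above (process names as per the Debian Quantum packages):

    ps -ef | grep quantum-dhcp-agent
    ps -ef | grep quantum-l3-agent
    ps -ef | grep quantum-openvswitch-agent

All three show running processes on the network node, yet the controller still reports xxx.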

2013-09-24 23:09:16 -0500 received badge  Notable Question (source)
2013-09-20 22:41:29 -0500 received badge  Popular Question (source)
2013-09-19 08:10:13 -0500 commented answer horizon console displays blank screen with message: noVNC ready: native WebSockets, canvas rendering

Glad to be of help :)

2013-09-18 06:56:31 -0500 received badge  Enthusiast
2013-09-17 00:46:28 -0500 received badge  Teacher (source)
2013-09-16 08:32:45 -0500 answered a question horizon console displays blank screen with message: noVNC ready: native WebSockets, canvas rendering

Are there any JavaScript errors that you are seeing? I was trying to do a setup on Debian Wheezy and hit the same problem: the "canvas rendering" message but no output. I ran the JavaScript debugger and found that /usr/share/novnc/include/rfb.js had the following line:

    nova_token = token;

while token was not being passed in. I commented out that line and could then see the VM's console.

2013-09-16 08:26:55 -0500 asked a question Grizzly sandbox: VM does not get DHCP IP

Hi everyone,

I am trying to do a sandbox installation of OpenStack as described in the multi-node Grizzly sandbox guide, which does an initial VirtualBox networking setup and then moves on to the actual Grizzly setup guide.

I could get up to the last part: booting a VM from the Horizon UI. However, once the VM boots up, it does not get an IP. After bootup, if I do a

nova list

on the controller - I do get an output:

| 665fe9ca-1474-43aa-9202-ae822f1db3da | linux_inst_1 | ACTIVE | net_proj_one=50.50.1.3 |

And even on the network node - the corresponding network's host file as present in:

/var/lib/quantum/dhcp/02f517d6-7913-46eb-95e6-a6899052ce65/host

is updated with an entry:

fa:16:3e:c4:61:dd,50-50-1-3.openstacklocal,50.50.1.3

However, the VM (I had used a TinyCore qemu Linux appliance from here) does not get the IP. When I run dhclient from within the TinyCore Linux guest, it just keeps sending Discover packets without ever being assigned an IP.

To confirm, I ran tcpdump -i any -nvv 'port 67 or port 68' on the compute node and saw that DHCP request packets were coming in. But the same request packets were not reaching the network node, because the same tcpdump command there produced no output.

As I was running VirtualBox to host the setup, I checked the network node's Promiscuous Mode settings and they were correct. I verified this by running tcpdump -i any -nvv 'port 67 or port 68' on the network node and doing dhclient from a separate VM: the DHCP packet was received at the network node. However, within the OpenStack setup, the TinyCore Linux guest sends requests to the compute node but they never reach the network node.
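
In case it helps with reproducing the check: the promiscuous-mode setting can also be applied from the VirtualBox host with VBoxManage. A sketch, where the VM name matches my network node and the adapter number (2 here) is an assumption about which NIC carries the tenant traffic:

    VBoxManage modifyvm "openstack-network-vm" --nicpromisc2 allow-all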

Can anyone please help me out on this?

Thanks.

2013-09-10 05:07:30 -0500 answered a question Installing Controller Node: novncproxy

I followed the question on Launchpad mentioned earlier and added a way to use nova-novncproxy without doing a force install. Here's a snippet of the answer I posted:

Doing the --force-overwrite does install nova-novncproxy, but further package installations cause errors like:

The following packages have unmet dependencies:
 nova-novncproxy : Depends: nova-common (= 2013.1-2) but 2013.1.2-2~bpo70+1 is to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

So I uninstalled nova-novncproxy by doing dpkg --remove nova-novncproxy

and, as I was setting up the controller node, I installed nova-consoleproxy instead, which AFAIK acts as a frontend proxy for novnc, spicehtml5 and other types of console viewers. That got installed, and along with it came /etc/init.d/nova-novncproxy.

nova-consoleproxy decides which console viewer to use depending on the value set in /etc/default/nova-consoleproxy. I updated it to have novnc and then doing:

ls /etc/init.d/nova* | while read line; do $line restart; done

started nova-novncproxy along with the usual set of services.
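
For reference, the edit to /etc/default/nova-consoleproxy amounts to something like the following; the variable name is my recollection of the Debian packaging, so check the comments in the file itself:

    # /etc/default/nova-consoleproxy
    NOVA_CONSOLE_PROXY_TYPE=novnc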

Hope this helps.

2013-09-10 02:20:13 -0500 answered a question Error while configuring Keystone

I was getting the same 'ProxyManager' is not defined error. It was because I had set an http_proxy in my .bashrc, and /usr/share/pyshared/requests/adapters.py was using ProxyManager without importing it. I updated

from urllib3.poolmanager import PoolManager, proxy_from_url

to

from urllib3.poolmanager import PoolManager, proxy_from_url, ProxyManager

and I stopped getting that error. But I then realized that since I was doing my setup locally on the controller, I didn't need to go through a proxy at all; when I unset the proxy, the ProxyManager error no longer appeared, irrespective of the update to adapters.py.
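
"Unsetting the proxy" here just means clearing the environment variable in the shell (and removing the export from .bashrc so it stays gone):

    unset http_proxy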

2013-08-02 05:44:36 -0500 answered a question nova-scheduler driver setting instance to ERROR state

For me the problem was that rabbitmq has a free-disk-space threshold: if your free disk space goes below it, rabbitmq will block connections to queues. You can check these limits via rabbitmq's management console, enabled by doing:

/usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management

service rabbitmq-server restart

http://your_controller:15672 (55672 on RabbitMQ releases older than 3.0; 5672 is the AMQP port itself, not the management UI)
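
The threshold itself can also be adjusted. A sketch, assuming the classic Erlang-term config file and the disk_free_limit key (value in bytes; check the docs for your RabbitMQ version):

    %% /etc/rabbitmq/rabbitmq.config
    [
      {rabbit, [
        %% block publishers when free disk drops below ~1 GB
        {disk_free_limit, 1000000000}
      ]}
    ].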

2013-07-23 01:46:56 -0500 received badge  Famous Question (source)
2013-07-15 13:29:54 -0500 received badge  Notable Question (source)
2013-07-15 13:29:54 -0500 received badge  Popular Question (source)
2013-07-09 04:35:06 -0500 asked a question creating isolated networks in grizzly with nova-network

Hi everyone,

I have the following grizzly setup:

  1. controller running - nova-api, nova-cert, nova-conductor, nova-consoleauth, nova-scheduler
  2. compute1 running - nova-compute, nova-network (with flatdhcp manager)
  3. compute2 running - nova-compute, nova-network (with flatdhcp manager)

I created a network using nova-manage like so

 nova-manage network create private --fixed_range_v4=10.1.1.129/25 --num_networks=1 --bridge=br100 --bridge_interface=eth0 --network_size=128 --multi_host=T

I also created host aggregates to delegate VMs to specific compute nodes:

nova aggregate-create compute-1

nova aggregate-add-host 1 compute-1

nova aggregate-set-metadata 1 compute1=true

nova flavor-create --is-public=true m1.compute1 6 512 0 1

nova flavor-key 6 set compute1=true

which means that if I boot up an instance with flavor 6, it should run on compute-1, because the flavor shares the key compute1 with the host aggregate compute-1.
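
As far as I understand, this matching only happens if the scheduler runs the aggregate extra-specs filter; a sketch of the relevant nova.conf line on the controller, using the Grizzly option name (treat the exact filter list as illustrative):

    scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RamFilter,ComputeFilter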

And a similar setup for compute-2:

nova aggregate-create compute-2

nova aggregate-add-host 2 compute-2

nova aggregate-set-metadata 2 compute2=true

nova flavor-create --is-public=true m1.compute2 7 512 0 1

nova flavor-key 7 set compute2=true

When I boot up an instance like so:

nova boot --flavor 6 --image cirros_img_1 cirros_inst_1

it means run the cirros image on compute-1, which OpenStack does successfully. Similarly, another instance can be spawned on compute-2 by doing:

nova boot --flavor 7 --image cirros_img_1 cirros_inst_1

nova list shows:

| ea6e421e-f0b3-4ffa-8c3b-ae70e2d23fa0 | cirros_inst_1 | ACTIVE | private=10.1.1.130 |
| 5ec9f21f-74db-4af3-830e-68e4de34001b | cirros_inst_2 | ACTIVE | private=10.1.1.133 |

I had thought that, as nova-network runs independently on each compute node, these instances would be given IPs through locally running dnsmasq processes and would be isolated. However, even though cirros_inst_1 and cirros_inst_2 are running on separate compute nodes, they are able to ping each other, which shouldn't be the case.

Ideally, I was aiming for the following

| ea6e421e-f0b3-4ffa-8c3b-ae70e2d23fa0 | cirros_inst_1 | ACTIVE | private=10.1.1.130 |
| 5ec9f21f-74db-4af3-830e-68e4de34001b | cirros_inst_2 | ACTIVE | private=10.1.1.130 |

Both instances running on separate compute nodes, having the same IP but in isolated networks. Now, I understand that with DHCP I cannot control which IPs they get - I will move to FlatManager once I at least get the isolation working. So even if I just get both VMs into separate networks, I'd be good to go.

Am I missing something in the config, or is there a gap in my understanding of how this works?

2013-07-08 01:45:59 -0500 answered a question Trying to install compute node inside ESXi

Hi Vpaul,

I was getting a similar error on Grizzly + Debian Wheezy. After looking around the config file, I found the following comment:

By default with the Debian package, the spicehtml5 console is the default. To enable the NoVNC mode, enable the switch below, disable SPICE in this nova.conf file as well (see far below), then edit /etc/default/nova-consoleproxy to switch to NoVNC, shutdown the SPICE with /etc/init.d/nova-spicehtml5proxy stop, and finally start nova-novncproxy. Do not forget to restart Nova daemons and restart your VMs if you want to use NoVNC from now on (VMs video card needs to be attached to a console type, and they can accept only one video card at a time).

The last line was important. I had vnc_enabled set to true, and the [spice] section in nova.conf also had its config vars on. I commented out the config in the [spice] section, and then my VM booted up, as only VNC was on. So the gist is that you should have only one of them enabled.
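
A sketch of the resulting nova.conf state; the option names are from memory of the Grizzly packaging (vnc_enabled in [DEFAULT], enabled/agent_enabled under [spice]), so verify against your own file:

    [DEFAULT]
    vnc_enabled = true

    [spice]
    # enabled = true          # commented out so that only VNC is active
    # agent_enabled = true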

2013-06-13 04:33:07 -0500 received badge  Famous Question (source)
2013-06-03 05:07:51 -0500 received badge  Editor (source)
2013-06-03 05:07:14 -0500 answered a question Adding port forwarding rule between host and VMs

I am still searching for a proper way to do this. A quick hack for doing it manually, without losing your configuration on nova-network restart, is the following. I wouldn't recommend doing this in production environments, though:

Infra: an OpenStack Folsom host (host1) with one VM (instance vm_inst1) running from the image linux_1

Requirement: forward any traffic coming in on port 1234 of host1 to port 8000 of vm_inst1

Steps:

  1. Create a security rule which opens up port 8000 for vm_inst1

    nova secgroup-create secgroup1 'test security group'
    nova secgroup-add-rule secgroup1 tcp 8000 8000 0.0.0.0/0
    nova boot --flavor 1 --image linux_1 --security_groups secgroup1 vm_inst1

  2. Once the VM gets its IP, flush out its internal iptables rules to remove any confusion. Hence, inside vm_inst1:

    iptables -F

    python -m SimpleHTTPServer

  3. In nova/network/linux_net.py, locate the function metadata_forward

  4. Add the following snippet to it:

    # DNAT traffic arriving on the host (FLAGS.metadata_host) at port 1234
    # to port 8000 of vm_inst1
    iptables_manager.ipv4['nat'].add_rule('PREROUTING',
                                          '-s 0.0.0.0/0 -d %s/32 '
                                          '-p tcp -m tcp --dport %s -j DNAT '
                                          '--to-destination %s:%s' %
                                          (FLAGS.metadata_host,
                                           '1234',
                                           '<vm_inst1_ip>',
                                           '8000'))
      
  5. Restart nova-network

  6. This adds a nova-network-PREROUTING rule which does the requisite forwarding (a quick check for this is sketched after these steps).

  7. Test it by going to a different host (other than host1), running the following, and then typing "this" into the telnet session:

    telnet host1_ip 1234

  8. You should see "this" appear in the python session in the VM.
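
To confirm that the DNAT rule from step 6 actually landed, you can list the chain directly:

    iptables -t nat -L nova-network-PREROUTING -n -v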

But I am looking for a cleaner way to do this. Is there a plugin-based approach, or is there a way to execute arbitrary hook programs upon events like VM startup and shutdown?

I did find references to https://wiki.openstack.org/wiki/Novaplugin - I just want to know: is that the right way to go about it? If any of the stackers have used it, is there any reference code that can be looked at? There is filesystem code available on the Novaplugin page, but I am looking for a simpler example.

Hope this insight helps anyone else looking to do port forwarding.

UPDATE: From what I read on https://www.redhat.com/archives/rhos-list/2012-November/msg00022.html - there isn't a way to hook in your own code, but you can subscribe to notifications of VM events.

2013-06-02 04:10:26 -0500 received badge  Supporter (source)
2013-06-02 04:10:22 -0500 received badge  Notable Question (source)
2013-06-02 04:09:25 -0500 received badge  Scholar (source)
2013-05-29 17:47:20 -0500 received badge  Student (source)
2013-05-29 17:40:43 -0500 received badge  Popular Question (source)
2013-05-29 04:37:17 -0500 commented answer Adding port forwarding rule between host and VMs

That would be good to have for filtering. But I am looking for a way to do port forwarding, wherein if a request comes in on, say, port 8111 of the host, it is forwarded to port 80 of VM1; if a request comes in on port 8112, it is forwarded to port 8080 of VM2; and so on.

2013-05-29 04:35:59 -0500 answered a question Adding port forwarding rule between host and VMs

Hi Rakrup,

Thanks for the update. I had read about security groups earlier, but they only help in opening up service ports on the VM - say, if I wanted to open port 8000 on the VM, I'd create a security group with port range 8000-8000 and allow access from all IPs.

That would be good to have for filtering. But I am looking for a way to do port forwarding, wherein if a request comes in on, say, port 8111 of the host, it is forwarded to port 80 of VM1; if a request comes in on port 8112, it is forwarded to port 8080 of VM2; and so on.

Using security groups won't help me with port forwarding. Please correct me if my understanding is wrong.

2013-05-29 01:51:57 -0500 asked a question Adding port forwarding rule between host and VMs

Hi everyone,

I have an OpenStack setup with a RHEL VM running on it. The docs were really useful in guiding me through the setup. I want to run a service on port 8000 of the VM and want that port forwarded from the host to the guest. For example, if a request comes in on, say, port 8111 of the host, it should be forwarded to port 80 of VM1; if a request comes in on port 8112, it should be forwarded to port 8080 of VM2; and so on.

Normally, I'd add a PREROUTING rule in the nat table to accomplish this. But I saw that iptables already contains some OpenStack-specific rules and chains.
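
For context, this is the kind of plain-iptables rule I would normally use, with a made-up guest IP:

    # forward host port 8111 to port 80 on VM1 (10.0.0.2 is a placeholder guest IP)
    iptables -t nat -A PREROUTING -p tcp --dport 8111 -j DNAT --to-destination 10.0.0.2:80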

What I wanted to know is: for port forwarding, what is the right way to add an iptables rule? Does it have to be done via some command-line utility, or should I use an existing API to add the rule?