
mb-s's profile - activity

2019-04-14 17:44:16 -0500 received badge  Famous Question (source)
2017-03-01 11:33:23 -0500 received badge  Famous Question (source)
2016-06-11 02:08:34 -0500 received badge  Popular Question (source)
2015-08-14 04:01:10 -0500 received badge  Notable Question (source)
2015-08-07 03:30:18 -0500 received badge  Popular Question (source)
2015-08-07 03:30:18 -0500 received badge  Notable Question (source)
2015-02-16 23:17:01 -0500 received badge  Popular Question (source)
2012-09-10 19:37:16 -0500 answered a question OVS agent failing

Thanks dan wendlandt, that solved my question.

2012-08-09 19:38:58 -0500 asked a question OVS agent failing

This question has been asked before, but I am still confused as to what is going wrong in my setup. Using the latest devstack and running Quantum with the OVS plugin (with or without the tunneling agent enabled), the agent fails when it tries to run ovs-ofctl:

2012-08-09 15:27:09 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'ovs-ofctl', 'del-flows', 'br-int'] Exit code: 1 Stdout: '' Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'

I can see (and manipulate) the bridge by using ovs-vsctl but not by using ovs-ofctl.

$ sudo ovs-vsctl show
0ec0cf06-c7f9-414f-806a-753197ba8e1b
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.2.0"

$ sudo ovs-ofctl show br-int
ovs-ofctl: br-int is not a bridge or a socket

Any idea what I may be missing here? Thanks.
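Edit: in case it helps others who hit this, my working theory (an assumption, not a confirmed diagnosis): ovs-vsctl only talks to ovsdb-server, while ovs-ofctl talks OpenFlow to ovs-vswitchd, so a bridge that is visible to the first tool but not the second suggests ovs-vswitchd is not running (or the kernel module is not loaded) even though the bridge exists in the database. Roughly what I would check (daemon and module names can differ by distro and OVS version):

$ ps aux | grep [o]vs-vswitchd                  # is the switch daemon alive?
$ lsmod | grep openvswitch                      # module loaded? (openvswitch_mod on older releases)
$ sudo /etc/init.d/openvswitch-switch restart   # init script name varies by distro
$ sudo ovs-ofctl show br-int                    # retry after the restart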

2012-05-15 08:50:18 -0500 answered a question Two networks, one is reachable, the other is not. Why?

Looks similar to the problem I reported a while back: https://answers.launchpad.net/quantum/+question/190861

2012-05-15 08:47:05 -0500 answered a question Quantum OVS multiple networks

Looking back at this problem... I don't know if this is the reason, but here is what I have noticed from looking at the entries in the nat table. Let's say I have a 10.0.0.0/24 public network. After I create a private network, say 10.6.0.0/24, here are a couple of entries from the nat table:

SNAT all -- 10.6.0.0/24 anywhere to:9.2.156.126
SNAT all -- 10.0.0.0/24 anywhere to:9.2.156.126

This is after creating the new 10.6.0.0 network. Then I add yet another network (10.8.0.0/24), and here are the table entries:

SNAT all -- 10.8.0.0/24 anywhere to:9.2.156.126
SNAT all -- 10.0.0.0/24 anywhere to:9.2.156.126

As you can see, the entry for the public network 10.0.0.0 remains, but the entry for the 10.6 network is replaced by the entry for the newer network. Is this how it should be?
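To watch this happen it helps to list the SNAT rules before and after each network create; nothing nova-specific, just plain iptables:

$ sudo iptables -t nat -S | grep SNAT           # each SNAT rule with the chain it lives in
$ sudo iptables -t nat -L -n --line-numbers     # the same rules with per-chain indices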

2012-03-23 19:20:22 -0500 answered a question Quantum OVS multiple networks

Just wondering if you found any clues as to what may be going on here? Thanks.

2012-03-20 22:17:53 -0500 answered a question Quantum OVS multiple networks

Sure.

On Server A:

mb@sysnet45:~$ sudo ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=411937.037s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=3 actions=drop
 cookie=0x0, duration=411934.967s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=411334.842s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=7 actions=drop
 cookie=0x0, duration=412155.689s, table=0, n_packets=882468, n_bytes=111928515, priority=1 actions=NORMAL

On Server B:

mb@sysnet43:~$ sudo ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=411739.341s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=370942.723s, table=0, n_packets=0, n_bytes=0, priority=2,in_port=5 actions=drop
 cookie=0x0, duration=412054.101s, table=0, n_packets=582126, n_bytes=71555265, priority=1 actions=NORMAL
mb@sysnet43:~$

2012-03-20 19:29:05 -0500 answered a question Quantum OVS multiple networks

Sure.

Here is the new data from Server A (9.2.156.126, running all of the nova services):

script started on Tue 20 Mar 2012 03:20:17 PM EDT
mb@sysnet45:~$ sudo iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
nova-compute-PREROUTING all -- 0.0.0.0/0 0.0.0.0/0
nova-network-PREROUTING all -- 0.0.0.0/0 0.0.0.0/0
nova-manage-PREROUTING all -- 0.0.0.0/0 0.0.0.0/0
nova-api-PREROUTING all -- 0.0.0.0/0 0.0.0.0/0

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
nova-compute-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
nova-network-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
nova-manage-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
nova-api-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
nova-compute-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
nova-network-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
nova-manage-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
nova-api-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
nova-postrouting-bottom all -- 0.0.0.0/0 0.0.0.0/0
MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24

Chain nova-api-OUTPUT (1 references)
target prot opt source destination

Chain nova-api-POSTROUTING (1 references)
target prot opt source destination

Chain nova-api-PREROUTING (1 references)
target prot opt source destination

Chain nova-api-float-snat (1 references)
target prot opt source destination

Chain nova-api-snat (1 references)
target prot opt source destination
nova-api-float-snat all -- 0.0.0.0/0 0.0.0.0/0

Chain nova-compute-OUTPUT (1 references)
target prot opt source destination

Chain nova-compute-POSTROUTING (1 references)
target prot opt source destination

Chain nova-compute-PREROUTING (1 references)
target prot opt source destination

Chain nova-compute-float-snat (1 references)
target prot opt source destination

Chain nova-compute-snat (1 references)
target prot opt source destination
nova-compute-float-snat all -- 0.0.0.0/0 0.0.0.0/0

Chain nova-manage-OUTPUT (1 references)
target prot opt source destination

Chain nova-manage-POSTROUTING (1 references)
target prot opt source destination

Chain nova-manage-PREROUTING (1 references)
target prot opt source destination

Chain nova-manage-float-snat (1 references)
target prot opt source destination

Chain nova-manage-snat (1 references)
target prot opt source destination
nova-manage-float-snat all -- 0.0.0.0/0 0.0.0.0/0
SNAT all -- 9.9.9.0/24 0.0.0.0/0 to:9.2.156.126

Chain nova-network-OUTPUT (1 references)
target prot opt source destination

Chain nova-network-POSTROUTING (1 references)
target prot opt source destination
ACCEPT all -- 10.0.0.0/24 10.128.0.0/24
ACCEPT all -- 10.0.0.0/24 10.0.0 ...


2012-03-20 16:14:21 -0500 answered a question Quantum OVS multiple networks

Yes, 9.2.156.126 is the public IP of Server A (eth0). Here is the output of iptables for this server followed by that of Server B (9.2.156.124):

mb@sysnet45:~$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
nova-compute-INPUT all -- anywhere anywhere
nova-network-INPUT all -- anywhere anywhere
nova-manage-INPUT all -- anywhere anywhere
nova-api-INPUT all -- anywhere anywhere
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:bootps
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
ACCEPT gre -- anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination
nova-filter-top all -- anywhere anywhere
nova-compute-FORWARD all -- anywhere anywhere
nova-network-FORWARD all -- anywhere anywhere
nova-manage-FORWARD all -- anywhere anywhere
nova-api-FORWARD all -- anywhere anywhere
ACCEPT all -- anywhere 192.168.122.0/24 state RELATED,ESTABLISHED
ACCEPT all -- 192.168.122.0/24 anywhere
ACCEPT all -- anywhere anywhere
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
nova-filter-top all -- anywhere anywhere
nova-compute-OUTPUT all -- anywhere anywhere
nova-network-OUTPUT all -- anywhere anywhere
nova-manage-OUTPUT all -- anywhere anywhere
nova-api-OUTPUT all -- anywhere anywhere

Chain nova-api-FORWARD (1 references)
target prot opt source destination

Chain nova-api-INPUT (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere sysnet45.watson.ibm.com tcp dpt:8775

Chain nova-api-OUTPUT (1 references)
target prot opt source destination

Chain nova-api-local (1 references)
target prot opt source destination

Chain nova-compute-FORWARD (1 references)
target prot opt source destination

Chain nova-compute-INPUT (1 references)
target prot opt source destination

Chain nova-compute-OUTPUT (1 references)
target prot opt source destination

Chain nova-compute-inst-12 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere state INVALID
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
nova-compute-provider all -- anywhere anywhere
ACCEPT udp -- reserved-9-9-9-1.atlanta.ibm.com anywhere udp spt:bootps dpt:bootpc
ACCEPT all -- 9.9.9.0/24 anywhere
nova-compute-sg-fallback all -- anywhere anywhere

Chain nova-compute-inst-3 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere state INVALID
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
nova-compute-provider all -- anywhere anywhere
ACCEPT udp -- sysnet45.local anywhere udp spt:bootps dpt:bootpc
ACCEPT all -- 9.9.8.0/24 anywhere
nova-compute-sg-fallback all -- anywhere anywhere

Chain nova-compute-inst-7 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere state INVALID
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
nova-compute-provider all -- anywhere anywhere
ACCEPT udp -- reserved-9-9-9-1.atlanta.ibm.com anywhere udp spt:bootps dpt:bootpc
ACCEPT all -- 9.9.9.0/24 anywhere
nova-compute-sg-fallback all -- anywhere anywhere

Chain nova-compute-inst-9 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere state INVALID
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
nova-compute-provider all -- anywhere anywhere
ACCEPT udp -- sysnet45.local anywhere udp spt:bootps dpt:bootpc
ACCEPT all -- 9.9.8.0/24 anywhere
nova-compute-sg-fallback all -- anywhere anywhere

Chain nova-compute-local (1 references)
target prot opt source destination
nova-compute-inst-3 all -- anywhere 9.9.8.2
nova-compute-inst-7 all -- anywhere reserved-9-9-9-2.atlanta.ibm.com
nova-compute-inst-9 all -- anywhere 9.9.8.5
nova-compute-inst-12 all -- anywhere reserved-9-9-9-4.atlanta.ibm.com

Chain ... (more)

2012-03-19 17:53:32 -0500 answered a question Quantum OVS multiple networks

Looks like things do not get past the 9.9.9.0 gateway.

Below I am copying the output for both gateways, for two cases: first where the ping is not successful, then where it is. (I can attach the complete output of these files if I can figure out how to do it here.)

Case 1 - Here is the case where 9.9.9.2 on Server A canNOT ping 9.9.8.5 on Server A:

on the 9.9.9.0 gateway:

13:44:29.433031 IP (tos 0xc0, ttl 64, id 7316, offset 0, flags [none], proto ICMP (1), length 367)
    9.9.9.2 > 9.2.156.126: ICMP 9.9.9.2 udp port 68 unreachable, length 347
    IP (tos 0x0, ttl 64, id 58981, offset 0, flags [none], proto UDP (17), length 339)
    9.2.156.126.67 > 9.9.9.2.68: [udp sum ok] BOOTP/DHCP, Reply, length 311, xid 0x7a24b831, Flags [none] (0x0000)
      Client-IP 9.9.9.2
      Your-IP 9.9.9.2
      Server-IP 9.9.9.1
      Client-Ethernet-Address 02:16:3e:22:06:15
      Vendor-rfc1048 Extensions
        Magic Cookie 0x63825363
        DHCP-Message Option 53, length 1: ACK
        Server-ID Option 54, length 4: 9.9.9.1
        Lease-Time Option 51, length 4: 120
        RN Option 58, length 4: 56
        RB Option 59, length 4: 101
        Subnet-Mask Option 1, length 4: 255.255.255.0
        BR Option 28, length 4: 9.9.9.255
        Default-Gateway Option 3, length 4: 9.9.9.1
        Domain-Name-Server Option 6, length 4: 9.9.9.1
        Domain-Name Option 15, length 9: "novalocal"
        Hostname Option 12, length 6: "host-9"
13:44:29.745432 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
    9.9.9.2 > 9.9.8.5: ICMP echo request, id 26981, seq 13, length 64
13:44:30.746177 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
    9.9.9.2 > 9.9.8.5: ICMP echo request, id 26981, seq 14, length 64
13:44:31.746438 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
    9.9.9.2 > 9.9.8.5: ICMP echo request, id 26981, seq 15, length 64

on the 9.9.8.0 gateway:

nothing here

--------------------------------------------------------------------------------------------------------------------

Case 2 - Here is the case where 9.9.9.2 on Server A can ping 9.9.8.4 on Server B:

on the 9.9.9.0 gateway:

13:39:03.856916 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
    9.9.9.2 > 9.9.8.4: ICMP echo request, id 17765, seq 96, length 64
13:39:03.857260 IP (tos 0x0, ttl 63, id 6921, offset 0, flags [none], proto ICMP (1), length 84)
    9.9.8.4 > 9.9.9.2: ICMP echo reply, id 17765, seq 96, length 64
... (more)
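For reference, the captures above were taken on the two gateway interfaces, along these lines (the interface names are the ones from my route table; drop any filter to see the DHCP traffic as well):

$ sudo tcpdump -n -vv -i gw-e84cecf1-06   # the 9.9.9.0 gateway
$ sudo tcpdump -n -vv -i gw-f25ffc5e-d3   # the 9.9.8.0 gateway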

2012-03-19 14:27:50 -0500 answered a question Quantum OVS multiple networks

Thanks for the response. Server A and Server B are connected to each other through their eth1 interfaces, which go to a single switch with no other connections. (eth0 on both servers is connected to our internal network and the Internet.) I am not sure exactly what this particular switch does, but I would think it is not the cause of our issues, because traffic does pass through it for some of the ping operations.

There are no other interfaces on the OVS bridge, as listed below.

Please let me know if there are other pieces of information that can be helpful.

Thanks,

-Mohammad

On Server A: The three gateways are for my three networks, and the tap interfaces are for the four VMs currently running.

mb@sysnet45:~$ sudo ovs-vsctl list-br
br-int
mb@sysnet45:~$ sudo ovs-vsctl list-ports br-int
eth0
gw-9ad6270b-f7
gw-e84cecf1-06
gw-f25ffc5e-d3
tapaf08a421-b7
tapc9e6c971-b4
tapd56b38e0-66
tapf059b70a-ec

On Server B: the tap interfaces are for the five VMs currently running.

mb@sysnet43:~$ sudo ovs-vsctl list-br
br-int
mb@sysnet43:~$ sudo ovs-vsctl list-ports br-int
eth0
tap062e466f-c7
tap0abbbcb6-72
tap2d7c1378-fe
tap92d4d472-95
tap961b958b-d1

2012-03-16 21:30:03 -0500 answered a question multi-nic VMs come up with one interface

Thank you very much.

2012-03-16 15:55:10 -0500 asked a question multi-nic VMs come up with one interface

When I create a second network for my project and then create a VM, the dashboard shows that two IP addresses have been assigned to the VM, one from each network the project can see. I can also see that two tap devices are created.

However, when I log into the VM, ifconfig shows that only one interface (rather than two) has been created. Has there been a change, or am I missing something again? Thanks.
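In case it helps narrow things down, this is how I would check inside the guest whether the second NIC exists but was simply never configured (assuming a Linux guest with ip and dhclient available; the interface name eth1 is a guess):

$ ip link show              # the kernel lists eth1 even when ifconfig does not show it
$ sudo ip link set eth1 up  # bring the second NIC up
$ sudo dhclient eth1        # and ask for its address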

2012-03-16 15:44:18 -0500 answered a question To isolate or not to isolate

Closing this and reopening another question with clearer description of the problem.

2012-03-16 15:43:03 -0500 asked a question Quantum OVS multiple networks

Here is a question that I have not been able to find an answer to. I will try to make the problem as clear as possible:

I have two servers: Server A and Server B. Server A, installed using devstack, runs all the services including Quantum with OVS. Server B is a compute node only, with the Quantum OVS agent.

I create two networks: one for the Admin project (9.9.9.0/24) and one for the Demo project (9.9.8.0/24).

Then I start creating VMs for each project, which get created alternately on Server A and Server B. After creating a few VMs I have the following:

Server A VMs: admin1, admin3, admin5, demo1, demo3, demo5 (all having correct IPs like 9.9.9.2, 9.9.8.2, etc.)
Server B VMs: admin2, admin4, admin6, demo2, demo4, demo6 (all having correct IPs like 9.9.9.3, 9.9.8.3, etc.)

Now from VMs on each network I can ping other VMs on the same network. So far so good.

Then I try pinging nodes on the other network, and I notice the following:

From each VM on Server A I can also ping the VMs on Server B which are on the other network. For example from admin1, I can ping demo2, demo4, and demo6 even though I cannot ping demo1, demo3, and demo5.

Why? What am I doing wrong?

This is the traceroute from 9.9.9.2 on Server A to 9.9.8.4 on Server B. The ping works while it should not.

% traceroute to 9.9.8.4 (9.9.8.4), 30 hops max, 46 byte packets
 1 reserved-9-9-9-1.atlanta.ibm.com (9.9.9.1) 0.477 ms 0.183 ms 0.188 ms
 2 9.9.8.4 (9.9.8.4) 0.988 ms 0.596 ms 0.522 ms

This is the traceroute from 9.9.9.2 on Server A to 9.9.8.5 on Server A. The ping does not work, as expected:

% traceroute to 9.9.8.5 (9.9.8.5), 30 hops max, 46 byte packets
 1 reserved-9-9-9-1.atlanta.ibm.com (9.9.9.1) 0.444 ms 0.239 ms 0.223 ms
 2 * * *
 3 * * *

Here is the route table on Server A:

mb@sysnet45:~$ route
Kernel IP routing table
Destination     Gateway      Genmask         Flags Metric Ref Use Iface
default         9.2.156.65   0.0.0.0         UG    0      0   0   eth1
9.2.156.64      *            255.255.255.192 U     0      0   0   eth1
9.9.8.0         *            255.255.255.0   U     0      0   0   gw-f25ffc5e-d3
9.9.9.0         *            255.255.255.0   U     0      0   0   gw-e84cecf1-06
10.0.0.0        *            255.255.255.0   U     0      0   0   gw-9ad6270b-f7
192.168.1.0     *            255.255.255.0   U     0      0   0   eth0
192.168.122.0   *            255.255.255.0   U     0      0   0   virbr0

Here is the ... (more)
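Since the failing path never leaves Server A, the host's own forwarding setup seems like the place to look; this is what I plan to check next (plain Linux knobs, nothing Quantum-specific):

$ sysctl net.ipv4.ip_forward      # is the host forwarding between its gateway interfaces at all?
$ sudo iptables -L FORWARD -v -n  # do any FORWARD rules match (and count) the cross-network traffic?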

2012-03-08 18:42:21 -0500 answered a question To isolate or not to isolate

No, the problem is still there. The thing is that there is not much one can gather from tcpdump as I run it; that's why I asked how you want me to run it. I can also collect traceroute output on each VM. Is that of any value?

2012-02-27 23:08:28 -0500 answered a question To isolate or not to isolate

Thanks for the quick response. In my email I mentioned 10.0.0.2, but I tried other VMs too. I removed the 10.0.0.2 VM and created new ones (10.0.0.6 and 10.0.0.7); I even stopped nova and Quantum altogether and restarted from scratch. All cases lead to the same pattern. Regarding tcpdump, where do you want me to collect the data from? From the main nodes? I am not sure how to collect the details you mentioned above. Thanks again for your help.

2012-02-27 21:47:21 -0500 answered a question To isolate or not to isolate

Thanks for the response. To verify the weird behavior with respect to pinging VMs, I followed your suggestion. I created a network using the project-id of my user, which I can get from the nova "instances" table. I created two VMs on 10.0.0.0, which is my shared public network, and two VMs on my 6.6.6.0 private network. I can verify that on each node of my two-node system I have one VM from each network. I have repeated this with different networks and several iterations. To my surprise, I always see the same pattern as reported earlier:

from 6.6.6.2 (on node 1) I can ping 6.6.6.3 (on node 2) and 10.0.0.3 (on node 2), but not 10.0.0.2 (on node 1)
from 6.6.6.3 I can ping 6.6.6.2 and 10.0.0.3, but not 10.0.0.2
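To make sure I was not fooling myself, I repeated the test with a small loop run from each VM (a sketch; the IPs are the ones from the runs above):

# run from each source VM; prints one line per target
for ip in 6.6.6.2 6.6.6.3 10.0.0.2 10.0.0.3; do
  ping -c 2 -W 2 $ip >/dev/null 2>&1 && echo "$ip reachable" || echo "$ip NOT reachable"
done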

2012-02-27 01:05:42 -0500 answered a question To isolate or not to isolate

Looking at the newly added (or soon to be added) code for OVS, in particular the tunneling switch support, the above may not be very strange. To get inter-hypervisor connectivity, should we wait for the new code? Thanks.

2012-02-24 22:54:20 -0500 asked a question To isolate or not to isolate

Here are several questions (seven to be precise) regarding my Quantum setup. Any help is much appreciated.

I have a two-node system running OpenStack. I install the first node using devstack with all the services specified there by default, plus Quantum with openvswitch (essentially everything but Swift). On the second node I only start n-cpu and q-agt, using devstack with a few minor changes to that script. I can create VMs on both machines using the Dashboard. Everything looks good. On both machines I connect eth0 to my br-int switch. (I have eth1 for the outside world, but I don't need that for my current experiments.) I have the 10.0.0.0 network for my VMs, as specified in devstack.

Then I go about creating a second network. I do that from the cli interface.

Question 0: I know Dan explained once why it is better not to have a way to create a network from the dashboard, but I am still confused as to why this is the case.

Having no other option, I try the CLI, but I cannot find the tenant id using the following instruction: keystone-manage tenant list. Nothing is printed out when this is executed.

Question 1: Why is that, even though I have the keystone service running?
Question 2: Are there other ways of finding the tenant id? I do not seem to find it in the nova tables in the database.
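(For what it is worth, two other places one might find tenant ids; both are assumptions based on devstack's defaults rather than something I have verified here: keystone's own database, or the keystone client once credentials are exported.)

$ mysql -u root -p -e 'SELECT id, name FROM keystone.tenant;'   # table is named project on later releases
$ keystone tenant-list                                          # needs OS_USERNAME, OS_PASSWORD, etc. set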

So I try the following command for creating a new network:

nova-manage network create --label=secondNet --fixed_range_v4=8.8.8.0/24

I get a second network and create a bunch of VMs, some using only the 10.0.0.0 network and some using only the 8.8.8.0 network. (I have changed the GUI so I can pick the network at Launch.) Here is what I have:

Node 1:

running all services
has two gw-* interfaces (one for each network) and two tap-* interfaces for two VMs
The IP address of VM 1: 10.0.0.2
The IP address of VM 2: 8.8.8.2

mb:~$ route
Kernel IP routing table
Destination     Gateway      Genmask         Flags Metric Ref Use Iface
default         9.2.xxx.65   0.0.0.0         UG    100    0   0   eth1
8.8.8.0         *            255.255.255.0   U     0      0   0   gw-c55299df-9c
9.2.xxx.64      *            255.255.255.192 U     0      0   0   eth1
10.0.0.0        *            255.255.255.0   U     0      0   0   gw-c21c442b-f3
192.168.1.0     *            255.255.255.0   U     0      0   0   eth0
192.168.122.0   *            255.255.255.0   U     0      0   0   virbr0

Node 2:

running n-cpu and q-agt
has two tap-* interfaces for two VMs
The IP address of VM 1: 10.0.0.3
The IP address of VM 2: 8.8.8.3

mb:~$ route
Kernel IP routing table
Destination     Gateway      Genmask   Flags Metric Ref Use Iface
default         9.2.xxx.65   0.0.0.0   UG    100    0   0   eth1
localnet        *            255 ... (more)

2012-02-24 00:10:27 -0500 answered a question Second node with quantum (and openvswitch)

Thanks Dan. The problem was due to a mismatch between the code on the two nodes. There is a new field in one of the tables which did not exist in the older version. Updating to the latest version resolved the problem.
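For completeness, roughly what the update amounted to on each node (the path is devstack's default checkout; adjust to your setup):

$ cd /opt/stack/nova && git pull   # bring both nodes to the same revision
# then restart the affected services (n-cpu and q-agt in the devstack screen session)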

2012-02-16 23:11:47 -0500 asked a question Second node with quantum (and openvswitch)

Trying to add a compute node to the system (installed using devstack and running all services + quantum + q-svc + q-agt): do I need to run anything other than n-cpu and q-agt? (One needs to have quantum enabled as a service to get quantum installed.)

VMs created on the first node (the controller with all services) get spawned without any problems. The Quantum agent does not see the addition of a new interface and therefore does not do anything. The log of nova-cpu on the second node shows the following. Even though it seems that the second node sees the AMQP server on the first node and gets connected to it, the network setup fails while going through the AMQP server.

...
2012-02-16 18:00:39,448 INFO nova.rpc.common [-] Connected to AMQP server on osserver:5672
2012-02-16 18:00:39,578 ERROR nova.compute.manager [-] Instance 564fa044-6544-45ad-8893-413d0f64d526 failed network setup
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/opt/stack/nova/nova/compute/manager.py", line 522, in _allocate_network
(nova.compute.manager): TRACE: requested_networks=requested_networks)
(nova.compute.manager): TRACE: File "/opt/stack/nova/nova/network/api.py", line 164, in allocate_for_instance
(nova.compute.manager): TRACE: 'args': args})
(nova.compute.manager): TRACE: File "/opt/stack/nova/nova/rpc/__init__.py", line 69, in call
(nova.compute.manager): TRACE: return _get_impl().call(context, topic, msg, timeout)
(nova.compute.manager): TRACE: File "/opt/stack/nova/nova/rpc/impl_kombu.py", line 612, in call
(nova.compute.manager): TRACE: return rpc_amqp.call(context, topic, msg, timeout, Connection.pool)
(nova.compute.manager): TRACE: File "/opt/stack/nova/nova/rpc/amqp.py", line 334, in call
(nova.compute.manager): TRACE: rv = list(rv)
(nova.compute.manager): TRACE: File "/opt/stack/nova/nova/rpc/amqp.py", line 302, in __iter__
(nova.compute.manager): TRACE: raise result
(nova.compute.manager): TRACE: RemoteError: Remote error: KeyError 'instance_type_id'
(nova.compute.manager): TRACE: [u'Traceback (most recent call last):\n', u' File "/opt/stack/nova/nova/rpc/amqp.py", line 249, in _process_data\n rval = node_func(context=ctxt, **node_args)\n', u' File "/opt/stack/nova/nova/network/quantum/manager.py", line 283, in allocate_for_instance\n instance_type_id = kwargs[\'instance_type_id\']\n', u"KeyError: 'instance_type_id'\n"]
(nova.compute.manager): TRACE: