Short answer: the GRE mesh is created because, at start-up, the OVS agent on a compute node sends RPCs to the controller to solicit tunnel notifications. As a result, the controller creates a tunnel to the compute node (using the compute node's IP address as the endpoint) and tells all the other compute nodes about it, causing them to create tunnels too. The compute node is also sent the list of all the tunnels it needs to create. The agent then sits in a loop forever, listening for tunnel notifications about other compute nodes joining and leaving the cluster, and creating (or destroying) tunnels to those nodes accordingly. Simple algorithm, actually.
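The loop described above can be sketched in a few lines of Python. This is purely illustrative — the class and method names (TunnelMesh, startup_sync, handle_tunnel_update) are mine, not the actual neutron OVS agent code — but it captures the create-on-join / destroy-on-leave behavior:

```python
# Illustrative sketch of the tunnel-sync behavior described above.
# Names here are hypothetical, not the real neutron OVS agent API.

class TunnelMesh:
    def __init__(self, my_ip):
        self.my_ip = my_ip
        self.tunnels = set()  # remote endpoints we hold a GRE tunnel to

    def startup_sync(self, known_endpoints):
        # At start-up the agent learns the current endpoint list from
        # the controller and creates a tunnel to each one.
        for ip in known_endpoints:
            self.handle_tunnel_update(ip, alive=True)

    def handle_tunnel_update(self, remote_ip, alive):
        # One tunnel notification: a node joining creates a tunnel,
        # a node leaving destroys it.
        if remote_ip == self.my_ip:
            return  # never tunnel to ourselves
        if alive:
            self.tunnels.add(remote_ip)
        else:
            self.tunnels.discard(remote_ip)

mesh = TunnelMesh("10.1.0.11")
mesh.startup_sync(["10.1.0.10", "10.1.0.12", "10.1.0.11"])
mesh.handle_tunnel_update("10.1.0.12", alive=False)
print(sorted(mesh.tunnels))  # → ['10.1.0.10']
```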

Here's my basic flow on the command line for starting and pinging nodes in a GRE-meshed network. I'm not a horizon user (ironic, since I do a lot of Django development, but oh well :-) but perhaps there is an equivalent in horizon. I don't think there is anything keeping you from using both horizon and the command line to experiment with things before you go looking for a horizon-only solution. The key takeaways are the need to configure security groups to allow pings, and to issue the pings from within a namespace.

On the controller:

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

The above enables both ping and ssh. I add both because I ssh into VMs a lot, but for ping alone the icmp rule is all you need.
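To make it concrete why the icmp rule matters, here is a toy model of security-group matching. It is only an illustration of the drop-by-default behavior (this is not how neutron actually implements filtering): ingress traffic that matches no rule is dropped, so without the icmp rule the echo requests never reach the VM.

```python
# Toy model of security-group rule matching. Illustrative only —
# not neutron's real implementation.

def allowed(rules, protocol, port=None):
    for r in rules:
        if r["protocol"] != protocol:
            continue
        if protocol == "icmp":
            return True  # -1/-1 in the rule means all ICMP types
        if r["from_port"] <= port <= r["to_port"]:
            return True
    return False  # default behavior: unmatched ingress is dropped

# The two rules added above:
rules = [
    {"protocol": "tcp", "from_port": 22, "to_port": 22},
    {"protocol": "icmp", "from_port": -1, "to_port": -1},
]
print(allowed(rules, "tcp", 22))   # ssh
print(allowed(rules, "icmp"))      # ping
print(allowed(rules, "tcp", 80))   # anything unmatched is dropped
```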

The rest of the flow:

$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| 651d944f-a532-43ee-b938-2682a6c66159 | cirros-0.3.1-x86_64-uec         | ACTIVE |        |
| 29290504-b981-49c1-8641-0ba14643c73a | cirros-0.3.1-x86_64-uec-kernel  | ACTIVE |        |
| b972d37e-e09d-42e6-8967-511f62ce064b | cirros-0.3.1-x86_64-uec-ramdisk | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+

$  neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| ee03906b-e70e-45de-9a7b-6dbbc5b38916 | private | 4e5838f4-9c51-4301-9d55-896fcdb5ad47 10.0.0.0/20 |
+--------------------------------------+---------+--------------------------------------------------+

$ nova boot --image 651d944f-a532-43ee-b938-2682a6c66159 --flavor 1 --nic net-id=ee03906b-e70e-45de-9a7b-6dbbc5b38916 test6

<wait here for 10 seconds to let the VM spin up and get an IP addr>

$ nova list
+--------------------------------------+-------+--------+------------+-------------+------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks         |
+--------------------------------------+-------+--------+------------+-------------+------------------+
+--------------------------------------+-------+--------+------------+-------------+------------------+
| e87f3686-82ed-4048-9c4f-ff3bcfc578f6 | test1 | ACTIVE | None       | Running     | private=10.0.0.2 |
| faed2ef9-c6a4-4b4b-87cd-e027e1d81f57 | test2 | ACTIVE | None       | Running     | private=10.0.0.3 |
| 2642d183-d687-42f4-918e-c42953a0723c | test3 | ACTIVE | None       | Running     | private=10.0.0.4 |
| f9763c59-fca7-4e79-962b-6d8b592e22c5 | test4 | ACTIVE | None       | Running     | private=10.0.0.5 |
| 41ae9116-47cd-4a93-a1de-0164e386bad3 | test5 | ACTIVE | None       | Running     | private=10.0.0.6 |
| 3d86d3ed-cfd5-4fb5-8629-4f4504516d1b | test6 | ACTIVE | None       | Running     | private=10.0.0.7 |
+--------------------------------------+-------+--------+------------+-------------+------------------+

$ ip netns list
qdhcp-ee03906b-e70e-45de-9a7b-6dbbc5b38916

$ sudo ip netns exec qdhcp-ee03906b-e70e-45de-9a7b-6dbbc5b38916 bash
# ping 10.0.0.5
64 bytes from 10.0.0.5: icmp_req=1 ttl=64 time=1.77 ms
64 bytes from 10.0.0.5: icmp_req=2 ttl=64 time=0.527 ms
64 bytes from 10.0.0.5: icmp_req=3 ttl=64 time=0.908 ms
64 bytes from 10.0.0.5: icmp_req=4 ttl=64 time=0.504 ms
64 bytes from 10.0.0.5: icmp_req=5 ttl=64 time=0.870 ms
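If you want to script this check rather than poke around interactively, one option is to scrape the fixed IPs out of the Networks column of the `nova list` output. The helper below is a hypothetical sketch of mine (a real script would more likely query the API); each address it returns could then be pinged non-interactively with `sudo ip netns exec qdhcp-<net-id> ping -c 3 <addr>` instead of spawning a bash shell in the namespace.

```python
import re

# Hypothetical helper: pull each VM's fixed IP out of the Networks
# column of `nova list` output. Assumes the network is named "private"
# as in the listing above.
def fixed_ips(nova_list_output):
    return re.findall(r"private=(\d+\.\d+\.\d+\.\d+)", nova_list_output)

sample = """\
| e87f3686-82ed-4048-9c4f-ff3bcfc578f6 | test1 | ACTIVE | None | Running | private=10.0.0.2 |
| faed2ef9-c6a4-4b4b-87cd-e027e1d81f57 | test2 | ACTIVE | None | Running | private=10.0.0.3 |
"""
print(fixed_ips(sample))  # → ['10.0.0.2', '10.0.0.3']
```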

Hope this helps in some way.

syd