How to enable gre/vxlan/vlan/flat network at one cloud at the same time ? [closed]

asked 2014-01-10 01:30:12 -0500 by chen-li

updated 2014-01-10 14:26:12 -0500 by smaffulli

I'm doing some functional testing based on Neutron with the ML2 plugin. I want my cloud to support all network types at once, so I can run comparison tests between the different types.

So, I created 4 networks:

neutron net-list
+--------------------------------------+---------+------------------------------------------------------+
| id                                   | name    | subnets                                              |
+--------------------------------------+---------+------------------------------------------------------+
| 1314f7bb-9b52-4db8-a677-a751e52aad0e | gre-1   | c0774200-7aff-44bd-b122-4264368947da 20.1.100.0/24   |
| 4e7d06f0-3547-446d-98ca-3adac416e370 | flat-1  | 83df18e1-ab2e-4983-8892-66d7699c4e9a 192.168.13.0/24 |
| c7e26ebc-078b-4375-b313-795a89a9d8bd | vlan-1  | 22789dfc-e41e-412c-a325-10a210f176c5 30.1.100.0/24   |
| fcd5c1a8-34ab-4e0c-9e4d-d99d168aa300 | vxlan-3 | 534558b0-c0a4-4c7e-add5-1f0abcb91cc3 40.1.100.0/24   |
+--------------------------------------+---------+------------------------------------------------------+
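For reference, the four networks above were created with provider attributes along these lines (the physical network name `physnet1` and the segmentation IDs are just examples here; they have to match what is configured in ml2_conf.ini):

```shell
# create one network of each type; provider values are illustrative
neutron net-create gre-1   --provider:network_type gre   --provider:segmentation_id 100
neutron net-create flat-1  --provider:network_type flat  --provider:physical_network physnet1
neutron net-create vlan-1  --provider:network_type vlan  --provider:physical_network physnet1 --provider:segmentation_id 100
neutron net-create vxlan-3 --provider:network_type vxlan --provider:segmentation_id 103

# attach a subnet to each so DHCP has something to hand out
neutron subnet-create gre-1   20.1.100.0/24
neutron subnet-create flat-1  192.168.13.0/24
neutron subnet-create vlan-1  30.1.100.0/24
neutron subnet-create vxlan-3 40.1.100.0/24
```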

Because each of my machines has only one NIC available for the instance data network, I started two DHCP agents:

neutron agent-list
+--------------------------------------+--------------------+-------------+-------+----------------+
| id                                   | agent_type         | host        | alive | admin_state_up |
+--------------------------------------+--------------------+-------------+-------+----------------+
| 05e23822-0966-4c7c-9b16-687484385383 | Open vSwitch agent | b-compute05 | :-)   | True           |
| 1267a2c6-f7cb-49d9-b579-18e986139878 | Open vSwitch agent | b-compute06 | :-)   | True           |
| 55f457bf-9ffe-417b-ad50-5878c8a71aab | DHCP agent         | b-compute05 | :-)   | True           |
| 928495d3-fac0-4fbf-b958-36c3627d9b18 | Open vSwitch agent | b-compute01 | :-)   | True           |
| 934c721b-8c7d-4605-8e03-400676665afc | Open vSwitch agent | b-network01 | :-)   | True           |
| bd491c90-3597-45ea-b4a0-f37610f2ed9b | DHCP agent         | b-network01 | :-)   | True           |
| e07c8133-a3f6-4864-adb2-318f2233fe63 | Linux bridge agent | b-compute02 | xxx   | True           |
| e1070c1e-fcb6-43fc-b2a0-a81e688b814a | Open vSwitch agent | b-compute02 | :-)   | True           |
+--------------------------------------+--------------------+-------------+-------+----------------+

The DHCP agent on b-compute05 serves networks flat-1 and vlan-1. The DHCP agent on b-network01 serves networks gre-1 and vxlan-3.

The Open vSwitch agents on b-compute05 and b-compute06 are configured for flat and vlan networks. The Open vSwitch agents on b-compute01 and b-compute02 are configured for vxlan and gre networks.
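On the ML2 side, all four type drivers are loaded. A sketch of the relevant ml2_conf.ini sections (the physnet name and ID ranges are examples, not my exact values):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre,vxlan
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vlan]
network_vlan_ranges = physnet1:30:300

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
vni_ranges = 1:1000
```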

Then I started creating new instances.

Here are the issues:

  1. Networks are not automatically scheduled to the right DHCP agent. The scheduler just picks one of the active DHCP agents at random, ignoring whether that agent can actually serve that network type. No error shows up in /var/log/neutron/dhcp-agent.log; everything looks fine, except that active instances never get IP addresses from DHCP. I have to assign each network to the right DHCP agent by hand.
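For now I pin networks to agents manually, e.g. to move gre-1 from the agent on b-compute05 to the one on b-network01 (agent IDs taken from the agent-list output above):

```shell
# detach gre-1 from the DHCP agent that cannot serve it
neutron dhcp-agent-network-remove 55f457bf-9ffe-417b-ad50-5878c8a71aab gre-1

# attach it to the DHCP agent on b-network01, then verify
neutron dhcp-agent-network-add bd491c90-3597-45ea-b4a0-f37610f2ed9b gre-1
neutron dhcp-agent-list-hosting-net gre-1
```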

  2. nova-scheduler has a similar issue. It schedules instances without any awareness of which network types a compute node supports, so it can place an instance on a compute node that does not support the instance's network type. Those instances end up in error status, with this error in /var/log/nova/compute.log:

    2014-01-10 14:59:48.454 9085 ERROR nova.compute.manager [req-f3863a12-30e9-420d-a44a-0dd9c0bd1412 c4633e89685d41c4a2d20a2234b5025e 45c69667e2a64c889719ef8d8e0dd098] [instance: d477a7c1-590b-485a-ac1a-055a6fdaca3a] Instance failed to spawn
    2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: d477a7c1-590b-485a-ac1a-055a6fdaca3a] Traceback (most recent call last):
    2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: d477a7c1-590b-485a-ac1a-055a6fdaca3a]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1413, in _spawn
    2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: d477a7c1-590b-485a-ac1a-055a6fdaca3a]     block_device_info)
    2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: d477a7c1-590b-485a-ac1a-055a6fdaca3a]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2067, in spawn
    2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: d477a7c1-590b-485a-ac1a-055a6fdaca3a]     write_to_disk=True)
    2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager [instance: d477a7c1-590b-485a-ac1a-055a6fdaca3a]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3040, in to_xml
    2014-01-10 14:59:48.454 9085 TRACE nova.compute.manager ...
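For issue 2, the only workaround I can think of (not tried yet) is to group compute nodes by supported network type using host aggregates together with the AggregateInstanceExtraSpecsFilter, and steer instances via flavor extra specs. The aggregate name, metadata key, and flavor below are hypothetical:

```shell
# assumes AggregateInstanceExtraSpecsFilter is in scheduler_default_filters
nova aggregate-create tunnel-hosts
nova aggregate-add-host tunnel-hosts b-compute01
nova aggregate-add-host tunnel-hosts b-compute02
nova aggregate-set-metadata tunnel-hosts nettype=tunnel

# hypothetical flavor that only lands on the tunnel-capable hosts
nova flavor-create m1.small.tunnel auto 2048 20 1
nova flavor-key m1.small.tunnel set nettype=tunnel
```

But this still means picking the matching flavor per network type by hand, so it is not a real fix; the scheduler itself stays unaware of network types.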

Closed for the following reason "question is not relevant or outdated" by larsks
close date 2014-04-03 13:53:53.545472