
creating isolated networks in grizzly with nova-network

Hi everyone,

I have the following grizzly setup:

  1. controller running - nova-api, nova-cert, nova-conductor, nova-consoleauth, nova-scheduler
  2. compute1 running - nova-compute, nova-network (with flatdhcp manager)
  3. compute2 running - nova-compute, nova-network (with flatdhcp manager)

I created a network using nova-manage like so:

nova-manage network create private --fixed_range_v4=10.1.1.129/25 --num_networks=1 --bridge=br100 --bridge_interface=eth0 --network_size=128 --multi_host=T
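As a quick sanity check on those numbers (plain Python CIDR arithmetic, no Nova involved): a /25 prefix holds 128 addresses, matching --network_size=128, and 10.1.1.129/25 has host bits set, so the network actually being carved out is 10.1.1.128/25.

```python
# Sanity check on --fixed_range_v4 / --network_size using the stdlib
# ipaddress module; this assumes nothing about Nova internals.
import ipaddress

# 10.1.1.129/25 has host bits set; strict=False masks them off.
net = ipaddress.ip_network("10.1.1.129/25", strict=False)
print(net)                # 10.1.1.128/25
print(net.num_addresses)  # 128, matching --network_size=128
```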

I also created host aggregates to pin VMs to specific compute nodes:

nova aggregate-create compute-1

nova aggregate-add-host 1 compute-1

nova aggregate-set-metadata 1 compute1=true

nova flavor-create --is-public=true m1.compute1 6 512 0 1

nova flavor-key 6 set compute1=true

which means that an instance booted with flavor 6 is scheduled onto compute-1, because the flavor shares the key compute1=true with host-aggregate compute-1.
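If I understand it right, the scheduler's aggregate extra-specs filtering is doing roughly this comparison (a simplified sketch, not Nova's actual code; host_passes is my own hypothetical name):

```python
# Simplified sketch of aggregate/flavor key matching (hypothetical helper,
# not Nova's real implementation): a host passes only if every flavor
# extra-spec key/value pair appears in the metadata of one of its aggregates.
def host_passes(aggregate_metadata, flavor_extra_specs):
    return all(aggregate_metadata.get(key) == value
               for key, value in flavor_extra_specs.items())

print(host_passes({"compute1": "true"}, {"compute1": "true"}))  # True
print(host_passes({"compute2": "true"}, {"compute1": "true"}))  # False
```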

And a similar setup for compute-2:

nova aggregate-create compute-2

nova aggregate-add-host 2 compute-2

nova aggregate-set-metadata 2 compute2=true

nova flavor-create --is-public=true m1.compute2 7 512 0 1

nova flavor-key 7 set compute2=true

When I boot up an instance like so:

nova boot --flavor 6 --image cirros_img_1 cirros_inst_1

it means: run the cirros image on compute-1, which OpenStack does successfully. Similarly, another instance can be spawned on compute-2 by doing:

nova boot --flavor 7 --image cirros_img_1 cirros_inst_2

nova list shows:

| ea6e421e-f0b3-4ffa-8c3b-ae70e2d23fa0 | cirros_inst_1 | ACTIVE | private=10.1.1.130 |

| 5ec9f21f-74db-4af3-830e-68e4de34001b | cirros_inst_2 | ACTIVE | private=10.1.1.133 |

I had thought that since nova-network runs independently on each compute node, these instances would get IPs from locally running dnsmasq processes and be isolated. However, even though cirros_inst_1 and cirros_inst_2 are running on separate computes, they can ping each other, which shouldn't be the case.
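My current guess at why the ping works: with a single FlatDHCP network, both fixed IPs land in the same subnet on the same br100 bridge, so it doesn't matter which host ran dnsmasq; the instances share one broadcast domain. A trivial check of that (plain CIDR math, no OpenStack involved):

```python
# Both fixed IPs fall inside the single flat network's subnet, so the
# instances share one L2 segment via br100 regardless of which compute
# host answered their DHCP requests. (Plain stdlib check, not Nova code.)
import ipaddress

subnet = ipaddress.ip_network("10.1.1.128/25")
inst_1 = ipaddress.ip_address("10.1.1.130")
inst_2 = ipaddress.ip_address("10.1.1.133")
print(inst_1 in subnet and inst_2 in subnet)  # True: same broadcast domain
```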

Ideally, I was aiming for the following:

| ea6e421e-f0b3-4ffa-8c3b-ae70e2d23fa0 | cirros_inst_1 | ACTIVE | private=10.1.1.130 |

| 5ec9f21f-74db-4af3-830e-68e4de34001b | cirros_inst_2 | ACTIVE | private=10.1.1.130 |

Both instances running on separate computes, with the same IP but in isolated networks. I understand that with DHCP I can't control which IPs they get; I will move to FlatManager once I at least get the isolation working. So even if the two VMs just end up in separate networks, I'd be good to go.
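For reference, my understanding is that per-network isolation under nova-network usually means VlanManager rather than FlatDHCPManager; something like the following nova.conf fragment on the network nodes (the interface name and VLAN range are illustrative assumptions, untested):

```ini
# Illustrative nova.conf fragment (assumption: VlanManager gives per-network
# isolation; vlan_interface and vlan_start here are placeholders).
network_manager=nova.network.manager.VlanManager
vlan_interface=eth0
vlan_start=100
```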

Am I missing something in the config, or is there a gap in my understanding of how this works?