Using VlanManager, no network access to instances running on another nova-compute host.

asked 2011-08-05 18:15:59 -0500

p-spencer-davis

I have a setup with two hosts, one running as a combined management/compute node and the other as a pure compute node. I am able to run instances on both hosts and have network access to any instances that are running on the combined management/compute node. The instances running on the pure compute node, however, are inaccessible from the network. Both hosts have two active NICs: eth0 is the public interface on a 10.169.30.128/25 network and eth1 is on a 172.16.0.0/16 private network. I have defined a virtual network for the project to run in as follows:

nova-manage network create --label=public --fixed_range_v4=192.168.1.0/24 --num_networks=1 --network_size=256 --vlan=1 --bridge=vlan1 --dns1=10.0.4.7
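For context, VlanManager builds the project network in software on each compute host: an 802.1Q interface on the private NIC plus a bridge the guests attach to. A rough sketch of the host-side result, assuming the vconfig/brctl tooling nova drove at the time (interface names match the ifconfig output later in this thread):

    # Roughly what nova-network sets up per host for --vlan=1 (a sketch,
    # not nova's exact code path; names as seen in ifconfig further down)
    vconfig set_name_type VLAN_PLUS_VID_NO_PAD   # vlan devices named vlan<ID>
    vconfig add eth1 1                           # 802.1Q interface vlan1 on eth1
    ip link set vlan1 up
    brctl addbr br1                              # bridge the guest taps attach to
    brctl addif br1 vlan1                        # tagged traffic feeds the bridge
    ip link set br1 up

The tagged frames still have to travel between hosts, so the switch ports that eth1 plugs into need to pass VLAN 1.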

Here is the configuration that nova-manage reports:

--storage_availability_zone=nova --ca_file=cacert.pem --ec2_dmz_host=$my_ip --fixed_range=172.16.0.0/16 --compute_topic=compute --dmz_mask=255.255.255.0 --fixed_range_v6=fd00::/48 --glance_api_servers=10.192.30.137:9292 --rabbit_password=guest --user_cert_subject=/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=%s-%s-%s --s3_dmz=10.192.30.137 --quota_ram=51200 --find_host_timeout=30 --aws_access_key_id=admin --vncserver_host=0.0.0.0 --network_size=1024 --enable_new_services --my_ip=10.192.30.137 --live_migration_retry_count=30 --lockout_attempts=5 --credential_cert_file=cert.pem --quota_max_injected_files=5 --zone_capabilities=hypervisor=xenserver;kvm,os=linux;windows --logdir=/var/log/nova --sqlite_db=nova.sqlite --nouse_forwarded_for --cpuinfo_xml_template=/usr/lib/pymodules/python2.7/nova/virt/cpuinfo.xml.template --num_networks=1 --boot_script_template=/usr/lib/pymodules/python2.7/nova/cloudpipe/bootscript.template --live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER --notification_driver=nova.notifier.no_op_notifier --osapi_max_limit=1000 --rabbit_port=5672 --s3_access_key=notchecked --rabbit_max_retries=12 --noresume_guests_state_on_host_boot --ajax_console_proxy_url=http://127.0.0.1:8000 --injected_network_template=/usr/lib/pymodules/python2.7/nova/virt/interfaces.template --network_host=10.192.30.137 --snapshot_name_template=snapshot-%08x --vncproxy_url=http://10.192.30.137:6080 --s3_secret_key=notchecked --ajax_console_proxy_topic=ajax_proxy --minimum_root_size=10737418240 --quota_cores=20 --nouse_project_ca --rabbit_userid=guest --volume_topic=volume --volume_name_template=volume-%08x --lock_path=/var/lock/nova --live_migration_uri=qemu+tcp://%s/system --flat_network_dns=8.8.4.4 --live_migration_bandwidth=0 --connection_type=libvirt --noupdate_dhcp_on_disassociate --default_project=openstack --s3_port=3333 --logfile_mode=420 --logging_context_format_string=%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(message)s --instance_name_template=instance-%08x --ec2_host=$my_ip --credential_key_file=pk.pem --vpn_cert_subject=/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=project-vpn-%s-%s --logging_debug_format_suffix=from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d --stub_network=False --console_manager=nova.console.manager.ConsoleProxyManager --rpc_backend=nova.rpc.amqp --default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,eventlet.wsgi.server=WARN --osapi_scheme=http --credential_rc_file=%src --sql_connection=mysql://nova:nova@10.192.30.137/nova --console_topic=console --instances_path=$state_path/instances --flat_injected --use_local_volumes --host=csvirt-1 --fixed_ip_disassociate_timeout=600 --console_host=csvirt-1 --quota_instances=10 --quota_max_injected_file_content_bytes=10240 --libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtBridgeDriver --floating_range=4.4.4.0/24 --nomulti_host --lockout_window=15 --db_backend=sqlalchemy --credentials_template=/usr/lib/pymodules/python2.7/nova/auth/novarc.template --dmz_net=10.0.0.0 --sql_retry_interval=10 --vpn_start=1000 --volume_driver=nova.volume.driver.ISCSIDriver --crl_file=crl.pem --rpc_conn_pool_size=30 --s3_host=10.192.30.137 --qemu_img=qemu-img --max_nbd_devices=16 --vlan_interface ... (more)


6 answers


answered 2011-08-05 20:10:04 -0500

p-spencer-davis

I've replaced the switch with a cheap consumer device and it works. Now I just need to reprogram the other one. Thanks for your help!


answered 2011-08-05 20:10:33 -0500

p-spencer-davis

Thanks Vish Ishaya, that solved my question.


answered 2011-08-05 18:31:31 -0500

vishvananda

Did you set up vlan 1 on the switch that eth1 is plugged into? If you run ifconfig, do you see traffic in both directions on vlan1?
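For example, one quick way to check both directions (a sketch; vlan1 and eth1 as named in this setup):

    # RX bytes stuck at 0 on the vlan device means nothing is arriving:
    ifconfig vlan1 | grep -E 'RX bytes|TX bytes'
    # Or watch for tagged frames on the physical interface directly:
    tcpdump -n -e -i eth1 vlan 1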

Vish

On Aug 5, 2011, at 11:16 AM, P Spencer Davis wrote:

New question #167077 on OpenStack Compute (nova): https://answers.launchpad.net/nova/+q...


answered 2011-08-05 19:05:45 -0500

p-spencer-davis

Vish, I did not have to set up vlans on the switch when I was testing this previously in my office using consumer-grade switches. The problem only showed up when the servers were racked in their final location. (I was using the trunk build from about a month ago when I was doing my initial tests.) I'll see if the hardware supports vlan creation, but I was under the impression that the vlans were created in software on the compute nodes using brctl and iptables...
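The Linux-side pieces are easy to confirm on each node (a sketch, using the interface names from the output below):

    brctl show                  # br1 should list vlan1 as an attached interface
    cat /proc/net/vlan/config   # kernel 802.1Q table, e.g. "vlan1 | 1 | eth1"

That only shows the hosts are tagging the traffic; the switch still has to carry the tagged frames between them.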

Here is the output of ifconfig from the management/compute node:

br1       Link encap:Ethernet  HWaddr 14:fe:b5:db:29:7a
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::5c46:aff:fe12:3298/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:338 errors:0 dropped:0 overruns:0 frame:0
          TX packets:378 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:45232 (45.2 KB)  TX bytes:55288 (55.2 KB)

eth0      Link encap:Ethernet  HWaddr 14:fe:b5:db:29:78
          inet addr:10.192.30.137  Bcast:10.192.30.255  Mask:255.255.255.128
          inet6 addr: fe80::16fe:b5ff:fedb:2978/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:113195 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1035491 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10605942 (10.6 MB)  TX bytes:1556998125 (1.5 GB)
          Interrupt:36 Memory:d6000000-d6012800

eth1      Link encap:Ethernet  HWaddr 14:fe:b5:db:29:7a
          inet addr:172.16.0.100  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::16fe:b5ff:fedb:297a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:410 errors:0 dropped:0 overruns:0 frame:0
          TX packets:137 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:54684 (54.6 KB)  TX bytes:11664 (11.6 KB)
          Interrupt:48 Memory:d8000000-d8012800

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:29319 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29319 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:21201705 (21.2 MB)  TX bytes:21201705 (21.2 MB)

virbr0    Link encap:Ethernet  HWaddr f6:6d:6c:82:c5:0b
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vlan1     Link encap:Ethernet  HWaddr ... (more)


answered 2011-08-05 19:26:09 -0500

p-spencer-davis

Previously I was using the method for hand-editing the networks table laid out in the admin documentation for VlanManager, but with the recent changes to how nova-manage network create operates, I didn't think that was still necessary.

http://docs.openstack.org/cactus/openstack-compute/admin/content/configuring-vlan-networking.html

Update the DB to match your network settings. The following script will generate SQL based on the predetermined settings for this example. You will need to modify this database update to fit your environment.

#!/bin/bash
# Generate vlan.sql to point network id $2 at vlan $1.

if [ -z "$1" ]; then
    echo "You need to specify the vlan to modify"
    exit 1
fi

if [ -z "$2" ]; then
    echo "You need to specify a network id number (check the DB for the network you want to update)"
    exit 1
fi

VLAN=$1
ID=$2

cat > vlan.sql << __EOF_
update networks set vlan = '$VLAN' where id = $ID;
update networks set bridge = 'br_$VLAN' where id = $ID;
update networks set gateway = '10.1.$VLAN.7' where id = $ID;
update networks set dhcp_start = '10.1.$VLAN.8' where id = $ID;
update fixed_ips set reserved = 1 where address in ('10.1.$VLAN.1','10.1.$VLAN.2','10.1.$VLAN.3','10.1.$VLAN.4','10.1.$VLAN.5','10.1.$VLAN.6','10.1.$VLAN.7');
__EOF_

After verifying that the above SQL will work for your environment, run it against the nova database, once for every VLAN you have in the environment.
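As a usage sketch (the filename update_vlan.sql.sh is hypothetical; the MySQL credentials come from the sql_connection flag shown earlier):

    bash update_vlan.sql.sh 1 1    # vlan 1, network id 1
    # inspect vlan.sql by hand, then apply it:
    mysql -u nova -pnova -h 10.192.30.137 nova < vlan.sql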


answered 2011-08-05 19:34:00 -0500

vishvananda

On Aug 5, 2011, at 12:06 PM, P Spencer Davis wrote:

vlan1     Link encap:Ethernet  HWaddr 14:fe:b5:db:29:7a
          inet6 addr: fe80::16fe:b5ff:fedb:297a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:6744 (6.7 KB)

As I suspected, there is no incoming traffic on the vlan (RX bytes is stuck at 0 while TX is not). You have to trunk vlan 1 on your switch for all of the ports that your hosts are connected to.
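Once the ports are trunked, a quick check from the pure compute node (a sketch; before the fix RX stayed at 0):

    ip -s link show vlan1     # RX and TX packet counters should now both climb
    tcpdump -n -e -i vlan1    # broadcasts/ARP from the other host should appear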

Vish

