kaaaas54's profile - activity

2015-03-18 23:07:59 -0600 received badge  Notable Question (source)
2015-03-18 23:07:59 -0600 received badge  Famous Question (source)
2014-12-02 10:33:55 -0600 received badge  Famous Question (source)
2014-10-26 12:54:19 -0600 received badge  Notable Question (source)
2014-10-09 03:31:24 -0600 received badge  Popular Question (source)
2014-10-06 11:23:47 -0600 received badge  Famous Question (source)
2014-10-05 05:26:23 -0600 received badge  Notable Question (source)
2014-09-30 02:14:00 -0600 received badge  Popular Question (source)
2014-09-24 10:40:58 -0600 received badge  Editor (source)
2014-09-24 10:39:27 -0600 asked a question neutron setup with virtual interfaces

Hello,

I'm trying to set up an OpenStack cluster. At the moment I have three nodes, and each node has only one physical network interface. Following this guide ( http://docs.openstack.org/trunk/insta... + Neutron setup), I need two NICs for the compute node and three for the network node, so for now I am using virtual interfaces (IP aliases). The configurations are below, followed by a quick sanity check of the aliases.

  1. Controller

    auto eth0
    iface eth0 inet static
                address 11.0.0.11
                netmask 255.255.255.0
                gateway 11.0.0.1
                dns-nameservers 11.0.0.1
                dns-search lan
    
  2. Network

    auto eth0
    iface eth0 inet static
                address 11.0.0.21
                netmask 255.255.255.0
                gateway 11.0.0.1
                dns-nameservers 11.0.0.1
                dns-search lan
    
    auto eth0:1
    iface eth0:1 inet static
                address 11.0.1.21
                netmask 255.255.255.0
    
    auto eth0:2
    iface eth0:2 inet manual
                up ip link set dev $IFACE up
                down ip link set dev $IFACE down
    
  3. Compute

    auto eth0
    iface eth0 inet static
                address 11.0.0.31
                netmask 255.255.255.0
                gateway 11.0.0.1
                dns-nameservers 11.0.0.1
                dns-search lan
    
    auto eth0:1
    iface eth0:1 inet static
                address 11.0.1.31
                netmask 255.255.255.0
    

All nodes are connected to a router, which is directly connected to the internet. The router has three IP addresses (public IP xxx.xxx.xxx.xxx, plus 11.0.0.1 and 192.168.0.1 on the LAN).

As in the guide, I created (via neutron) an external network (ext-net with a subnet: CIDR 192.168.0.0/24, IP allocation pool 192.168.0.101 - 192.168.0.200, gateway 192.168.0.1)

and a demo network (demo-net with a subnet: CIDR 192.168.2.0/24, IP allocation pool 192.168.2.2 - 192.168.2.254, gateway 192.168.2.1). The two are connected via a router (interface router_gateway, IP 192.168.0.101, status DOWN, and interface router_interface, IP 192.168.2.1, status ACTIVE).
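
For reference, I created all of this roughly as in the guide (the names ext-subnet / demo-subnet / demo-router are the ones the guide uses):

    neutron net-create ext-net --shared --router:external=True
    neutron subnet-create ext-net --name ext-subnet \
      --allocation-pool start=192.168.0.101,end=192.168.0.200 \
      --disable-dhcp --gateway 192.168.0.1 192.168.0.0/24

    neutron net-create demo-net
    neutron subnet-create demo-net --name demo-subnet \
      --allocation-pool start=192.168.2.2,end=192.168.2.254 \
      --gateway 192.168.2.1 192.168.2.0/24

    neutron router-create demo-router
    neutron router-interface-add demo-router demo-subnet
    neutron router-gateway-set demo-router ext-net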

My Questions/Problems:

1. As the guide states, it should be possible to ping the router gateway (192.168.0.101), but I am not able to do so. Where is my mistake?

2. If I start an instance on demo-net, I can see on the Horizon dashboard that it gets an IP (e.g. 192.168.2.16), but the instance itself (running CirrOS) is NOT able to complete a DHCP discover and actually pick up that IP.
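
For both of these, my next step on the network node would be to look into the router and DHCP namespaces, roughly like this (<ROUTER_ID>, <NET_ID> and <TAP_DEVICE> are placeholders for my demo router UUID, the demo-net UUID and the DHCP tap device from the ovs-vsctl output below):

    ip netns                                   # list the qrouter-/qdhcp- namespaces
    # router namespace: is 192.168.0.101 configured, and can it reach the physical router?
    ip netns exec qrouter-<ROUTER_ID> ip addr
    ip netns exec qrouter-<ROUTER_ID> ping -c 3 192.168.0.1
    # dhcp namespace: do the instance's DHCP requests arrive here at all?
    ip netns exec qdhcp-<NET_ID> ip addr
    ip netns exec qdhcp-<NET_ID> tcpdump -n -i <TAP_DEVICE> port 67 or port 68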

Here is also the output of ovs-vsctl show on the network node:

a563624a-5fc4-49f7-969f-c574c3369393
    Bridge br-tun
        Port "gre-0b00011f"
            Interface "gre-0b00011f"
                type: gre
                options: {in_key=flow, local_ip="11.0.1.21", out_key=flow, remote_ip="11.0.1.31"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapc295f082-be"
            tag: 1
            Interface "tapc295f082-be"
                type: internal
        Port "qr-f6ff0abb-f9"
            tag: 1
            Interface "qr-f6ff0abb-f9"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0:2"
            Interface "eth0:2"
        Port "qg-cc69748e-7d"
            Interface ...
(more)
2014-09-20 11:18:59 -0600 received badge  Popular Question (source)
2014-09-19 08:28:01 -0600 asked a question instances do not receive incoming traffic (no internet)

Hello there,

I have been setting up OpenStack on a two-node cluster for days now and have run into several problems. My current issue is that my launched instances have no internet connection. This is my setup:

2 Nodes

  1. Controller Node (one NIC, static IP 192.168.0.100; keystone, swift, glance, ...)
  2. Compute Node (one NIC, static IP 192.168.0.101; nova-compute, nova-network)

I am using nova-network (not neutron). I am able to launch an instance, which gets an IP via DHCP. I can connect to the instance via SSH and I can ping the compute node from the instance, but I am not able to reach any server on the internet.
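
As far as I understand, with FlatDHCP and multi_host the compute node itself should forward and SNAT the instances' outbound traffic, so I would check that roughly like this (just a sketch, assuming the fixed range is 203.0.113.0/24 as in the ifconfig output further down):

    # on the compute node
    sysctl net.ipv4.ip_forward              # should be 1
    iptables -t nat -S | grep 203.0.113     # SNAT rule nova-network adds for the fixed range
    tcpdump -n -i eth0 icmp                 # does instance traffic leave eth0 at all?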

This is the nova.conf file on the compute node:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

auth_strategy = keystone

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = ***

my_ip = 192.168.0.101
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.0.101
novncproxy_base_url = http://controller:6080/vnc_auto.html

glance_host = controller

network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = eth0
public_interface = eth0

[database]
connection=mysql://nova:***@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = ***
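
For completeness, the fixed network itself was created with nova network-create, roughly along these lines (a sketch; the exact name and range on my system may differ):

    nova network-create demo-net --bridge br100 --multi-host T \
      --fixed-range-v4 203.0.113.0/24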

ifconfig on Compute Node:

br100     Link encap:Ethernet  HWaddr 00:50:8d:b0:ee:23  
          inet addr:203.0.113.1  Bcast:203.0.113.255  Mask:255.255.255.0
          inet6 addr: fe80::8ba:ecff:fe5b:3cbe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19528 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41778 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2316658 (2.3 MB)  TX bytes:9146269 (9.1 MB)

eth0      Link encap:Ethernet  HWaddr 00:50:8d:b0:ee:23  
          inet6 addr: fe80::250:8dff:feb0:ee23/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21239 errors:0 dropped:44 overruns:0 frame:0
          TX packets:44098 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2870997 (2.8 MB)  TX bytes:9702860 (9.7 MB)
          Interrupt:18 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:528 (528.0 B)  TX bytes:528 (528.0 B)

virbr0    Link encap:Ethernet  HWaddr 42:b8:6e:3a ...
(more)
2014-09-13 10:07:44 -0600 received badge  Student (source)