Icehouse instances don't get IP [closed]

asked 2014-05-14 00:50:41 -0600

tahder

updated 2014-05-14 00:51:14 -0600

I am having problems with my instances, which can't get an IP from DHCP.

My configuration uses Open vSwitch with VLAN tenant networks across 3 nodes:

Node1 -> Controller node; the neutron server resides here
Node2 -> Network node (DHCP agent, L3 agent, and metadata agent run here)
Node3 -> Purely a compute node

I tried different configurations, but none seem to work for me; VLAN is the big issue in my setup. Last time, when I ran nova-compute on Node2, only the VMs on Node2 got IPs.

The exception is when I create a flat network; then the instances on Node3 can get an IP:

neutron net-create --tenant-id xxxx privatenet1 --shared --provider:network_type flat --provider:physical_network physnet1

neutron subnet-create --tenant-id xxxx privatenet1 192.168.100.0/24

So far I don't get any errors in the logs, so I suspect something in the configuration is blocking traffic. I even tried separating the management NICs and the instance NICs onto different physical switches, but to no avail.
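For reference, this is the kind of consistency check I have been doing on the VLAN settings; a sketch, assuming the ML2 plugin with the OVS agent and typical Icehouse file paths (physnet1 matches my commands above, while br-eth1 and the underlying NIC are placeholders for my setup):

```shell
# Run on every node with an OVS agent (Node2 and Node3); the values
# must agree across nodes for VLAN tenant networks to work.
grep -E 'tenant_network_types|network_vlan_ranges|bridge_mappings' \
    /etc/neutron/plugins/ml2/ml2_conf.ini
# e.g. tenant_network_types = vlan
#      network_vlan_ranges = physnet1:1000:2999
#      bridge_mappings = physnet1:br-eth1

# The mapped bridge must exist and contain the NIC carrying instance traffic.
ovs-vsctl show
```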

I hope someone can give hints on troubleshooting such problems.


Closed for the following reason: the question is answered, right answer was accepted by rbowen
close date 2016-06-21 14:07:19.671739

Comments

Please run:
$ ip netns list
Select the qdhcp-<network-id> namespace whose ID matches privatenet1.
$ ip netns exec qdhcp-<network-id> ifconfig
This should show an interface like ns-zzzzzzz or tap-zzzzzzz with an IP on the private subnet.
$ ip netns exec qdhcp-<network-id> tcpdump -ln -i tap-zzzzzzzz
Watch for DHCP requests and replies captured by tcpdump.
Actually, the same information is provided by /var/log/neutron/dnsmasq.log on the server running the dnsmasq instance that serves your private subnet (Node2, in your config).
I just never split the Neutron server and the agents.
Anyway, dnsmasq has to run on either Node1 or Node2.
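If the namespace and tcpdump look fine on Node2 but VMs on Node3 still get nothing, it may also be worth checking the local VLAN tag OVS assigned to the ports on each node; a sketch, run on the node hosting the ports:

```shell
# List the local VLAN tag of every OVS port.
# Healthy ports get a small local tag (1, 2, ...); a tag of 4095 marks
# a "dead" port, usually meaning the agent failed to bind it to a segment.
ovs-vsctl --columns=name,tag list Port
```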

dbaxps ( 2014-05-14 02:03:26 -0600 )

That's a sample:

    [root@dfw02 ~(keystone_andrew)]$ ip netns exec qdhcp-971fd74b-66e4-4016-b997-433e124a22ad netstat -antp
Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 10.0.0.12:53            0.0.0.0:*               LISTEN      4479/dnsmasq        
    tcp6       0      0 fe80::f816:3eff:fe75:53 :::*                    LISTEN      4479/dnsmasq        
    [root@dfw02 ~(keystone_andrew)]$ ip netns exec qdhcp-971fd74b-66e4-4016-b997-433e124a22ad ifconfig
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 0  (Local Loopback)
            RX packets 1  bytes 576 (576.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1  bytes 576 (576.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    ns-a2f724d1-74: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.0.0.12  netmask 255.255.255.0  broadcast 10 ...
dbaxps ( 2014-05-14 02:22:12 -0600 )

@dbaxps yes, I got this on the node where the neutron DHCP agent is located, Node2.

ip netns exec qdhcp-78975c08-1440-4109-88b3-ef212a872b09 tcpdump -ln -i tape5241ac2-c7 (more details in link text)

tahder ( 2014-05-14 18:49:55 -0600 )

It seems it's working on Node2, but DHCP never reaches the other nodes, like Node3, which acts as the compute node.

Maybe I will also try not separating the Neutron server from the rest of the neutron services, so everything runs on Node2.
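Before merging the services, I can also check whether tagged DHCP frames actually make it onto the wire between the nodes; a sketch, assuming eth1 is the NIC carrying instance traffic (a placeholder for my actual interface):

```shell
# On Node3, watch DHCP traffic leaving the instance NIC with link-layer
# headers (-e) so any 802.1Q VLAN tag is visible.
tcpdump -e -n -i eth1 'udp port 67 or udp port 68'
# Repeat on Node2's instance NIC. If requests leave Node3 tagged but never
# arrive on Node2, the physical switch is probably not trunking the VLANs.
```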

tahder ( 2014-05-14 18:53:35 -0600 )

I got this:

    11:42:08.342311 IP 192.168.101.3.domain > 192.168.101.2.57674: 2096 Refused 0/0/0 (44)
    11:42:08.342330 IP 192.168.101.3.domain > 192.168.101.2.57674: 10365 Refused 0/0/0 (44)
dbaxps ( 2014-05-14 21:39:40 -0600 )