
Instances don't get an IP

asked 2018-07-02 03:00:24 -0500


updated 2018-07-02 05:08:05 -0500

Hi all,

I'm running into a problem with my OpenStack infrastructure. It's an offline deployment; the controller and compute nodes have no Internet access. I followed this documentation (Queens release): https://docs.openstack.org/install-guide/openstack-services.html

I followed all the instructions for installing Keystone, Glance, Nova, and Neutron. But when I try to deploy an instance, it doesn't get an IP address. There are no errors in the logs. In /var/lib/neutron/dhcp I can see the IP address reserved for the instance.

I can attach any files you need to help. To start, here is my configuration:

Controller: 2 interfaces:

2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:be brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21a:64ff:fed0:34be/64 scope link
       valid_lft forever preferred_lft forever
3: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:c0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21a:64ff:fed0:34c0/64 scope link
       valid_lft forever preferred_lft forever
4: enp4s0.748@enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:be brd ff:ff:ff:ff:ff:ff
    inet 10.26.9.19/25 brd 10.26.9.127 scope global enp4s0.748
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:64ff:fed0:34be/64 scope link
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:26:3c:f1:3d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
8: tap39c1a9de-db@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq99275f2e-5a state UP group default qlen 1000
    link/ether 56:a8:df:50:99:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: tapd75c2984-2e@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 7a:da:65:b4:1b:d0 brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: tap475e0c42-1d@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq99275f2e-5a state UP group default qlen 1000
    link/ether ca:71:3f:b1:e0:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 2
11: brqa7f0e57c-d0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.20/24 brd 192.168.10.255 scope global brqa7f0e57c-d0
       valid_lft forever preferred_lft forever
    inet6 fe80::189c:87ff:fec0:f839/64 scope link
       valid_lft forever preferred_lft forever
12: tapa9663a5e-cd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 76:96 ...

Comments

Can you share:

  • ML2 config
  • details of networks to which the instances are connected

You could trace DHCP requests at different points between instance and dnsmasq process and find out where they (or DHCP replies) disappear.

Bernd Bausch (2018-07-02 06:31:18 -0500)
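A concrete way to do that tracing, sketched with interface names taken from the `ip a` output in the question; the qdhcp namespace name depends on your network ID, so `<NETWORK_ID>` and `<PORT_ID>` are placeholders:

```shell
# On the controller: find the DHCP namespace for the network and watch
# DHCP traffic (ports 67/68) inside it, where dnsmasq listens
ip netns list
ip netns exec qdhcp-<NETWORK_ID> tcpdump -ne -i any port 67 or port 68

# On the controller bridge carrying the tenant network
tcpdump -ne -i brq99275f2e-5a port 67 or port 68

# On the compute node, on the instance's tap device
tcpdump -ne -i tap<PORT_ID> port 67 or port 68
```

Comparing these three captures shows the last point where the DHCP request (or the missing reply) is still visible.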

I'll answer in the answer section below.

Sideris (2018-07-02 08:00:24 -0500)

1 answer


answered 2018-07-02 07:59:52 -0500


updated 2018-07-05 02:14:51 -0500

This is the ML2 configuration on the controller node, ml2_conf.ini:

[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

linuxbridge_agent.ini:

[DEFAULT]
debug = True
[agent]
[linux_bridge]
physical_interface_mappings = provider:enp6s0
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.20
l2_population = true

For the compute node, linuxbridge_agent.ini:

[DEFAULT]
debug = True
[agent]
[linux_bridge]
physical_interface_mappings = provider:enp6s0
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.21
l2_population = true
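Since the tenant network is VXLAN (local_ip 192.168.10.20 on the controller, 192.168.10.21 on the compute node), DHCP replies have to cross the tunnel between the two nodes. A quick sanity check of that path, sketched from the addresses above:

```shell
# From the compute node: is the controller's VXLAN endpoint reachable at all?
ping -c 3 192.168.10.20

# VXLAN adds roughly 50 bytes of encapsulation; check that a near-full-size
# payload still fits without fragmentation (the instance MTU is 1450 here,
# so a 1450-byte IP packet = 1422-byte ICMP payload must pass)
ping -c 3 -M do -s 1422 192.168.10.20

# Inspect the forwarding entries that l2population programmed on the vxlan device
bridge fdb show | grep -i vxlan
```

If the large ping fails while the plain one works, the DHCP replies may be dropped for MTU reasons somewhere on the underlay.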

I traced the instance's DHCP requests: I ran tcpdump in the DHCP namespace, in the router namespace, and on the controller and compute nodes. I see the DHCP request reach the DHCP namespace, but no reply comes back. I can send you the Wireshark captures.

Thanks

UPDATE 05 July 2018

I think the problem was a desynchronization between the controller and compute nodes. I started the neutron-linuxbridge-agent on the compute node and then ran this command on the controller node:

/etc/init.d/neutron-linuxbridge-cleanup restart ; /etc/init.d/neutron-linuxbridge-agent restart

And magic, the problem was solved!


Comments

is dhcp enabled in your subnet?

control:~ #  openstack subnet show <SUBNET_ID> | grep dhcp
| enable_dhcp       | True                                 |
eblock (2018-07-02 09:37:21 -0500)
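Two related checks, using placeholder IDs in the same style as above: confirm that a DHCP agent actually hosts the network and that its port exists:

```shell
# Which agents host this network? A DHCP agent should appear and be alive
openstack network agent list --network <NETWORK_ID>

# The dnsmasq port for the network (owner network:dhcp) should be ACTIVE
openstack port list --network <NETWORK_ID> --device-owner network:dhcp
```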

Hi,

Yes, DHCP is enabled on the subnet.

Sideris (2018-07-02 09:43:41 -0500)

When you set the address manually in the instance, do you have connectivity? And connectivity from the DHCP namespace to the instance?

If yes, something is wrong with DHCP. Your tracing results indicate that as well.

Bernd Bausch (2018-07-02 15:56:34 -0500)

Another idea: I once had problems with a cloud deployed on VirtualBox VMs. Traffic to the instances running on a VirtualBox compute node was swallowed somewhere (my memory is hazy).

Bernd Bausch (2018-07-02 15:59:38 -0500)

Other things to check: are all agents up, in particular the DHCP agent(s), and what does your DHCP configuration look like?

Bernd Bausch (2018-07-02 18:10:39 -0500)

Hi,

First, thanks for all the suggestions. All Nova agents are up and enabled. My OpenStack infrastructure runs on physical hardware and uses KVM.

For my tests I use the CirrOS 0.4 image, but when I connect to it via VNC, the keyboard layout is neither QWERTY nor AZERTY, so I can't edit the configuration.

Sideris (2018-07-03 02:14:37 -0500)

Do you have a tip to change the CirrOS keyboard layout? After that I will try to configure the network on my instance and test communication over the network.

Regards, Thx

Sideris (2018-07-03 02:15:31 -0500)

You only mention Nova agents; what about Neutron? openstack network agent list

eblock (2018-07-03 02:27:50 -0500)

The Neutron agents are up and enabled too.

Sideris (2018-07-03 02:49:18 -0500)

OK, I found a way to configure the network interface on the instance: I used virsh and configured it manually. My private network is 172.16.1.0/24, and I have a router with two interfaces, 192.168.10.102 and 172.16.1.1. I configured my CirrOS host with IP 172.16.1.9, but it can't ping the router. Any ideas?
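For reference, a manual configuration from the CirrOS console matching those addresses would look something like this (assuming the instance NIC is eth0):

```shell
# Inside the CirrOS instance: assign the address Neutron reserved for the port
sudo ip addr add 172.16.1.9/24 dev eth0
sudo ip link set eth0 up
sudo ip route add default via 172.16.1.1

# Then test the router's internal interface
ping -c 3 172.16.1.1
```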

Sideris (2018-07-03 03:42:57 -0500)

You seem to have no problems using tcpdump :) Find out where the packets disappear between the instance and the router.

Bernd Bausch (2018-07-03 04:37:03 -0500)

I'm not a tcpdump expert. I captured on the controller bridge, the compute bridge, the router namespace interface, and the DHCP namespace interface. I see the DHCP request reach the DHCP namespace, and I see the offer sent by the DHCP namespace.

But I never see that offer arrive back at the instance.

Sideris (2018-07-03 06:02:01 -0500)

Is the configuration of the bridge on my compute node correct?

I followed the same documentation last month and it worked; today it doesn't. Thanks.

Sideris (2018-07-03 06:02:56 -0500)
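To check the bridge on the compute node, you can list which devices are enslaved; for a VXLAN tenant network the bridge should contain both the instance's tap device and a vxlan interface (the bridge naming pattern follows what is visible on the controller above):

```shell
# On the compute node: each Neutron brq bridge should enslave the instance's
# tap device and, for a VXLAN network, a vxlan-<VNI> interface
brctl show            # from bridge-utils, if installed
bridge link show      # iproute2 equivalent

# Details of the vxlan device: VNI, local endpoint, destination port
ip -d link show type vxlan
```

If the vxlan device is missing from the bridge on the compute node, replies from the controller have no way back to the instance.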

Is it normal that my DHCP namespace can't ping my compute node, but can ping my controller node, which is on the same network as the compute node?

Sideris (2018-07-03 07:48:07 -0500)

So I installed the compute service on my controller node, and it worked.

The problem comes from the compute node; I don't know how to debug it. Can someone help me, please?

Sideris (2018-07-03 08:45:04 -0500)
