
Instances don't get an IP

asked 2018-07-02 03:00:24 -0600 by Sideris, updated 2018-07-02 05:08:05 -0600

Hi all,

I have a problem with my OpenStack infrastructure. It is an offline infrastructure; the controller and compute nodes do not have access to the Internet. I followed this documentation for the deployment (Queens release): https://docs.openstack.org/install-guide/openstack-services.html

I followed all the instructions for installing Keystone, Glance, Nova and Neutron. Then, when I tried to deploy an instance, the instance did not get an IP address. There are no errors in the logs. In /var/lib/neutron/dhcp I can see the IP address reserved for the instance.

I can attach any files you need to help. First, this is my configuration:

Controller: 2 interfaces:

2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:be brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21a:64ff:fed0:34be/64 scope link
       valid_lft forever preferred_lft forever
3: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:c0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21a:64ff:fed0:34c0/64 scope link
       valid_lft forever preferred_lft forever
4: enp4s0.748@enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:be brd ff:ff:ff:ff:ff:ff
    inet 10.26.9.19/25 brd 10.26.9.127 scope global enp4s0.748
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:64ff:fed0:34be/64 scope link
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:26:3c:f1:3d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
8: tap39c1a9de-db@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq99275f2e-5a state UP group default qlen 1000
    link/ether 56:a8:df:50:99:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: tapd75c2984-2e@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 7a:da:65:b4:1b:d0 brd ff:ff:ff:ff:ff:ff link-netnsid 1
10: tap475e0c42-1d@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq99275f2e-5a state UP group default qlen 1000
    link/ether ca:71:3f:b1:e0:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 2
11: brqa7f0e57c-d0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:64:d0:34:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.20/24 brd 192.168.10.255 scope global brqa7f0e57c-d0
       valid_lft forever preferred_lft forever
    inet6 fe80::189c:87ff:fec0:f839/64 scope link
       valid_lft forever preferred_lft forever
12: tapa9663a5e-cd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brqa7f0e57c-d0 state UP group default qlen 1000
    link/ether 76:96 ...

Comments

Can you share:

  • ML2 config
  • details of networks to which the instances are connected

You could trace DHCP requests at different points between the instance and the dnsmasq process and find out where they (or the DHCP replies) disappear.

Bernd Bausch (2018-07-02 06:31:18 -0600)
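
For example, such a trace could look like this (a sketch only; the tap device name and the network ID are placeholders, not values taken from this thread):

compute:~ # brctl show                                    # find the tap device of the instance port
compute:~ # tcpdump -ne -i tap<PORT_ID> port 67 or port 68
control:~ # ip netns list                                 # find the qdhcp namespace of the network
control:~ # ip netns exec qdhcp-<NETWORK_ID> tcpdump -ne -i any port 67 or port 68

If the request is visible on the tap device but never inside the namespace, or the reply never makes it back, that narrows down the segment where the traffic is lost.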

I answer you in the answer post below.

Sideris (2018-07-02 08:00:24 -0600)

1 answer


answered 2018-07-02 07:59:52 -0600 by Sideris, updated 2018-07-05 02:14:51 -0600

This is the ML2 configuration for the controller node, ml2_conf.ini:

[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

linuxbridge_agent.ini:

[DEFAULT]
debug = True
[agent]
[linux_bridge]
physical_interface_mappings = provider:enp6s0
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.20
l2_population = true

For the compute node, linuxbridge_agent.ini:

[DEFAULT]
debug = True
[agent]
[linux_bridge]
physical_interface_mappings = provider:enp6s0
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 192.168.10.21
l2_population = true
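
With VXLAN and l2_population enabled like this, each node should bring up a vxlan-<VNI> interface whose local endpoint matches its local_ip. A quick sanity check (a sketch; VNI 1 is only an example, use the segmentation ID of your tenant network):

control:~ # ip -d link show vxlan-1                       # "local" should be 192.168.10.20
compute:~ # ip -d link show vxlan-1                       # "local" should be 192.168.10.21
control:~ # bridge fdb show dev vxlan-1                   # l2population should pre-populate entries for the other VTEP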

I traced the DHCP requests of the instance: I ran tcpdump in the DHCP namespace, in the router namespace, and on the controller and compute nodes. I can see the DHCP request reaching the DHCP namespace, but no reply comes back. I can share the Wireshark captures with you.

Thanks

UPDATE 05 July 2018

I think the problem was a desynchronization between the controller and compute nodes. I started the neutron-linuxbridge-agent on the compute node and then ran this command on the controller node:

/etc/init.d/neutron-linuxbridge-cleanup restart ; /etc/init.d/neutron-linuxbridge-agent restart

And, like magic, the problem was solved!
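
One way to confirm that the controller and compute agents are back in sync after such a restart (a sketch; the exact output format can vary by release):

control:~ # openstack network agent list
(both Linux bridge agent rows and the DHCP agent should show Alive = ":-)" and State = UP)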


Comments

Is DHCP enabled in your subnet?

control:~ #  openstack subnet show <SUBNET_ID> | grep dhcp
| enable_dhcp       | True                                 |
eblock (2018-07-02 09:37:21 -0600)

Hi,

Yes, DHCP is enabled on the subnet.

Sideris (2018-07-02 09:43:41 -0600)

When you set the address manually in the instance, do you have connectivity? And connectivity from the DHCP namespace to the instance?

If yes, something is wrong with DHCP. Your tracing results indicate that as well.

Bernd Bausch (2018-07-02 15:56:34 -0600)
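
For reference, such a manual test from the instance console could look like this (the addresses are only examples; use the fixed IP, prefix and gateway that Neutron reserved for the port):

instance:~ # ip addr add 192.168.100.5/24 dev eth0
instance:~ # ip route add default via 192.168.100.1
instance:~ # ping 192.168.100.2                           # e.g. the address of the DHCP port/namespace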

Another idea: I once had problems with a cloud deployed on VirtualBox VMs. Traffic to the instances running on a VirtualBox compute node was swallowed somewhere (memory is hazy).

Bernd Bausch (2018-07-02 15:59:38 -0600)

Other things to check: are all agents up, in particular the DHCP agent(s), and what does your DHCP config look like?

Bernd Bausch (2018-07-02 18:10:39 -0600)
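
A rough way to check that on the controller (the network ID is a placeholder; the path is the one already mentioned in the question):

control:~ # ps -ef | grep dnsmasq                         # one dnsmasq process per DHCP-enabled network
control:~ # cat /var/lib/neutron/dhcp/<NETWORK_ID>/host   # MAC/IP reservations dnsmasq will answer for
control:~ # cat /var/lib/neutron/dhcp/<NETWORK_ID>/leases # addresses actually handed out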
