Some compute nodes cannot get an IP address

Hello.

I'm installing OpenStack Liberty and have run into an issue.

I have 3 nodes: 2 compute nodes and 1 combined controller/network node. All nodes are physical servers.

These are the network interfaces of the nodes:

Controller node: 2 interfaces (ens160: 10.0.159.14, for GRE traffic and management; br-ex: 10.0.159.50, for the external network)

First compute node: 1 interface (ens160: 10.0.159.11)

Second compute node: 1 interface (ens160: 10.0.159.12)

The tenant network type is GRE and the provider network is a flat network.

Problem:

VMs on one of the compute nodes (which one changes) cannot get an IP from DHCP. There are no errors in the logs. I checked the packet flow with tcpdump: the DHCP agent allocates an address and sends a reply, but the VM never receives it.

VMs on one compute node get an IP from DHCP right away, but VMs on the other compute node never get one. Which node fails changes whenever all the nodes are rebooted.
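
For reference, this is roughly how I check which node is affected after each reboot (a sketch of my checks; the agent service name may differ by distribution):

# on the controller: all OVS/DHCP/L3 agents should show as alive in the list
neutron agent-list

# on each compute node: make sure the OVS agent itself is running
systemctl status neutron-openvswitch-agent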

I will upload my configuration below, so please take a look and help me find the problem. T_T

Thank you!


Below is the output of: ip netns exec qdhcp-828b9ade-991d-470c-bda6-2b28e0005fd3 tcpdump -i tap58bb7631-6a

19:49:47.379241 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:0a:6a:5f (oui Unknown), length 290
19:49:47.379968 IP OPENSTACK.bootps > 192.168.20.25.bootpc: BOOTP/DHCP, Reply, length 312
19:49:47.380342 IP OPENSTACK.34417 > 10.0.4.31.domain: 28367+ PTR? 25.20.168.192.in-addr.arpa. (44)
19:49:47.382743 IP 10.0.4.31.domain > OPENSTACK.34417: 28367 NXDomain 0/1/0 (94)
19:49:48.039383 IP6 :: > ff02::1:ff0a:6a5f: ICMP6, neighbor solicitation, who has fe80::f816:3eff:fe0a:6a5f, length 24
19:49:48.039686 IP OPENSTACK.36019 > 10.0.4.31.domain: 18140+ PTR? f.5.a.6.a.0.f.f.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.f.f.ip6.arpa. (90)
19:49:48.204909 IP 10.0.4.31.domain > OPENSTACK.36019: 18140 NXDomain 0/1/0 (160)
19:49:48.205232 IP OPENSTACK.48926 > 10.0.4.31.domain: 1592+ PTR? f.5.a.6.a.0.e.f.f.f.e.3.6.1.8.f.0.0.0.0.0.0.0.0.0.0.0.0.0.8.e.f.ip6.arpa. (90)
19:49:48.207743 IP 10.0.4.31.domain > OPENSTACK.48926: 1592 NXDomain 0/1/0 (125)
19:49:49.039358 IP6 fe80::f816:3eff:fe0a:6a5f > ff02::2: ICMP6, router solicitation, length 16
19:49:49.867075 IP6 fe80::f816:3eff:fe0a:6a5f > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
19:49:50.281388 IP6 fe80::54e7:6ff:fed8:7df6 > ff02::2: ICMP6, router solicitation, length 16
19:49:50.953376 IP6 fe80::d0be:25ff:fe6d:8bd3 > ff02::2: ICMP6, router solicitation, length 16
19:49:52.394237 ARP, Request who-has 192.168.20.25 tell OPENSTACK, length 28
19:49:52.394241 ARP, Request who-has OPENSTACK tell 192.168.20.1, length 28
19:49:52.394264 ARP, Reply OPENSTACK is-at fa:16:3e:1f:c8:3a (oui Unknown), length 28
19:49:53.050998 IP6 fe80::f816:3eff:fe0a:6a5f > ff02::2: ICMP6, router solicitation, length 16
19:49:53.396238 ARP, Request who-has 192.168.20.25 tell OPENSTACK, length 28
19:49:54.398240 ARP, Request who-has 192.168.20.25 tell OPENSTACK, length 28
19:49:57.059054 IP6 fe80::f816:3eff:fe0a:6a5f > ff02::2: ICMP6, router solicitation, length 16
19:50:47.440324 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:0a:6a:5f (oui Unknown), length 290
19:50:47.441359 IP OPENSTACK.bootps > 192.168.20.25.bootpc: BOOTP/DHCP, Reply, length 312
19:50:52.442244 ARP, Request who-has 192.168.20.25 tell OPENSTACK, length 28
19:50:53.444239 ARP, Request who-has 192.168.20.25 tell OPENSTACK, length 28
19:50:54.446239 ARP, Request who-has 192.168.20.25 tell OPENSTACK, length 28
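
The replies clearly leave the dhcp namespace, and the ARP requests for 192.168.20.25 never get answered, so my next step is to check whether the packets actually reach the affected compute node over the GRE tunnel. Roughly what I run on that node (ens160 is the interface carrying tunnel traffic in my setup; the tap device name below is only a placeholder, I take the real one from the instance's port ID):

# does the encapsulated traffic from the controller (10.0.159.14) arrive at all? (47 = GRE)
tcpdump -n -i ens160 ip proto 47 and host 10.0.159.14

# is anything delivered to the instance's tap device? (replace with the real tap name)
tcpdump -n -i tapXXXXXXXX-XX port 67 or port 68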

And the VM's console messages:

[    0.684112] cpuidle: using governor menu
[    0.685427] EFI Variables Facility v0.08 2004-May-17
[    0.687251] TCP cubic registered
[    0.688495] NET: Registered protocol family 10
[    0.691495] NET: Registered protocol family 17
[    0.693018] Registering the dns_resolver key type
[    0.694687] registered taskstats version 1
[    0.703655]   Magic number: 7:756:831
[    0.705126] rtc_cmos 00:01: setting system clock to 2015-11-25 10:49:46 UTC (1448448586)
[    0.707751] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
[    0.709584] EDD information not available.
[    0.781955] Freeing unused kernel memory: 928k freed
[    0.784174] Write protecting the kernel read-only data: 12288k
[    0.790777] Freeing unused kernel memory: 1596k freed
[    0.797036] Freeing unused kernel memory: 1184k freed

info: initramfs: up at 0.80
GROWROOT: CHANGED: partition=1 start=16065 old: size=64260 end=80325 new: size=2072385,end=2088450
info: initramfs loading root from /dev/vda1
info: /etc/init.d/rc.sysinit: up at 1.13
info: container: none
Starting logging: OK
modprobe: module virtio_blk not found in modules.dep
modprobe: module virtio_net not found in modules.dep
WARN: /etc/rc3.d/S10-load-modules failed
Initializing random number generator... done.
Starting acpid: OK
cirros-ds 'local' up at 1.41
no results found for mode=local. up 1.49. searched: nocloud configdrive ec2
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
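
Since the instance just keeps sending DISCOVERs and never gets an answer, I also want to confirm that the tunnel flows exist on the compute side and that nothing on the host is dropping GRE. A rough sketch of those checks on the affected compute node (whether the host firewall is even active depends on the distribution):

# the tunnel bridge flow table should not be empty
ovs-ofctl dump-flows br-tun

# is the host firewall perhaps dropping GRE (IP protocol 47)?
iptables -S | grep -i gre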

Below is the controller node's ovs-vsctl show output:


9e87dcbd-33ea-403f-a679-8bcf4a946152

Bridge br-ex
    Port "enp19s0f0"
        Interface "enp19s0f0"
    Port br-ex
        Interface br-ex
            type: internal
    Port "qg-dfbe4a03-bc"
        Interface "qg-dfbe4a03-bc"
            type: internal
Bridge br-tun
    fail_mode: secure
    Port "gre-0a009f0b"
        Interface "gre-0a009f0b"
            type: gre
            options: {df_default="true", in_key=flow, local_ip="10.0.159.14", out_key=flow, remote_ip="10.0.159.11"}
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port br-tun
        Interface br-tun
            type: internal
    Port "gre-0a009f0c"
        Interface "gre-0a009f0c"
            type: gre
            options: {df_default="true", in_key=flow, local_ip="10.0.159.14", out_key=flow, remote_ip="10.0.159.12"}
Bridge br-int
    fail_mode: secure
    Port "tap58bb7631-6a"
        tag: 1
        Interface "tap58bb7631-6a"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "qr-6ee78b7a-6d"
        tag: 1
        Interface "qr-6ee78b7a-6d"
            type: internal
    Port br-int
        Interface br-int
            type: internal
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
ovs_version: "2.4.0"
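
I can paste the compute nodes' ovs-vsctl show as well if needed. What I expect to see there (my assumption) is a br-tun whose gre-... port has remote_ip pointing back to 10.0.159.14, which I check with:

# on each compute node
ovs-vsctl show
ovs-vsctl list-ports br-tun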

And here is my ml2.ini (the OVS agent section):

integration_bridge = br-int

# Only used for the agent if tunnel_id_ranges is not empty for
# the server.  In most cases, the default value should be fine.
#
# tunnel_bridge = br-tun
tunnel_bridge = br-tun

# Peer patch port in integration bridge for tunnel bridge
# int_peer_patch_port = patch-tun

# Peer patch port in tunnel bridge for integration bridge
# tun_peer_patch_port = patch-int

# Uncomment this line for the agent if tunnel_id_ranges is not
# empty for the server. Set local-ip to be the local IP address of
# this hypervisor.
#
# local_ip =
local_ip =10.0.159.14

# (ListOpt) Comma-separated list of <physical_network>:<bridge> tuples
# mapping physical network names to the agent's node-specific OVS
# bridge names to be used for flat and VLAN networks. The length of
# bridge names should be no more than 11. Each bridge must
# exist, and should have a physical network interface configured as a
# port. All physical networks configured on the server should have
# mappings to appropriate bridges on each agent.
#
# Note: If you remove a bridge from this mapping, make sure to disconnect it
# from the integration bridge as it won't be managed by the agent anymore.
#
# bridge_mappings =
# Example: bridge_mappings = physnet1:br-eth1

bridge_mappings = public:br-ex

# (BoolOpt) Use veths instead of patch ports to interconnect the integration
# bridge to physical networks. Support kernel without ovs patch port support
# so long as it is set to True.
# use_veth_interconnection = False

# (StrOpt) Which OVSDB backend to use, defaults to 'vsctl'
# vsctl - The backend based on executing ovs-vsctl
# native - The backend based on using native OVSDB
# ovsdb_interface = vsctl

# (StrOpt) The connection string for the native OVSDB backend
# To enable ovsdb-server to listen on port 6640:
#   ovs-vsctl set-manager ptcp:6640:127.0.0.1
# ovsdb_connection = tcp:127.0.0.1:6640

# (StrOpt) OpenFlow interface to use.
# 'ovs-ofctl' or 'native'.
# of_interface = ovs-ofctl
#
# (IPOpt)
# Address to listen on for OpenFlow connections.
# Used only for 'native' driver.
# of_listen_address = 127.0.0.1
#
# (IntOpt)
# Port to listen on for OpenFlow connections.
# Used only for 'native' driver.
# of_listen_port = 6633
#
# (IntOpt)
# Timeout in seconds to wait for the local switch connecting the controller.
# Used only for 'native' driver.
# of_connect_timeout=30
#
# (IntOpt)
# Timeout in seconds to wait for a single OpenFlow request.
# Used only for 'native' driver.
# of_request_timeout=10

# (StrOpt) ovs datapath to use.
# 'system' is the default value and corresponds to the kernel datapath.
# To enable the userspace datapath set this value to 'netdev'
# datapath_type = system
enable_tunneling=True

[agent]
# Log agent heartbeats from this OVS agent
# log_agent_heartbeats = False

# Agent's polling interval in seconds
# polling_interval = 2
polling_interval = 2

# Minimize polling by monitoring ovsdb for interface changes
# minimize_polling = True

# When minimize_polling = True, the number of seconds to wait before
# respawning the ovsdb monitor after losing communication with it
# ovsdb_monitor_respawn_interval = 30

# (ListOpt) The types of tenant network tunnels supported by the agent.
# Setting this will enable tunneling support in the agent. This can be set to
# either 'gre' or 'vxlan'. If this is unset, it will default to [] and
# disable tunneling support in the agent.
# You can specify as many values here as your compute hosts supports.
#
# tunnel_types =
tunnel_types =gre
# Example: tunnel_types = gre
# Example: tunnel_types = vxlan
# Example: tunnel_types = vxlan, gre

# (IntOpt) The port number to utilize if tunnel_types includes 'vxlan'. By
# default, this will make use of the Open vSwitch default value of '4789' if
# not specified.
#
# vxlan_udp_port =
# Example: vxlan_udp_port = 8472

# (IntOpt) This is the MTU size of veth interfaces.
# Do not change unless you have a good reason to.
# The default MTU size of veth interfaces is 1500.
# This option has no effect if use_veth_interconnection is False
# veth_mtu =
# Example: veth_mtu = 1504

# (BoolOpt) Flag to enable l2-population extension. This option should only be
# used in conjunction with ml2 plugin and l2population mechanism driver. It'll
# enable plugin to populate remote ports macs and IPs (using fdb_add/remove
# RPC callbacks instead of tunnel_sync/update) on OVS agents in order to
# optimize tunnel management.
#
# l2_population = False
l2_population = False

# Enable local ARP responder. Requires OVS 2.1. This is only used by the l2
# population ML2 MechanismDriver.
#
# arp_responder = False
arp_responder = False

# Enable suppression of ARP responses that don't match an IP address that
# belongs to the port from which they originate.
# Note: This prevents the VMs attached to this agent from spoofing,
# it doesn't protect them from other devices which have the capability to spoof
# (e.g. bare metal or VMs attached to agents without this flag set to True).
# Requires a version of OVS that can match ARP headers.
#
# prevent_arp_spoofing = True
prevent_arp_spoofing = True

# (BoolOpt) Set or un-set the don't fragment (DF) bit on outgoing IP packet
# carrying GRE/VXLAN tunnel. The default value is True.
#
# dont_fragment = True

# (BoolOpt) Set to True on L2 agents to enable support
# for distributed virtual routing.
#
# enable_distributed_routing = False
enable_distributed_routing = False

# (IntOpt) Set new timeout in seconds for new rpc calls after agent receives
# SIGTERM. If value is set to 0, rpc timeout won't be changed"
#
# quitting_rpc_timeout = 10

# (ListOpt) Extensions list to use
# Example: extensions = qos
#
# extensions =

# (BoolOpt) Set or un-set the checksum on outgoing IP packet
# carrying GRE/VXLAN tunnel. The default value is False.
#
# tunnel_csum = False

# (StrOpt) agent_type to report.
# This config entry allows configuration of the neutron agent type reported
# by the default ovs l2 agent. This allows multiple ovs mechanism drivers
# to share a common ovs agent implementation. NOTE: this value will be
# removed in the mitaka cycle.
#
# agent_type = 'Open vSwitch agent'
drop_flows_on_start=False

[securitygroup]
# Firewall driver for realizing neutron security group function.
# firewall_driver = neutron.agent.firewall.NoopFirewallDriver
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# Example: firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True
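
The compute nodes use the same ml2.ini layout; the only per-node difference should be local_ip. A quick sanity check I run on each compute node (the file path is my assumption of where the OVS agent config lives there):

# on 10.0.159.11 I expect local_ip = 10.0.159.11, on 10.0.159.12 local_ip = 10.0.159.12
grep -E '^(local_ip|tunnel_types|enable_tunneling|bridge_mappings)' /etc/neutron/plugins/ml2/openvswitch_agent.ini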