
DNSMASQ-DHCP not releasing IPs after VM deletion

asked 2017-01-10 12:29:12 -0600

I am running an all-in-one Newton lab on the Ubuntu distro. When a VM is deleted, a dhcp_release message is sent from neutron-dhcp-agent to dnsmasq-dhcp:

2017-01-10 13:23:53.170 5107 DEBUG neutron.agent.linux.utils [req-1d95b0e0-cb91-4980-8a94-1fd9b0490dc7 d06814ed24a644a1bc8f78bca2b8534c b9cb24aed8c54a638d2f95c7c97574d7 - - -] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qdhcp-56b9e9d3-36d3-4d97-b99e-ecdae7b1491a', 'dhcp_release', 'ns-ae0d4eab-07', '10.2.1.64', 'fa:16:3e:15:79:92'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:89

However, this message is lost and never reaches the dnsmasq-dhcp daemon:

Jan 10 13:23:48 rel-openstack1 dnsmasq-dhcp[6034]: DHCPDISCOVER(ns-ae0d4eab-07) 1c:c1:de:be:54:d6 no address available
Jan 10 13:23:51 rel-openstack1 dnsmasq-dhcp[6034]: read /var/lib/neutron/dhcp/56b9e9d3-36d3-4d97-b99e-ecdae7b1491a/host
Jan 10 13:23:55 rel-openstack1 dnsmasq-dhcp[6034]: DHCPDISCOVER(ns-ae0d4eab-07) 78:e7:d1:be:83:e6 no address available
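To narrow down where the release gets lost, the same command the agent builds in the debug log above can be replayed by hand inside the qdhcp namespace and the dnsmasq log watched for a DHCPRELEASE. This is only a sketch (not neutron's actual code); the namespace, interface, IP, and MAC values are copied from the log above, and `build_release_cmd` is a hypothetical helper:

```python
import subprocess


def build_release_cmd(namespace, interface, ip_addr, mac):
    """Return the argv that runs dhcp_release inside a network namespace."""
    return ["ip", "netns", "exec", namespace,
            "dhcp_release", interface, ip_addr, mac]


cmd = build_release_cmd(
    "qdhcp-56b9e9d3-36d3-4d97-b99e-ecdae7b1491a",  # DHCP namespace from the log
    "ns-ae0d4eab-07",                              # dnsmasq's port in the namespace
    "10.2.1.64",                                   # IP of the deleted VM
    "fa:16:3e:15:79:92",                           # MAC of the deleted VM
)

# Needs root on the node running the DHCP agent; after running it, check
# whether dnsmasq logs a DHCPRELEASE for that lease:
# subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

If the manual release does show up in the dnsmasq log, the problem is likely in the agent's rootwrap path rather than in dnsmasq itself.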

Note the time intervals. Here is the networking configuration of the box:

root@rel-openstack1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master brq56b9e9d3-36 state UP group default qlen 1000
    link/ether 64:31:50:4a:13:38 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6631:50ff:fe4a:1338/64 scope link
       valid_lft forever preferred_lft forever
3: enp3s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 64:31:50:4a:13:3a brd ff:ff:ff:ff:ff:ff
4: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 64:31:50:4a:13:3c brd ff:ff:ff:ff:ff:ff
5: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 64:31:50:4a:13:3e brd ff:ff:ff:ff:ff:ff
6: ens1f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 1c:c1:de:7a:cf:c8 brd ff:ff:ff:ff:ff:ff
7: ens1f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 1c:c1:de:7a:cf:ca brd ff:ff:ff:ff:ff:ff
8: brq56b9e9d3-36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 64:31:50:4a:13:38 brd ff:ff:ff:ff:ff:ff
    inet 10.2.1.60/24 brd 10.2.1.255 scope global brq56b9e9d3-36
       valid_lft forever preferred_lft forever
9: tapae0d4eab-07@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brq56b9e9d3-36 state UP group default qlen 1000
    link/ether ba:ed:6e:8c:1c:6a brd ff:ff:ff:ff:ff:ff link-netnsid 0
root@rel-openstack1:~# ip netns
qdhcp-56b9e9d3-36d3-4d97-b99e-ecdae7b1491a (id: 0)
root@rel-openstack1:~# ip netns exec qdhcp-56b9e9d3-36d3-4d97-b99e-ecdae7b1491a ... (more)


1 answer


answered 2017-01-10 14:13:16 -0600

dbaxps

updated 2017-01-10 14:20:08 -0600

See https://ask.openstack.org/en/question...
In particular, my feedback:

import logging

from neutronclient.neutron import client

auth_url = "http://192.168.33.11:5000/v2.0"

logging.basicConfig(level=logging.DEBUG)

# Authenticate with the admin credentials of the project that owns the subnet
neutron = client.Client('2.0', username="demo_admin", password="password",
                        tenant_name="demo", auth_url=auth_url)

# List subnets to find the ID whose allocation pools need updating
print(neutron.list_subnets())

# Rebuild the allocation pools, splitting them around the addresses to exclude
req = {"subnet": {"allocation_pools": [{"start": "10.0.2.3", "end": "10.0.2.15"},
                                       {"start": "10.0.2.17", "end": "10.0.2.17"},
                                       {"start": "10.0.2.19", "end": "10.0.2.254"}]}}

neutron.update_subnet("d5d48930-7bfb-4f0c-8968-13f8af785868", req)

I mean, if the project is "demo" (for instance), you need the credentials of the admin for that particular project.

The Keystone API version is assumed to be 2.0; otherwise, see the CLI approach, but I still believe you would need to source the credentials
of the admin for the particular project (tenant).
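Sourcing an openrc file exports the usual OS_* environment variables, so the hard-coded credentials in the snippet above can instead be read from the environment. A minimal sketch, assuming a standard openrc layout; `creds_from_env` is a hypothetical helper, not part of neutronclient:

```python
def creds_from_env(environ):
    """Map the usual OS_* openrc variables to neutronclient keyword args."""
    return {"username": environ["OS_USERNAME"],
            "password": environ["OS_PASSWORD"],
            "tenant_name": environ["OS_TENANT_NAME"],
            "auth_url": environ["OS_AUTH_URL"]}


# After sourcing the project admin's openrc, the client above would become:
# neutron = client.Client('2.0', **creds_from_env(os.environ))
creds = creds_from_env({"OS_USERNAME": "demo_admin",
                        "OS_PASSWORD": "password",
                        "OS_TENANT_NAME": "demo",
                        "OS_AUTH_URL": "http://192.168.33.11:5000/v2.0"})
print(creds["tenant_name"])
```

This keeps the per-project admin credentials out of the script itself, which matters if the script is shared between projects.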

See original source code at http://samos-it.com/posts/neutron-mul...


