Erik Launay's profile - activity

2019-05-17 01:52:30 -0500 received badge  Notable Question (source)
2019-05-17 01:52:30 -0500 received badge  Famous Question (source)
2019-04-04 14:38:37 -0500 received badge  Popular Question (source)
2019-04-03 23:51:23 -0500 answered a question dnsmasq-dhcp - DHCPDISCOVER - no address available

To summarize:

  • I have 2 flat networks: provider & provider3
  • my instance gets an IP address if I use the provider network / DHCP
  • my instance doesn't get any IP address if I use the provider3 network

Please see my configuration / settings in the original question.

Here is the result of my investigation:

With provider network:

I have in /var/log/messages
DHCPDISCOVER(ns-b5b77346-88) fa:16:3e:fd:0e:ff
DHCPDISCOVER(ns-e02a794c-0d) fa:16:3e:fd:0e:ff no address available
DHCPOFFER(ns-b5b77346-88) 10.0.0.162 fa:16:3e:fd:0e:ff
DHCPREQUEST(ns-b5b77346-88) 10.0.0.162 fa:16:3e:fd:0e:ff
DHCPACK(ns-b5b77346-88) 10.0.0.162 fa:16:3e:fd:0e:ff host-10-0-0-162

Using tcpdump on the provider network, I see only:
DHCPDISCOVER(ns-b5b77346-88) fa:16:3e:fd:0e:ff
DHCPOFFER(ns-b5b77346-88) 10.0.0.162 fa:16:3e:fd:0e:ff
DHCPREQUEST(ns-b5b77346-88) 10.0.0.162 fa:16:3e:fd:0e:ff
DHCPACK(ns-b5b77346-88) 10.0.0.162 fa:16:3e:fd:0e:ff host-10-0-0-162
=> As you can see, I don't have the "DHCPDISCOVER(ns-e02a794c-0d) fa:16:3e:fd:0e:ff no address available" line (which is the answer from the provider3 network's DHCP server)
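
A capture like the following (a sketch; it assumes ns-b5b77346-88 is the DHCP port living in the iapcf_net namespace listed further down, which matches the 10.0.0.162 offer above) shows exactly which packets each dnsmasq sees:

    ip netns exec qdhcp-faa8349f-a940-40c9-acf0-2ee11ae9cd2b \
        tcpdump -n -e -i ns-b5b77346-88 port 67 or port 68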

From the instance log:
dhcpc (v1.20.1) started
Sending discover...
Sending select for 10.0.0.162...
Lease of 10.0.0.162 obtained, lease time 86400

With provider3 network:

I have in /var/log/messages
DHCPDISCOVER(ns-e02a794c-0d) fa:16:3e:44:5f:32
DHCPDISCOVER(ns-b5b77346-88) fa:16:3e:44:5f:32 no address available
DHCPOFFER(ns-e02a794c-0d) 10.100.100.151 fa:16:3e:44:5f:32
DHCPREQUEST(ns-e02a794c-0d) 10.100.100.151 fa:16:3e:44:5f:32
DHCPACK(ns-e02a794c-0d) 10.100.100.151 fa:16:3e:44:5f:32 host-10-100-100-151

=> same behavior as with the provider network, except this time it is the provider network's DHCP that says no address available (which is expected)

Using tcpdump on the provider network, I see:
DHCPDISCOVER(ns-e02a794c-0d) fa:16:3e:44:5f:32
DHCPDISCOVER(ns-b5b77346-88) fa:16:3e:44:5f:32 no address available
DHCPOFFER(ns-e02a794c-0d) 10.100.100.151 fa:16:3e:44:5f:32
DHCPREQUEST(ns-e02a794c-0d) 10.100.100.151 fa:16:3e:44:5f:32
DHCPACK(ns-e02a794c-0d) 10.100.100.151 fa:16:3e:44:5f:32 host-10-100-100-151

=> here I see the NAK, and therefore my instance thinks there is no address available, and of course the address is not assigned to the instance.

From the instance log:
Sending discover...
Sending select for 10.100.100.151...
Received DHCP NAK

Questions: why do I see the NAK for provider3 on the provider3 network and not for the provider network? Where do I tune this? What am I doing wrong?
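
One thing worth checking (a sketch, under the assumption that the two flat networks share a layer-2 segment somewhere, since both dnsmasq instances log the same DISCOVER): comparing the bridge memberships on the networking node shows whether each ns- tap really sits on its own bridge:

    # list each Linux bridge and its member interfaces (needs bridge-utils)
    brctl show
    # or, with iproute2
    ip -d link show type bridge_slave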

2019-04-03 19:34:44 -0500 commented question dnsmasq-dhcp - DHCPDISCOVER - no address available

I'm not sure what "ns-b5b77346-88", the one answering "no address available", is.

In fact, when you look at the exchange, everything seems OK until "ns-b5b77346-88" says "no address available".

I need to find out what ns-b5b77346-88 is.
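
The ns- name comes from the Neutron port that dnsmasq listens on, so it can be looked up from the port list (a sketch, assuming the usual naming where ns-b5b77346-88 corresponds to a port ID starting with b5b77346):

    openstack port list | grep b5b77346
    # device_owner and network_id then show which network's DHCP port it is
    openstack port show <port-id> -c device_owner -c network_id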

2019-04-03 19:23:55 -0500 commented question dnsmasq-dhcp - DHCPDISCOVER - no address available
  1. Neutron proposed 10.100.100.157

  2. The DHCP server offered 10.100.100.157

  3. The DHCP server said "no address available"

2019-04-03 19:20:39 -0500 commented question dnsmasq-dhcp - DHCPDISCOVER - no address available

….

DHCPDISCOVER(ns-b5b77346-88) fa:16:3e:98:e7:3c no address available

...

2019-04-03 19:20:31 -0500 commented question dnsmasq-dhcp - DHCPDISCOVER - no address available

DHCPDISCOVER(ns-e02a794c-0d) fa:16:3e:98:e7:3c

DHCPOFFER(ns-e02a794c-0d) 10.100.100.157 fa:16:3e:98:e7:3c

DHCPREQUEST(ns-e02a794c-0d) 10.100.100.157 fa:16:3e:98:e7:3c

DHCPACK(ns-e02a794c-0d) 10.100.100.157 fa:16:3e:98:e7:3c host-10-100-100-157

...

2019-04-03 19:20:08 -0500 commented question dnsmasq-dhcp - DHCPDISCOVER - no address available

Hi

Neutron did assign 10.100.100.157 (confirmed in the dashboard Compute -> Instances) to my instance.

From /var/log/messages:

2019-04-02 19:04:00 -0500 answered a question dnsmasq-dhcp - DHCPDISCOVER - no address available

Anyone please? I would really appreciate some help. Thanks

2019-04-02 02:29:42 -0500 asked a question dnsmasq-dhcp - DHCPDISCOVER - no address available

Hi

I have been looking all day for hints and for people who may have solved this issue...

Anyway, here is my issue: I have 2 flat networks (provider and provider3). DHCP works perfectly for one of them (provider) and always answers "no address available" for the other one (provider3)...

On the compute node, linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:em4,provider3:em3

On the Controller / Networking node:

[linux_bridge]
physical_interface_mappings = provider:p2p1,provider3:p2p2
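
A quick sanity check of these mappings (a sketch; run each command on the node that owns the interface) is to confirm that the mapped interfaces exist and are up:

    # on the compute node
    ip link show em4
    ip link show em3
    # on the controller / networking node
    ip link show p2p1
    ip link show p2p2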

As I just said, on the provider network I can create an instance, get an IP address through DHCP, and everything works just fine. However, on provider3, I always get:

Apr  1 22:54:05 networkingnode dnsmasq-dhcp[6803]: DHCPREQUEST(ns-e02a794c-0d) 10.100.100.157 fa:16:3e:98:e7:3c
Apr  1 22:54:05 networkingnode dnsmasq-dhcp[6803]: DHCPACK(ns-e02a794c-0d) 10.100.100.157 fa:16:3e:98:e7:3c host-10-100-100-157
Apr  1 22:54:08 networkingnode dnsmasq-dhcp[6801]: DHCPDISCOVER(ns-b5b77346-88) fa:16:3e:98:e7:3c no address available

A few commands seem to show that the system is OK:

[root@networkingnode ~]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+----------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host           | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------+-------------------+-------+----------------+---------------------------+
| 67cf9ef3-6613-4d49-b781-550d7c1eff31 | Linux bridge agent | computenode    |                   | :-)   | True           | neutron-linuxbridge-agent |
| a0c0fbbe-ed50-4233-95cc-a8acfbe2ad86 | L3 agent           | networkingnode | nova              | :-)   | True           | neutron-l3-agent          |
| c9a98eff-efeb-4a97-9a2e-ecb48f0760fa | Linux bridge agent | networkingnode |                   | :-)   | True           | neutron-linuxbridge-agent |
| e7d005a2-1987-4d6d-a45c-fc6b21292e2c | NIC Switch agent   | computenode    |                   | :-)   | True           | neutron-sriov-nic-agent   |
| eba6a2c0-229c-4ab2-be5d-0975c2a45e3f | Metadata agent     | networkingnode |                   | :-)   | True           | neutron-metadata-agent    |
| f6443b89-96a1-4c43-a275-86a6496e9445 | DHCP agent         | networkingnode | nova              | :-)   | True           | neutron-dhcp-agent        |

Neutron net-list:

[root@networkingnode faa8349f-a940-40c9-acf0-2ee11ae9cd2b]# neutron net-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------------+----------------------------------+------------------------------------------------------+
| id                                   | name       | tenant_id                        | subnets                                              |
+--------------------------------------+------------+----------------------------------+------------------------------------------------------+
| d3a32ef4-8ec6-46cf-b9c7-6bc2c41d01d0 | net_on_em3 | a72a5d6b06d14b63acec9774146b0f6e | 3b317f95-3db7-479d-8c60-7446e5928ed8 10.100.100.0/24 |
| faa8349f-a940-40c9-acf0-2ee11ae9cd2b | iapcf_net  | a72a5d6b06d14b63acec9774146b0f6e | 9f517471-5b10-4cc0-90ba-fd7520ceb01b 10.0.0.0/24     |
+--------------------------------------+------------+----------------------------------+------------------------------------------------------+

Neutron subnet-list:

[root@networkingnode ~]# neutron subnet-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------------+----------------------------------+-----------------+------------------------------------------------------+
| id                                   | name       | tenant_id                        | cidr            | allocation_pools                                     |
+--------------------------------------+------------+----------------------------------+-----------------+------------------------------------------------------+
| 3b317f95-3db7-479d-8c60-7446e5928ed8 | net_on_em3 | 18c0e75179014830a7ca2375981a205f | 10.100.100.0/24 | {"start": "10.100.100.150", "end": "10.100.100.250"} |
| 9f517471-5b10-4cc0-90ba-fd7520ceb01b | iapcf_net  | 18c0e75179014830a7ca2375981a205f | 10.0.0.0/24     | {"start": "10.0.0.150", "end": "10.0.0.250"}         |
+--------------------------------------+------------+----------------------------------+-----------------+------------------------------------------------------+

ip netns list:

[root@networkingnode ~]# ip netns list
qdhcp-d3a32ef4-8ec6-46cf-b9c7-6bc2c41d01d0 (id: 1)
qdhcp-faa8349f-a940-40c9-acf0-2ee11ae9cd2b (id: 0)
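
Each qdhcp namespace should contain exactly one DHCP port; a sketch of how to verify which interface and subnet live in each namespace, and which interface each dnsmasq binds to:

    ip netns exec qdhcp-d3a32ef4-8ec6-46cf-b9c7-6bc2c41d01d0 ip addr
    ip netns exec qdhcp-faa8349f-a940-40c9-acf0-2ee11ae9cd2b ip addr
    # the dnsmasq command lines show the interface each instance uses
    ps -ef | grep dnsmasq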

[root@networkingnode faa8349f-a940-40c9-acf0-2ee11ae9cd2b]# neutron dhcp-agent-list-hosting-net iapcf_net
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+----------------+----------------+-------+
| id                                   | host           | admin_state_up | alive |
+--------------------------------------+----------------+----------------+-------+
| f6443b89-96a1-4c43-a275-86a6496e9445 | networkingnode | True           | :-)   |
+--------------------------------------+----------------+----------------+-------+

[root@networkingnode faa8349f-a940-40c9-acf0-2ee11ae9cd2b]# neutron net-list-on-dhcp-agent f6443b89-96a1-4c43-a275-86a6496e9445
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+------------+----------------------------------+------------------------------------------------------+
| id                                   | name       | tenant_id                        | subnets                                              |
+--------------------------------------+------------+----------------------------------+------------------------------------------------------+
| d3a32ef4-8ec6-46cf-b9c7-6bc2c41d01d0 | net_on_em3 | a72a5d6b06d14b63acec9774146b0f6e | 3b317f95-3db7-479d-8c60-7446e5928ed8 10.100.100.0/24 |
| faa8349f-a940-40c9-acf0-2ee11ae9cd2b | iapcf_net  | a72a5d6b06d14b63acec9774146b0f6e | 9f517471-5b10-4cc0-90ba-fd7520ceb01b 10.0.0.0/24     |
+--------------------------------------+------------+----------------------------------+------------------------------------------------------+

[root@networkingnode faa8349f-a940-40c9-acf0-2ee11ae9cd2b]# neutron agent-show f6443b89-96a1-4c43-a275-86a6496e9445
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+---------------------+----------------------------------------------------------+
| Field               | Value                                                    |
+---------------------+----------------------------------------------------------+
| admin_state_up      | True                                                     |
| agent_type          | DHCP agent                                               |
| alive               | True                                                     |
| availability_zone   | nova                                                     |
| binary              | neutron-dhcp-agent                                       |
| configurations      | {                                                        |
|                     |      "subnets": 2,                                       |
|                     |      "dhcp_lease_duration": 86400,                       |
|                     |      "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",  |
|                     |      "networks": 2,                                      |
|                     |      "log_agent_heartbeats": false,                      |
|                     |      "ports": 3                                          |
|                     | }                                                        |
| created_at          | 2018-06-20 21:26:29                                      |
| description         |                                                          |
| heartbeat_timestamp | 2019-04-02 05:43:31                                      |
| host                | networkingnode                                           |
| id                  | f6443b89-96a1-4c43-a275-86a6496e9445                     |
| started_at          | 2019-04-02 ...
2018-09-26 01:01:06 -0500 received badge  Student (source)
2018-09-26 01:00:31 -0500 marked best answer Heat::ResourceGroup

Hi

I have been using a Heat template to create 3 servers and it works just fine.

Example:

  ALIS-MF-01:
    type: OS::Nova::Server
    properties:
      name: ALIS-MF-01
      …
          template: { get_file: alis-mf-script.sh }
          params:
            …
            $alis_mf_hostname: { get_param: alis_mf_01_hostname }
            ...

  ALIS-MF-02:
    type: OS::Nova::Server
    properties:
      name: ALIS-MF-02
      …
          template: { get_file: alis-mf-script.sh }
          params:
            …
            $alis_mf_hostname: { get_param: alis_mf_02_hostname }
            ...

  ALIS-MF-03:
    type: OS::Nova::Server
    properties:
      name: ALIS-MF-03
      …
          template: { get_file: alis-mf-script.sh }
          params:
            …
            $alis_mf_hostname: { get_param: alis_mf_03_hostname }
            ...

Which, as I said, works just fine. Note: alis_mf_01_hostname, alis_mf_02_hostname and alis_mf_03_hostname are provided by my environment file.

Now I would like to use OS::Heat::ResourceGroup:

  alis_af_rg:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: alis_af_count }
      resource_def:
        type: aqsa_alis_af.yaml
        properties:
          …
          alis_mf_hostname: { get_param: alis_mf_hostname }

The question is: with OS::Nova::Server I could distinguish the hostnames because I was creating 3 different instances (alis_mf_01_hostname, alis_mf_02_hostname and alis_mf_03_hostname).

How can I provide a different hostname to each member using ResourceGroup?

Anybody please?

Just for the story: I already did my research here and couldn't find any clue.
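
For what it's worth, OS::Heat::ResourceGroup substitutes the pseudo-parameter %index% inside resource_def, so a per-member hostname can be built from a common prefix (a sketch; alis_mf_hostname_prefix is a hypothetical parameter, not one from the original template):

  alis_af_rg:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: alis_af_count }
      resource_def:
        type: aqsa_alis_af.yaml
        properties:
          # %index% expands to 0, 1, 2, ... for each group member
          alis_mf_hostname:
            list_join: ['-', [{ get_param: alis_mf_hostname_prefix }, '%index%']]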

2018-09-26 01:00:28 -0500 received badge  Teacher (source)
2018-09-26 01:00:28 -0500 received badge  Self-Learner (source)
2018-09-24 04:06:48 -0500 received badge  Famous Question (source)
2018-09-14 03:38:54 -0500 received badge  Famous Question (source)
2018-07-24 01:13:35 -0500 received badge  Notable Question (source)
2018-07-23 11:47:34 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

Can you please mark this ticket as solved and close it? I don't have the option to do so.

Thanks

2018-07-23 11:46:47 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

I fixed my binding issue. Because I have 2 NICs, one for a normal network and one dedicated to SR-IOV, I had to set the following on the compute node:

/etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [linux_bridge]
    physical_interface_mappings = provider:em4,physnet:p3p1

SR-IOV seems to work now
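
After changing physical_interface_mappings, the Linux bridge agent typically needs a restart to pick up the new mapping; on an RDO-style install, that would be something like:

    systemctl restart neutron-linuxbridge-agent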

2018-07-20 10:26:09 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

Hi Andy

I did all of that (as described in my explanation). However, I was using https://docs.openstack.org/neutron/pike/admin/config-sriov.html, as I'm using Pike (not Mitaka).

Thanks

2018-07-20 10:10:25 -0500 received badge  Enthusiast
2018-07-19 23:11:35 -0500 received badge  Popular Question (source)
2018-07-19 18:08:44 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

By doing that, I'm telling OpenStack to use the path "/sys/bus/pci/devices/%s/physfn/net" instead of "/sys/bus/pci/devices/%s/net".

But I'm sure that is NOT the right way to do it (modifying the Python files); there must be a configuration somewhere...

Do you know what that configuration is?
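
Whether any configuration option flips pf_interface directly is unclear, but the supported way to describe SR-IOV devices to Nova in Pike is the PCI whitelist in nova.conf on the compute node, which may avoid patching utils.py (a sketch; the devname and physical_network values assume the physnet:p3p1 mapping shown above):

    # /etc/nova/nova.conf on the compute node (sketch, not verified on this setup)
    [pci]
    passthrough_whitelist = { "devname": "p3p1", "physical_network": "physnet" }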

2018-07-19 18:06:00 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

Now, my "No net device was found for VF" errors have disappeared:

No net device was found for VF 0000:04:00.5: PciDeviceNotFoundById: PCI device 0000:04:00.5 not found
No net device was found for VF 0000:04:00.3: PciDeviceNotFoundById: PCI device 0000:04:00.3 not found
….

2018-07-19 18:06:00 -0500 received badge  Commentator
2018-07-19 18:04:51 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

and

Change
def get_ifname_by_pci_address(pci_addr, pf_interface=False):
to
def get_ifname_by_pci_address(pci_addr, pf_interface=True):

2018-07-19 18:04:10 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

Ok, I know why now: I have to change in /usr/lib/python2.7/site-packages/nova/pci/utils.py:

Change
def get_mac_by_pci_address(pci_addr, pf_interface=False):
to
def get_mac_by_pci_address(pci_addr, pf_interface=True):

2018-07-19 15:26:48 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

I'm adding some LOG.warning traces in the Python file /usr/lib/python2.7/site-packages/nova/pci/utils.py to understand why, as it makes no sense to me. Hopefully I can get to the bottom of this issue.

2018-07-19 10:59:57 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

The folder net is present in 0000:04:00.0

It seems this part is well configured and has all the bits and pieces required.

Any other idea/tip please?

2018-07-19 10:53:22 -0500 commented answer Pike - SR-IOV instance - not enough hosts available

lrwxrwxrwx. 1 root root 0 Jul 18 16:05 0000:04:00.7 -> ../../../devices/pci0000:00/0000:00:02.0/0000:04:00.7
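
For completeness, a quick way to cross-check the PF/VF relationship from sysfs (a sketch; p3p1 is the PF used in the mapping above):

    # VFs exposed by the PF
    ls -l /sys/class/net/p3p1/device/virtfn*
    # netdev (if any) behind a given VF address
    ls /sys/bus/pci/devices/0000:04:00.7/net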