
shahram's profile - activity

2019-02-13 20:06:50 -0500 received badge  Notable Question (source)
2019-02-13 20:06:50 -0500 received badge  Famous Question (source)
2018-03-30 07:34:08 -0500 received badge  Student (source)
2017-03-07 05:09:56 -0500 received badge  Taxonomist
2017-01-04 07:19:19 -0500 received badge  Famous Question (source)
2016-11-21 12:26:15 -0500 received badge  Notable Question (source)
2016-11-21 12:26:15 -0500 received badge  Famous Question (source)
2016-11-21 12:25:34 -0500 received badge  Popular Question (source)
2016-11-18 16:36:06 -0500 asked a question DHCP replies are dropped at the subinterface

My instance cannot get an IP on a VLAN provider network, and I am not quite sure why. My iptables rules look OK for the VM tap device; I see the reply from the DHCP server at the main interface, but I do not see it passed on to the sub-interface. Any additional troubleshooting guidance would be greatly appreciated.

Thank you in advance.

here is the brctl output:

bridge name bridge id       STP enabled interfaces
brq855afcdd-fc      8000.d4ae52e8ae0c   no      em3.40
                            tap838dab94-47

here is the iptables list:

neutron-linuxbri-sg-chain  all  --  anywhere             anywhere             PHYSDEV match --physdev-out tap838dab94-47 --physdev-is-bridged /* Direct traffic from the VM interface to the security group chain. */
neutron-linuxbri-sg-chain  all  --  anywhere             anywhere             PHYSDEV match --physdev-in tap838dab94-47 --physdev-is-bridged /* Direct traffic from the VM interface to the security group chain. */
neutron-linuxbri-o838dab94-4  all  --  anywhere             anywhere             PHYSDEV match --physdev-in tap838dab94-47 --physdev-is-bridged /* Direct incoming traffic from VM to the security group chain. */
Chain neutron-linuxbri-i838dab94-4 (1 references)
Chain neutron-linuxbri-o838dab94-4 (2 references)
neutron-linuxbri-s838dab94-4  all  --  anywhere             anywhere            
Chain neutron-linuxbri-s838dab94-4 (1 references)
neutron-linuxbri-i838dab94-4  all  --  anywhere             anywhere             PHYSDEV match --physdev-out tap838dab94-47 --physdev-is-bridged /* Jump to the VM specific chain. */
neutron-linuxbri-o838dab94-4  all  --  anywhere             anywhere             PHYSDEV match --physdev-in tap838dab94-47 --physdev-is-bridged /* Jump to the VM specific chain. */

here is the traffic at main interface em3:

17:31:45.906424 fa:16:3e:0c:5b:16 > Broadcast, ethertype 802.1Q (0x8100), length 346: vlan 40, p 0, ethertype IPv4, 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:0c:5b:16, length 300
17:31:45.906768 fa:16:3e:ea:6d:b0 > fa:16:3e:0c:5b:16, ethertype IPv4 (0x0800), length 371: 10.4.0.3.bootps > 10.4.0.4.bootpc: BOOTP/DHCP, Reply, length 329
17:31:45.906830 fa:16:3e:12:e0:3b > fa:16:3e:0c:5b:16, ethertype IPv4 (0x0800), length 371: 10.4.0.1.bootps > 10.4.0.4.bootpc: BOOTP/DHCP, Reply, length 329
17:31:45.906857 fa:16:3e:71:2e:fd > fa:16:3e:0c:5b:16, ethertype IPv4 (0x0800), length 371: 10.4.0.2.bootps > 10.4.0.4.bootpc: BOOTP/DHCP, Reply, length 329

here is the traffic at the subinterface em3.40:

17:32:02.556582 fa:16:3e:0c:5b:16 > Broadcast, ethertype IPv4 (0x0800), length 342: 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:0c:5b:16, length 300
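In case it helps anyone else hitting the same symptom, the DHCP path can be checked hop by hop. A minimal diagnostic sketch (interface names em3, em3.40, and the tap device are from my setup; substitute your own, and the offload toggles at the end are a guess, since feature names vary by NIC driver):

```shell
# Watch DHCP on each hop of the path; the reply should appear at every step.
# Run each capture in its own terminal while the instance retries DHCP.
tcpdump -ne -i em3 'port 67 or port 68'              # physical NIC (frames tagged vlan 40)
tcpdump -ne -i em3.40 'port 67 or port 68'           # VLAN subinterface (untagged)
tcpdump -ne -i tap838dab94-47 'port 67 or port 68'   # VM tap device

# If the reply reaches em3 but never em3.40, check VLAN offload settings,
# which can mangle tagged frames on some NICs/drivers:
ethtool -k em3 | grep -i vlan
ethtool -K em3 rxvlan off txvlan off   # illustrative; exact feature names vary
```

The idea is simply to find the first hop where the reply disappears and focus there instead of re-checking the whole chain.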
2016-10-17 17:46:56 -0500 answered a question ERROR nova.compute.manager BadRequest: Expecting to find domain in user - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400)

user_domain_name = default was missing from the [neutron] section of nova.conf. After adding the missing entry I am able to launch an instance.
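For reference, the section I ended up with looked roughly like this (only the user_domain_name line was the actual fix; the other keys are illustrative values from my environment, not a complete [neutron] section):

```ini
# nova.conf on the compute node -- user_domain_name was the missing entry
[neutron]
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = xxxxxx
```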

2016-10-17 12:23:58 -0500 received badge  Popular Question (source)
2016-10-14 17:17:01 -0500 asked a question ERROR nova.compute.manager BadRequest: Expecting to find domain in user - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400)
The instance gets created but fails to start: openstack server create returns as normal, but openstack-nova-compute complains about "Expecting to find domain in user". This is a fresh Mitaka install based on the documentation provided for CentOS 7. Every verification throughout the installation passed with no errors. However, openstack-nova-compute throws an error as below; this happens both through Horizon and the command line. The final error is: ERROR nova.compute.manager BadRequest: Expecting to find domain in user - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-1bce86ba-a4c6-4d8a-889f-1585b0054757)

    the environment variables for command line are

    export OS_PROJECT_DOMAIN_NAME=default
    export OS_USER_DOMAIN_NAME=default
    export OS_PROJECT_NAME=admin
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=xxxxxx
    export OS_AUTH_URL=http://controller:35357/v3
    export OS_IMAGE_API_VERSION=2
    export OS_IDENTITY_API_VERSION=3

I did find some reference to a bug with Version 3 authentication that required OS_PROJECT_DOMAIN_ID and OS_USER_DOMAIN_ID but including those had no impact on my request. Any help is appreciated.

2016-10-14 18:01:06.899 1794 INFO nova.compute.claims [req-bea8438f-4de9-47b6-8578-11b8d839b66a 1d8d9016877247b1830ccd600bd2483b c49daf99a8de4dba945214fa5b11e7ff - - -] [instance: af51d2ce-a9ec-4e86-ac02-4f4642e54120] Claim successful
2016-10-14 18:01:07.773 1794 WARNING nova.virt.osinfo [req-bea8438f-4de9-47b6-8578-11b8d839b66a 1d8d9016877247b1830ccd600bd2483b c49daf99a8de4dba945214fa5b11e7ff - - -] Cannot find OS information - Reason: (No configuration information found for operating system Empty)
2016-10-14 18:01:07.910 1794 WARNING nova.virt.osinfo [req-bea8438f-4de9-47b6-8578-11b8d839b66a 1d8d9016877247b1830ccd600bd2483b c49daf99a8de4dba945214fa5b11e7ff - - -] Cannot find OS information - Reason: (No configuration information found for operating system Empty)
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager [req-bea8438f-4de9-47b6-8578-11b8d839b66a 1d8d9016877247b1830ccd600bd2483b c49daf99a8de4dba945214fa5b11e7ff - - -] Instance failed network setup after 1 attempt(s)
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager Traceback (most recent call last):
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1570, in _allocate_network_async
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager     bind_host_id=bind_host_id)
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 668, in allocate_for_instance
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager     self._delete_ports(neutron, instance, created_port_ids)
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager     self.force_reraise()
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager     six.reraise(self.type_, self.value, self.tb)
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 657, in allocate_for_instance
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager     security_group_ids, available_macs, dhcp_opts)
2016-10-14 18:01:08.106 1794 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 254, in _create_port
2016-10-14 18:01:08.106 1794 ERROR nova ...
(more)
2016-10-06 11:54:49 -0500 received badge  Supporter (source)
2016-09-29 17:32:48 -0500 received badge  Enthusiast
2016-08-31 11:37:51 -0500 received badge  Notable Question (source)
2016-08-31 09:33:14 -0500 received badge  Popular Question (source)
2016-08-30 15:11:13 -0500 asked a question Kilo/CentOS 7: Snapshot creation fails with a glanceclient exception on the nova compute node

I am unable to create snapshots of instances on certain nova compute nodes. I receive an error in the nova compute log on the hypervisor complaining about an attribute error; it appears that the glanceclient is having trouble handling the message. Looking online, I noticed a reported bug related to the urllib3 version and the glanceclient; please see https://bugs.launchpad.net/glance0/+bug/1476770.

I downgraded urllib3 to version 1.10.2, and my python-glanceclient is 0.17.0, but the exception remains. I have another hypervisor, built a few months earlier with the same versions, that creates snapshots with no exception. Any help would be greatly appreciated. I have attached the error below.

Thank you in advance,

2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6933, in snapshot_instance
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     return self.manager.snapshot_instance(ctxt, image_id, instance)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     payload)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 333, in decorated_function
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     LOG.warning(msg, e, instance_uuid=instance_uuid)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher ...
(more)
2015-12-08 20:29:35 -0500 received badge  Famous Question (source)
2015-11-20 12:57:10 -0500 answered a question binding:VIF_Type : binding_failed

My problem was a bad tunnel network configuration. ml2_gre_endpoints had an old compute node, which now acts as a backup controller, listed with the wrong interface: the management IP instead of the tunnel IP. I deleted the record from the ml2_gre_endpoints table in the neutron database, checked with ovs-vsctl to make sure that br-tun looked in order for my setup, and launched an instance successfully. Thank you all for your help.
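For anyone needing to do the same cleanup, this is roughly what it looks like (table and database names as on my Kilo install; the endpoint IP below is an example, use whichever address is stale in your table):

```shell
# Inspect the GRE endpoints Neutron knows about
mysql neutron -e "SELECT * FROM ml2_gre_endpoints;"

# Remove the stale endpoint (example IP -- substitute the wrong one)
mysql neutron -e "DELETE FROM ml2_gre_endpoints WHERE ip_address = '192.0.2.10';"

# Then verify the tunnel bridge looks sane on each compute node
ovs-vsctl show
```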

2015-11-20 12:47:07 -0500 commented question binding:VIF_Type : binding_failed

Thanks, smca. I managed to launch an instance today. The problem was a bad tunnel network configuration! Your last posting was very helpful.

2015-11-19 14:06:46 -0500 received badge  Notable Question (source)
2015-11-19 11:42:19 -0500 commented answer binding:VIF_Type : binding_failed

Thank you. My router has two ports, one public and the other private. I have verified that both ports are ACTIVE.

2015-11-19 11:10:29 -0500 received badge  Editor (source)
2015-11-19 11:07:40 -0500 commented question binding:VIF_Type : binding_failed

Thanks for the feedback. I formatted the logs as code in the post. The schedulers don't have any errors; the conductor has an error just saying the same thing as the nova-compute log.

2015-11-19 08:16:03 -0500 received badge  Popular Question (source)
2015-11-18 20:29:08 -0500 asked a question binding:VIF_Type : binding_failed

I have a multi-node OpenStack installation. It is a highly available installation: multiple controller nodes and multiple Neutron network nodes using VRRP (L3 HA) with Open vSwitch and the ML2 plug-in. All my Neutron agents and Nova endpoints seem to be in order and up. Launching an instance keeps failing on my compute node (I have shut all the others down except one), and I can't find what causes the error, the port binding failure. I have nova-compute in debug mode and have captured the logs; I am including only the section around the instance launch time. I am hoping someone here can point me in the right direction. I have been struggling with this for over a week and have read and checked all related postings, but none of their possible solutions were applicable to me.

My schedulers show no errors. My conductor has an error just saying that the instance cannot be created because of an unexpected vif_type and is to be rescheduled. My Neutron Open vSwitch agents on the network nodes have no errors at the time of the launch; they just say the security group member got updated. There are some INFO messages about ports not being present on br-int, but they appear an hour before I even launched the instance. My Neutron Open vSwitch agent on the compute node has similar info as the network nodes.

I have tried both KVM and QEMU as the hypervisor with the same result.

My nova-compute log in debug mode is attached below; I tried to highlight the errors. Everything seems to be in order until the assigned DHCP IP tries to bind to the VIF, and I end up with a port status of DOWN. It appears to me that the port is marked as down because of the VIF binding failure, but I am new to OpenStack and don't understand the order of events very well.

Thanks in Advance

Highlighted Error

2015-11-18 15:16:05.391 4330 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://controller:9696/v2.0/ports.json?tenant_id=49a02b7bf2bb41d0996e5e2675b03d8b&device_id=9cecb6f2-cbfd-4c2c-9bbf-9fcf987cb197 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}e2a397d58952037a7b95efe0dd7ea435d6cd6326" _http_log_request /usr/lib/python2.7/site-packages/keystoneclient/session.py:195
2015-11-18 15:16:05.439 4330 DEBUG keystoneclient.session [-] RESP: [200] date: Wed, 18 Nov 2015 20:16:05 GMT connection: keep-alive content-type: application/json; charset=UTF-8 content-length: 706 x-openstack-request-id: req-7324c594-b3bc-42f6-9dea-d88fe8957241

RESP BODY: {"ports": [{"status": "DOWN", "binding:host_id": "compute2.nueos", "allowed_address_pairs": [], "extra_dhcp_opts": [], "device_owner": "compute:nova", "binding:profile": {}, "fixed_ips": [{"subnet_id": "4e98d6cb-5154-4ebc-90d6-0a059e5e7306", "ip_address": "192.168.1.18"}], "id": "1f864632-fc6f-41e9-a951-971bd4c5820e", "security_groups": ["2c396c40-0a07-4112-9ffe-38699fd1a3fb"], "device_id": "9cecb6f2-cbfd-4c2c-9bbf-9fcf987cb197", "name": "", "admin_state_up": true, "network_id": "958d7c44-66ba-4ab0-8061-ede751a6b8f9", "tenant_id": "49a02b7bf2bb41d0996e5e2675b03d8b", "binding:vif_details": {}, "binding:vnic_type": "normal", "binding:vif_type": "binding_failed", "mac_address": "fa:16:3e:59:71:e1"}]}
_http_log_response /usr/lib/python2.7/site-packages/keystoneclient/session.py:224

2015-11-18 15:16:07.375 4330 DEBUG nova.virt.libvirt.driver [req-35c9a4d6-3649-4eed-8f0d-7397c9357379 - - - - -] [instance: 9cecb6f2-cbfd-4c2c-9bbf-9fcf987cb197] Start _get_guest_xml network_info=[VIF({'profile': {}, 'ovs_interfaceid': None, 'preserve_on_delete': False, 'network ...
(more)
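If it helps others debugging binding_failed, these are the kinds of checks that eventually led me to the misconfiguration (a sketch; the port ID is the one from my log above, substitute your own):

```shell
# Confirm every L2 agent is alive and which host each runs on
neutron agent-list

# Inspect the failed port -- binding:vif_type should not be binding_failed
neutron port-show 1f864632-fc6f-41e9-a951-971bd4c5820e

# On the compute node, make sure the OVS bridges and tunnel ports exist
ovs-vsctl show
```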