yby-developer's profile - activity

2018-07-03 15:36:39 -0500 received badge  Famous Question (source)
2018-07-03 15:36:39 -0500 received badge  Notable Question (source)
2017-01-30 04:36:14 -0500 received badge  Famous Question (source)
2017-01-30 04:36:14 -0500 received badge  Notable Question (source)
2016-06-16 07:58:19 -0500 received badge  Famous Question (source)
2016-05-23 07:05:06 -0500 received badge  Famous Question (source)
2016-05-13 03:34:04 -0500 received badge  Famous Question (source)
2016-05-13 03:34:04 -0500 received badge  Notable Question (source)
2016-05-09 22:07:55 -0500 commented answer how does nova decide the VM nic type?

Thanks, this makes sense.

2016-05-09 22:05:28 -0500 received badge  Famous Question (source)
2016-05-09 12:25:06 -0500 received badge  Popular Question (source)
2016-05-09 12:25:06 -0500 received badge  Famous Question (source)
2016-05-09 12:25:06 -0500 received badge  Notable Question (source)
2016-05-05 13:27:03 -0500 received badge  Popular Question (source)
2016-05-04 15:23:14 -0500 asked a question failed loading vm with sr-iov

Hi,

I'm following this guideline: https://wiki.openstack.org/wiki/SR-IO...

I put only sriovnicswitch in the ML2 mechanism_drivers, as I only want a single VM with a single SR-IOV NIC, just to see that I'm able to configure it.
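
For reference, these are the pieces the guide asks for and how I'd check them (the file paths are the usual defaults, and the physnet/interface names are placeholders, so treat them as assumptions):

# ML2 mechanism driver on the node running neutron-server
grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini    # expect: sriovnicswitch
# PCI whitelist on the compute node; the physical_network tag must match the Neutron physnet
grep pci_passthrough_whitelist /etc/nova/nova.conf              # e.g. {"devname": "eth3", "physical_network": "physnet2"}
# PciPassthroughFilter enabled on the scheduler
grep scheduler_default_filters /etc/nova/nova.conf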

I'm getting the following error in /var/log/nova/nova-scheduler.log:

2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher [req-7fe06f93-51a6-4ce3-9bb9-7dd09204f996 02f337c4fce9439990680032088a5f54 8fd9dd2e15e7474f9a98e338e8168c14 - - -] Exception during message handling: PCI device request ({'requests': [InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=305b0736-9882-4942-93cb-28eab8fafea9,spec=[{physical_network=None}])], 'code': 500}equests)s failed
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher executor_callback))
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher executor_callback)
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher result = func(ctxt, *new_args)
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher return func(*args, **kwargs)
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in select_destinations
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher filter_properties)
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 67, in select_destinations
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher filter_properties)
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 163, in _schedule
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher chosen_host.obj.consume_from_instance(instance_properties)
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/scheduler/host_manager.py", line 284, in consume_from_instance
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher self.pci_stats.apply_requests(pci_requests, instance_cells)
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/pci/stats.py", line 234, in apply_requests
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher raise exception.PciDeviceRequestFailed(requests=requests)
2016-05-04 20:19:11.854 41452 ERROR oslo_messaging.rpc.dispatcher PciDeviceRequestFailed: PCI device request ({'requests': [InstancePCIRequest(alias_name=None,count ...

(more)
2016-04-20 00:37:12 -0500 received badge  Notable Question (source)
2016-04-19 06:43:57 -0500 received badge  Popular Question (source)
2016-04-19 06:43:57 -0500 received badge  Notable Question (source)
2016-04-19 06:29:38 -0500 received badge  Popular Question (source)
2016-04-12 05:32:52 -0500 received badge  Notable Question (source)
2016-04-12 05:32:52 -0500 received badge  Popular Question (source)
2016-03-22 01:46:14 -0500 received badge  Famous Question (source)
2016-03-14 12:03:38 -0500 received badge  Popular Question (source)
2016-03-10 14:05:25 -0500 received badge  Famous Question (source)
2016-03-08 01:03:35 -0500 received badge  Popular Question (source)
2016-03-07 22:40:05 -0500 asked a question Insufficient compute resources: Requested instance NUMA topology cannot fit the given host NUMA topology

I think this may be a bug, as my host has all the numa/hugepages properties required for this flavor as far as I know:

flavor:

extra_specs:{"hw:cpu_policy": "dedicated", "hw:mem_page_size": "large", "aggregate_instance_extra_specs:pinned": "true", "hw:numa_node.0": "0", "hw:numa_nodes": "1", "hw:cpu_model": "Haswell"}

ram: 512
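
For completeness, this is roughly how such a flavor could be created from the CLI (the flavor name m1.numa is a placeholder, and I'm only listing the standard extra specs here):

nova flavor-create m1.numa auto 512 1 1
nova flavor-key m1.numa set hw:cpu_policy=dedicated hw:mem_page_size=large hw:numa_nodes=1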

host:

virsh capabilities:

<topology>
  <cells num='2'>
    <cell id='0'>
      <memory unit='KiB'>65855052</memory>
      <pages unit='KiB' size='4'>16463763</pages>
      <pages unit='KiB' size='2048'>4096</pages>
      <distances>
        <sibling id='0' value='10'/>
        <sibling id='1' value='21'/>
      </distances>
      <cpus num='24'>

vm hugepages:

$:~/libvirt-1.2.12# cat /proc/sys/vm/nr_hugepages
8192

memory:

$:~/libvirt-1.2.12# grep -i huge /proc/meminfo
AnonHugePages:    528384 kB
HugePages_Total:    8192
HugePages_Free:     7680
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

mount:

$:~/libvirt-1.2.12# mount|grep huge
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
nodev on /mnt/huge type hugetlbfs (rw,relatime,uid=107,gid=113)

nova:

$# nova --version
2.23.3

This is openstack kilo on Debian 8.

Am I missing anything?
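
One more check that may be relevant: the NUMA fitting is done per node, so the per-node hugepage counters matter more than the global total. A quick way to look at node 0 (standard sysfs paths):

cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages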

2016-03-07 20:27:49 -0500 answered a question NUMA node binding are not supported by this QEMU

This was resolved. The Debian release I had shipped a qemu-system-x86 build with NUMA support disabled (CONFIG_NUMA). After installing a newer qemu-system-x86, the error disappeared.
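
A minimal sketch of the steps, assuming a Debian 8 host (the backports repository name and package availability are assumptions):

# check the currently installed build
qemu-system-x86_64 --version
# pull a newer qemu-system-x86, e.g. from backports
apt-get -t jessie-backports install qemu-system-x86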

2016-03-07 11:41:11 -0500 received badge  Famous Question (source)
2016-03-07 02:48:23 -0500 received badge  Self-Learner (source)
2016-03-07 02:48:23 -0500 received badge  Teacher (source)
2016-03-06 18:55:39 -0500 answered a question how does ml2 work specifically?

Thanks, but I still don't understand the answers to the questions that I asked. I have already read those posts, and many others. My question is simply about the flow: which component talks to which, and where. So I will phrase it as specific questions:

Let's say I set up a simple OpenStack with 1 network node and 2 computes. I understand that the Neutron service (API) runs on the network node. Now, say I add a simple tenant network with some subnet. The Neutron service (API) is notified that I would like to create such a network, and Neutron talks to the ML2 plugin.

1. Does ML2 talk directly to the mechanism driver?
2. Does that mean the mechanism driver code is only relevant for the network node?
3. Who calls the Neutron agent on the network node and on the compute node? The Neutron service? The ML2 plugin? The ML2 mechanism driver?
4. Is the whole purpose of the agent to set rules in the vSwitch and set up ports locally?
5. Does the Neutron agent ever initiate anything, or does it only do as it is told?
6. Does it always act locally (i.e. change things only on the server it's running on)?

2016-03-04 12:54:53 -0500 asked a question NUMA node binding are not supported by this QEMU

Running OpenStack Kilo. Launching a VM with a flavor that uses huge pages. Getting:

qemu-system-x86_64: -object memory-backend-ram,size=1024M,id=ram-node0,host-nodes=0,policy=bind: NUMA node binding are not supported by this QEMU

qemu is 2.3.0. Does qemu on Debian not support numa binding?

2016-03-03 09:29:52 -0500 asked a question how does nova decide the VM nic type?

I have a running compute with ovs-dpdk. There is no OVS neutron agent, and the OVS is managed by ODL on a controller node. The bridges are br-int and br-ex, and the datapath type is netdev.

My question is, how does nova decide on nic type (i.e. virtio or vhostuser) when a VM is created (the call to qemu-system-x86_64)?

Specifically, can I instruct nova to choose vhostuser, or is it done automatically when the ovs bridge datapath type is netdev?
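
(For reference, I assume the chosen type ends up on the Neutron port binding, which can be inspected with something like the following; the port UUID is a placeholder.)

neutron port-show <port-uuid> | grep binding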

2016-03-03 06:26:02 -0500 received badge  Notable Question (source)
2016-03-02 13:10:54 -0500 answered a question ovs (with --dpdk) doesn't forward packets to vxlan tunnels in an openstack environment

Problem solved: I had to change the datapath_type from netdev to system, since eth1 wasn't actually a DPDK interface:

ovs-vsctl set Bridge br-int datapath_type=system
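
A quick way to verify the change took effect (standard ovs-vsctl, nothing special assumed):

ovs-vsctl get Bridge br-int datapath_type
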
2016-03-02 12:41:47 -0500 asked a question ovs (with --dpdk) doesn't forward packets to vxlan tunnels in an openstack environment

I have a compute host with Debian 8, OVS 2.4.90, compiled for dpdk 2.0.0 and running with dpdk support (--dpdk), in an openstack environment with ODL on the controller as the OVS manager (so no neutron agent). A VM is created on the same compute I mentioned. The VM tries to obtain an IP from dhcp. It sends BOOTP packets.

Those packets are matched against:

cookie=0x0, duration=575.331s, table=110, n_packets=84, n_bytes=16056, priority=16383,reg0=0x1,tun_id=0xc9,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:8,output:4,output:1,output:5,output:6,output:7

This is what the bridge looks like:

OFPT_FEATURES_REPLY (OF1.3) (xid=0x2): dpid:000072b0e2b1ed4c
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
OFPST_PORT_DESC reply (OF1.3) (xid=0x3):
 1(vxlan-10.53.6.1): addr:7e:f9:5f:c9:86:6a
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(patch-ext): addr:96:ee:48:d0:58:23
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(vxlan-10.53.6.1): addr:e2:f9:b2:7b:47:9f
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 5(vxlan-10.53.6.1): addr:8e:26:fc:b8:b9:cb
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 6(vxlan-10.53.6.1): addr:6a:69:b6:9a:47:56
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 7(vxlan-10.53.6.1): addr:de:ed:fc:3f:7c:c3
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 8(tap79cdb725-73): addr:fe:16:3e:b0:bc:10
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:72:b0:e2:b1:ed:4c
     config:     PORT_DOWN
     state:      LINK_DOWN
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (OF1.3) (xid=0x5): frags=normal miss_send_len=0

The local_ip configured on the bridge:

root@BASE-CCP-CPN-N0003-NETCLM:~# ovs-vsctl get Open_vSwitch . other_config:local_ip
"10.53.6.16"

The interface:

root@BASE-CCP-CPN-N0003-NETCLM:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr b0:5a:da:02:10:20
          inet addr:10.53.6.16  Bcast:10.53.6.255  Mask:255.255.255.0
          inet6 addr: fe80::b25a:daff:fe02:1020/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5037083366 errors:0 dropped:100 overruns:0 frame:0
          TX packets:5068594045 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:835697551734 (778.3 GiB)  TX bytes:886055918384 (825.2 GiB)

So I expect the packet to be flooded to all vxlan tunnels, ports 1 and 4-7 (and it seems it's also supposed to be output back to port 8, for some reason). However, when I run tcpdump -i eth1, I don't see the BOOTP packets vxlan-encapsulated (or even unencapsulated, for that matter).
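
For what it's worth, one way to trace how the bridge would handle such a broadcast and to watch whether the flow above is actually hit (the source MAC is the one shown for port 8 above; the DHCP fields are only an example):

ovs-appctl ofproto/trace br-int in_port=8,udp,dl_src=fe:16:3e:b0:bc:10,dl_dst=ff:ff:ff:ff:ff:ff,tp_src=68,tp_dst=67
# re-check the n_packets counter on the flood flow
ovs-ofctl -O OpenFlow13 dump-flows br-int table=110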

A similar second compute, running OVS 2.3 ... (more)

2016-02-25 23:02:03 -0500 received badge  Popular Question (source)
2016-02-24 15:47:47 -0500 asked a question how does ml2 work specifically?

I'm trying to understand how ML2 works. I'm kinda new to OpenStack. I was reading/watching these:

https://www.youtube.com/watch?v=whmcQ... https://www.youtube.com/watch?v=37CTb... https://www.youtube.com/watch?v=0APNb... https://www.google.com/url?sa=t&rct=j... https://wiki.openstack.org/wiki/Neutr...

Let's say we have 1 Network node, and 2 compute nodes. compute-1 node is running OVS and compute-2 linuxbridge.

From what I understand:

User instantiates a VM.
Nova finds the right compute host, let's say the one with the OVS (compute-1), and instructs Neutron-server accordingly.
Now, from what I understand, Neutron server has an ML2 plugin.
This plugin has to instruct the Neutron agent running on the compute host.
The Neutron agent is the one that generates the openflow/ovsdb rules on the compute host.
So the ML2 plugin on the Neutron-server is using a Mechanism driver. If it communicates with compute-1, it uses the openvswitch MechDriver.

Does this mean that the mechanism driver is only relevant on a network node? The reason I ask is that I think I saw it used on the compute host as well, in the conf file. Or have I misunderstood, and the mechanism driver is relevant to the Neutron agent rather than to the server?
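
(To make that last point concrete, this is the kind of check I mean; the paths are the usual defaults, so treat them as assumptions.)

# on the node running neutron-server
grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
# on a compute host, to see where (if anywhere) it shows up
grep -rn mechanism_drivers /etc/neutron/ 2>/dev/null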

2016-02-24 14:19:35 -0500 answered a question error while initializing tempest: [Errno 2] No such file or directory: '/usr/etc/tempest'

I guess the new tempest doesn't ship with a config file anymore; it needs to be created manually.
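
A minimal sketch of one way to generate it, assuming a git checkout of tempest and that tox is installed (the checkout path is a placeholder):

cd /opt/stack/tempest
tox -e genconfig    # writes etc/tempest.conf.sample, which can be copied to etc/tempest.conf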

2016-02-19 18:36:18 -0500 received badge  Enthusiast
2016-02-18 19:09:12 -0500 received badge  Notable Question (source)
2016-02-18 15:51:47 -0500 marked best answer Why do I get "NovaException: Unexpected vif_type=binding_failed" when setting up devstack Kilo with networking-ovs-dpdk and vxlan?

I managed to load it on the latest devstack (Mitaka). But when I try the same thing on Kilo, whenever I instantiate a VM I keep hitting the error NovaException: Unexpected vif_type=binding_failed. I've tried all the answers online and wasn't able to make this work.

This is the error on the controller:

2016-02-10 04:54:48.898 ERROR neutron.plugins.ml2.managers [req-78165d6e-1a3d-4083-a7c4-e55611b9597e neutron service] Failed to bind port 6533524a-2dac-42cc-ab7b-fd6932bf79e4 on host devstack1-2
2016-02-10 04:54:48.899 ERROR neutron.plugins.ml2.managers [req-78165d6e-1a3d-4083-a7c4-e55611b9597e neutron service] Failed to bind port 6533524a-2dac-42cc-ab7b-fd6932bf79e4 on host devstack1-2

This is the error on the compute:

2016-02-10 04:54:52.486 ERROR nova.compute.manager [req-0df362c2-e3a1-4636-83df-ee31baef840d admin demo] [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd] Instance failed to spawn
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd] Traceback (most recent call last):
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]   File "/opt/stack/nova/nova/compute/manager.py", line 2473, in _build_resources
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]     yield resources
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]   File "/opt/stack/nova/nova/compute/manager.py", line 2345, in _build_and_run_instance
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]     block_device_info=block_device_info)
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2404, in spawn
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]     write_to_disk=True)
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4205, in _get_guest_xml
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]     context)
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4076, in _get_guest_config
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]     flavor, virt_type)
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]   File "/opt/stack/nova/nova/virt/libvirt/vif.py", line 374, in get_config
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd]     _("Unexpected vif_type=%s") % vif_type)
2016-02-10 04:54:52.486 27939 TRACE nova.compute.manager [instance: 503365c6-7c07-4a83-bad0-f45c3f3e5dbd] NovaException: Unexpected vif_type=binding_failed

No other error in any other log.
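
Since binding_failed usually means the port was never bound by an ML2 mechanism driver for that host, one check that seems relevant is whether the agent on the compute registers as alive on the controller (standard neutron CLI, nothing special assumed):

neutron agent-list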

My setup is:

1 controller+network node, CentOS 7, devstack Kilo. No networking-ovs-dpdk.
2 compute nodes, Ubuntu 14.04, devstack Kilo, networking-ovs-dpdk Kilo.
The private network is vxlan.

Here is the relevant local.conf from the compute host:

disable_all_services

# Disable Nova-Network
disable_service n-net

enable_service n-cpu
enable_service rabbit
enable_service q-agt
enable_service n-novnc

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=ovsdpdk
Q_ML2_PLUGIN_TYPE_DRIVERS=flat,vlan,vxlan
Q_USE_SECGROUP=True
Q_L3_ENABLED=False
Q_USE_PROVIDER_NETWORKING=False
Q_ML2_TENANT_NETWORK_TYPE=vxlan
ENABLE_TENANT_VLANS=False
ENABLE_TENANT_TUNNELS=True
OVS_ENABLE_SG_FIREWALL_MULTICAST=False
OVS_PHYSICAL_BRIDGE=br-p1p1

enable_plugin networking-ovs-dpdk http://git.openstack.org/openstack/networking-ovs-dpdk stable/kilo

OVS_DPDK_INSTALL=True
SKIP_OVS_INSTALL=True
OVS_DATAPATH_TYPE=netdev
OVS_DPDK_RTE_LIBRTE_VHOST=y
OVS_DPDK_PORT_MAPPINGS=p1p1:br-p1p1
DPDK_PHYSICAL_NETWORK=dpdk-physical-net
OVS_BRIDGE_MAPPINGS+=,dpdk-physical-net:br-p1p1
OVS_NUM_HUGEPAGES=4096
OVS_SOCKET_MEM=1024 ...
(more)