
hudac's profile - activity

2020-09-03 03:40:50 -0500 received badge  Popular Question (source)
2020-09-03 03:40:50 -0500 received badge  Famous Question (source)
2020-09-03 03:40:50 -0500 received badge  Notable Question (source)
2020-02-08 12:12:46 -0500 received badge  Famous Question (source)
2019-11-27 11:54:49 -0500 received badge  Famous Question (source)
2019-11-27 11:54:49 -0500 received badge  Notable Question (source)
2019-11-19 17:22:00 -0500 received badge  Famous Question (source)
2019-11-05 07:30:57 -0500 asked a question  How does zun work with glance and nova

I've installed zun into my openstack environment.
General question: how does zun work with glance and nova?

More specific questions:

  1. openstack appcontainer image list includes all docker container images. Can / should they also appear in openstack image list ?
  2. When starting an instance from an image in openstack image list whose disk format is raw and container format is docker (--container-format docker --disk-format raw), does nova know it should use zun to start the container, rather than starting it itself?
    2.2. Is there a way to tell nova to run these kinds of images with zun ?
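
For context, the glance side of those flags can be sketched as follows (a hedged sketch; the image name cirros and the tarball name are placeholders, and this shows only the upload, not how nova or zun pick the image up):

```shell
# Export a local docker image to a tarball, then register it in glance
# marked with container_format=docker (placeholder names throughout).
docker save -o cirros-docker.tar cirros
openstack image create cirros-docker \
  --container-format docker \
  --disk-format raw \
  --file cirros-docker.tar

# Such an image should then be visible in the regular glance listing
openstack image list
```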

Thank you.

2019-11-05 07:20:04 -0500 answered a question  kuryr-libnetwork installation fails

I've changed auth_url to

auth_url = http://controller:5000/v3

Now it somehow works, though I'm not sure that's the appropriate solution.
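
For context, the relevant lines in /etc/kuryr/kuryr.conf ended up looking like this (a sketch of just the changed section; port 35357 was the old keystone admin endpoint, which current deployments no longer expose, while 5000/v3 serves both roles):

```ini
[neutron]
www_authenticate_uri = http://controller:5000
# was: auth_url = http://controller:35357
auth_url = http://controller:5000/v3
```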

2019-11-05 06:23:01 -0500 received badge  Popular Question (source)
2019-10-31 15:06:29 -0500 asked a question  kuryr-libnetwork installation fails

I'm trying to install https://docs.openstack.org/zun/rocky/install/compute-install-ubuntu.html (zun) in order to be able to run docker containers in openstack.
A prerequisite of that is to install https://docs.openstack.org/kuryr-libnetwork/rocky/install/controller-install.html (kuryr-libnetwork).
I followed the fairly simple installation instructions on the controller and compute nodes, but in the end kuryr isn't installed properly. Issuing systemctl status kuryr-libnetwork.service shows the error:

# systemctl status kuryr-libnetwork.service --no-pager -l
● kuryr-libnetwork.service - Kuryr-libnetwork - Docker network plugin for Neutron
   Loaded: loaded (/etc/systemd/system/kuryr-libnetwork.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2019-10-31 14:41:45 UTC; 5h 20min ago
  Process: 17213 ExecStart=/usr/local/bin/kuryr-server --config-file /etc/kuryr/kuryr.conf (code=exited, status=1/FAILURE)
 Main PID: 17213 (code=exited, status=1/FAILURE)

Oct 31 14:41:45 openstack kuryr-server[17213]: 2019-10-31 14:41:45.427 17213 ERROR kuryr   File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 134, in get_access
Oct 31 14:41:45 openstack kuryr-server[17213]: 2019-10-31 14:41:45.427 17213 ERROR kuryr     self.auth_ref = self.get_auth_ref(session)
Oct 31 14:41:45 openstack kuryr-server[17213]: 2019-10-31 14:41:45.427 17213 ERROR kuryr   File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 206, in get_auth_ref
Oct 31 14:41:45 openstack kuryr-server[17213]: 2019-10-31 14:41:45.427 17213 ERROR kuryr     self._plugin = self._do_create_plugin(session)
Oct 31 14:41:45 openstack kuryr-server[17213]: 2019-10-31 14:41:45.427 17213 ERROR kuryr   File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 161, in _do_create_plugin
Oct 31 14:41:45 openstack kuryr-server[17213]: 2019-10-31 14:41:45.427 17213 ERROR kuryr     'auth_url is correct. %s' % e)
Oct 31 14:41:45 openstack kuryr-server[17213]: 2019-10-31 14:41:45.427 17213 ERROR kuryr DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Unable to establish connection to http://controller:35357: HTTPConnectionPool(host='controller', port=35357): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f65331b4a50>: Failed to establish a new connection: [Errno 111] Connection refused',))
Oct 31 14:41:45 openstack kuryr-server[17213]: 2019-10-31 14:41:45.427 17213 ERROR kuryr 
Oct 31 14:41:45 openstack systemd[1]: kuryr-libnetwork.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 14:41:45 openstack systemd[1]: kuryr-libnetwork.service: Failed with result 'exit-code'.

The most suspicious line is DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate, because the kuryr installation guide for the controller node has nothing like an openstack endpoint create step.

/etc/kuryr/kuryr.conf has this part as well:

[neutron]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:35357
...
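
For reference, a quick way to check which identity endpoints the catalog actually advertises, and whether anything still listens on the legacy admin port (a sketch; assumes admin credentials are already loaded in the shell):

```shell
# What identity endpoints does keystone advertise?
openstack endpoint list --service identity

# Is anything listening on the legacy 35357 admin port?
curl -sf http://controller:35357/ >/dev/null || echo "nothing answers on 35357"
curl -sf http://controller:5000/v3/ >/dev/null && echo "5000/v3 answers"
```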

Please help

2019-05-07 02:08:19 -0500 received badge  Notable Question (source)
2019-05-07 02:08:19 -0500 received badge  Popular Question (source)
2019-04-29 14:47:25 -0500 received badge  Popular Question (source)
2019-04-29 14:47:25 -0500 received badge  Notable Question (source)
2019-03-05 05:47:12 -0500 answered a question  Openstack vhostuser support

I was able to add a vhostuser client port directly to a VM.

  1. There was a problem with apparmor: I had to give it rw access to the location where I wanted to create the vhost socket.
    For example, this is how I did it (I'm sure there's a better way):
    Add these two lines to /etc/apparmor.d/libvirt/libvirt-f80091e1-f0d1-412b-a58a-e1c25be3d4f8.files
    "/run/**" rwk,
    "/var/run/**" rwk,
    Then,
    apparmor_parser -r /etc/apparmor.d/libvirt/libvirt-f80091e1-f0d1-412b-a58a-e1c25be3d4f8
  2. Create the xml

        <interface type='vhostuser'>
          <mac address='00:00:00:00:00:01'/>
          <source type='unix' path='/var/run/dpdkvhostuserclient0' mode='server'/>
          <model type='virtio'/>
          <driver queues='2'/>
        </interface>
    
  3. Attach it to the vm

    virsh attach-device --domain instance-00000084 --live --config --file vhostuserclient0.xml

BTW, /var/run/dpdkvhostuserclient0 comes from creating the port (and, before that, the bridge):

ovs-vsctl add-br br0_dpdk -- set bridge br0_dpdk datapath_type=netdev
ovs-vsctl add-port br0_dpdk vhostuserclient0 -- set Interface vhostuserclient0 type=dpdkvhostuserclient options:vhost-server-path=/var/run/dpdkvhostuserclient0 ofport_request=1

BUT STILL!
1. I don't understand how I get the ovs-dpdk implementation for "free". Normal traffic between the vms stays low.
2. This uses ovs-vsctl - how do I do it the formal openstack way?

2019-03-04 06:35:15 -0500 asked a question  Openstack vhostuser support

I want to add ovs-dpdk vhostuser support to my openstack installation.
I have one controller node and one compute node.

According to https://docs.openstack.org/neutron/rocky/admin/config-ovs-dpdk.html (this site), after installing and configuring ovs-dpdk, the way to add a vhostuser vnic is:

Create and add vhost-user network interfaces to instances in the same fashion as conventional interfaces.

What way is this? I don't understand this line.
How can I create a vhostuser network / vnic that is openstack-compatible?

Eventually I have a port vhostuserclient1:

# ovs-vsctl show
fa3bcc40-8636-4f9a-b3c5-410ddbeae3e1
    Bridge "br0_dpdk"
        Port "vhostuserclient1"
            Interface "vhostuserclient1"
                type: dpdkvhostuserclient
                options: {n_rxq="2", vhost-server-path="/var/run/dpdkvhostuserclient1"}
                ...

How can I attach it to a specific instance with openstack-compatible commands, or at least virsh commands?

Thanks.

2019-02-26 00:27:07 -0500 received badge  Fan
2019-02-24 12:13:23 -0500 asked a question  ovs performance between two vms on same host

I have 2 vms on the same host (compute).
Both vms have these vNICs attached:

  • pci-passthrough 10Gbps (vNIC1)
  • virtio (ovs) (vNIC2)

The traffic flows as follows:

Traffic generator -> pNIC -> [vm1 vNIC1 (passthrough) -> fwd -> vNIC2] -> OVS -> [vm2 vNIC2]

Because vNIC1 is a passthrough device, I can reach 10Gbps when redirecting traffic back out through vNIC1. But redirecting through OVS drops the throughput to 150Mbps.
I'm aware this is not ovs-dpdk, but as I've seen http://www.openvswitch.org/support/ovscon2014/18/1600-ovs_perf.pptx (here), it should reach up to 1.1Gbps for 64B packets.

This is my host (compute):

  • Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
  • 24 CPUs (HT)
  • VT-x enabled
  • 32GB ram
  • Isolated cpus

Both the vms have

  • 8 GB ram
  • 3 pinned CPUs (actually only 1 does the traffic redirection)

All sibling cpus are idle; in other words, I treat it as a non-HT machine.

I'm using openstack rocky.

# ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.10.0

Is there anything I can do to boost native OVS performance? Doesn't 150Mbps look a bit slow?
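
A couple of standard OVS tools can help locate a kernel-datapath bottleneck, e.g. checking whether packets hit the megaflow cache or fall back to per-packet upcalls (a sketch; run on the compute node):

```shell
# Datapath-level statistics: look at the hit/missed counters
ovs-appctl dpctl/show

# Dump the installed datapath flows; low "packets:" counts alongside
# high traffic suggest constant upcalls to userspace
ovs-appctl dpctl/dump-flows
```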

2019-01-29 01:50:34 -0500 received badge  Popular Question (source)
2019-01-28 07:09:58 -0500 received badge  Organizer (source)
2019-01-28 07:09:40 -0500 received badge  Editor (source)
2019-01-28 06:56:00 -0500 asked a question  pci passthrough grub intel_iommu=on causes no ip on guests

Hi, my setup is: 1 controller node, 1 compute node. I've installed openstack rocky step-by-step from the openstack guide.

After setting the grub option in order to enable pci-passthrough of a nic:

GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

Guests stopped receiving an ip from the dhcp server.

I see this error on the compute node, in nova-compute.log:

Received unexpected event network-vif-plugged-681e4bee-6fe1-47f0-83a8-be928b21ee77 for instance with vm_state active and task_state None

This is what I see in the guest:

[  OK  ] Started ifup for ens3.
         Starting Raise network interfaces...
[FAILED] Failed to start Raise network interfaces.
See 'systemctl status networking.service' for details.
[  OK  ] Reached target Network.
         Starting Initial cloud-init job (metadata service crawler)...
[   40.160381] cloud-init[827]: Cloud-init v. 18.2 running 'init' at Mon, 28 Jan 2019 12:38:28 +0000. Up 39.55 seconds.
[   40.163789] cloud-init[827]: ci-info: +++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++
[   40.167100] cloud-init[827]: ci-info: +--------+-------+------------------------------+-----------+-------+-------------------+
[   40.170596] cloud-init[827]: ci-info: | Device |   Up  |           Address            |    Mask   | Scope |     Hw-Address    |
[   40.173904] cloud-init[827]: ci-info: +--------+-------+------------------------------+-----------+-------+-------------------+
[   40.177804] cloud-init[827]: ci-info: |  ens3  |  True |              .               |     .     |   .   | fa:16:3e:56:d8:b2 |
[   40.182129] cloud-init[827]: ci-info: |  ens3  |  True | fe80::f816:3eff:fe56:d8b2/64 |     .     |  link | fa:16:3e:56:d8:b2 |
[   40.185976] cloud-init[827]: ci-info: |  ens4  | False |              .               |     .     |   .   | fa:16:3e:e4:59:1d |
[   40.190119] cloud-init[827]: ci-info: |   lo   |  True |          127.0.0.1           | 255.0.0.0 |   .   |         .         |
[   40.193699] cloud-init[827]: ci-info: |   lo   |  True |           ::1/128            |     .     |  host |         .         |
[   40.197368] cloud-init[827]: ci-info: +--------+-------+------------------------------+-----------+-------+-------------------+
  • I set the timeout of networking.service to 30 seconds, so I won't have to wait 5 minutes.
  • Afterwards, there's no ip on ens3, and no connectivity.

More details:

On compute node, nova-compute.log

INFO nova.compute.manager [req-ff000448-ae13-4d5a-b51b-1747d811a83c - - - - -] [instance: ab350c3d-0156-4408-bed0-eadf708c281e] VM Resumed (Lifecycle Event)
INFO nova.virt.libvirt.driver [-] [instance: ab350c3d-0156-4408-bed0-eadf708c281e] Instance running successfully.
INFO nova.virt.libvirt.driver [req-98bab187-154e-44df-b433-356c815a3e5a 1584e0d5ecea4f98a010c9ba0209f5a0 abbe58c749a546589f5322d99e69eff9 - default default] [instance: ab350c3d-0156-4408-bed0-eadf708c281e] Instance soft rebooted successfully.
INFO nova.compute.manager [req-ff000448-ae13-4d5a-b51b-1747d811a83c - - - - -] [instance: ab350c3d-0156-4408-bed0-eadf708c281e] During sync_power_state the instance has a pending task (reboot_started). Skip.
INFO nova.compute.manager [req-ff000448-ae13-4d5a-b51b-1747d811a83c - - - - -] [instance: ab350c3d-0156-4408-bed0-eadf708c281e] VM Started (Lifecycle Event)
WARNING nova.compute.manager [req-1ec61c72-08e5-4081-867a-3bd28861fa68 c4c08da72442430a9ffb1508bfad551b 3bc27738ecf7487a9b7090ecac162d5e - default default] [instance: ab350c3d-0156-4408-bed0-eadf708c281e] Received unexpected event network-vif-plugged-326b4b3b-f66a-418b-8d6d-b32280079956 for instance with vm_state active and task_state None.
WARNING nova.compute.manager [req-167abffc-3d25-476b-a312-506c1db1df09 c4c08da72442430a9ffb1508bfad551b 3bc27738ecf7487a9b7090ecac162d5e - default default] [instance: ab350c3d-0156-4408-bed0-eadf708c281e] Received unexpected event network-vif-unplugged-681e4bee-6fe1-47f0-83a8-be928b21ee77 for instance with vm_state active and task_state None.
WARNING nova.compute.manager [req-2c2c2d1c-c34a-4bab-82fc-d82476c18fb8 c4c08da72442430a9ffb1508bfad551b 3bc27738ecf7487a9b7090ecac162d5e - default default] [instance: ab350c3d-0156-4408-bed0-eadf708c281e] Received unexpected event network-vif-plugged-681e4bee-6fe1-47f0-83a8-be928b21ee77 for instance with vm_state active and task_state None.
INFO nova.compute.resource_tracker [req-60faa024-6c3d-4706-aa05-985f5efb8e7d - - - - -] Final resource view: name=openstack phys_ram=32154MB used_ram=17408MB phys_disk=273GB used_disk=21GB total_vcpus=12 used_vcpus=5 pci_stats=[PciDevicePool(count=1,numa_node=None,product_id='1572',tags={dev_type='type-PF'},vendor_id='8086')

root@openstack:~# openstack server show vm1
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | AUTO                                                     |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | openstack                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname | openstack                                                |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000002                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2019-01-27T12:36:06.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | internal=172.16.1.108; datapath=172.16.2.107             |
| config_drive                        |                                                          |
| created                             | 2019-01-27T12:34:18Z                                     |
| flavor                              | vm_flavor (b6a83b01-7120-4047-9994-aa6f02c8d4bd ...
(more)
2018-11-26 11:37:50 -0500 asked a question  Can't spawn instances due to no ram

I was able to install openstack on centos using packstack (this https://www.tecmint.com/openstack-installation-guide-rhel-centos/ (manual)). I have one machine that runs all openstack modules - hypervisor and compute.

I was able to spawn an ubuntu16.04 cloud image once, with flavor m1.small.
I had to delete it and create a new instance because of a wrong key pair.
Since then, I haven't been able to spawn any instance.

I can see the following error message in nova-compute.log:

qemu-kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory

free -h

          total        used        free      shared  buff/cache   available
Mem:            15G         14G        222M         54M        356M        120M
Swap:          6.8G        6.8G         20K

I don't understand where all the RAM went.

This is what top shows (no single process uses that much RAM).

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                 
733 apache    20   0 1495388 253644   2876 S   5.6  1.6   3:02.59 httpd                                                                                                   
2129 mysql     20   0 9079668 250452   3616 S  27.8  1.5 211:05.47 mysqld                                                                                                  
734 apache    20   0 1371060 171732   2328 S   0.0  1.1   1:02.65 httpd                                                                                                   
23638 nova      20   0  513640 163888   1632 S   0.0  1.0   0:21.08 nova-api                                                                                                
23633 nova      20   0  512032 163560   2012 R  88.9  1.0  44:30.40 nova-api                                                  
...

I don't think this is a case of Linux using the memory for io cache (as described https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/tuning_and_optimizing_red_hat_enterprise_linux_for_oracle_9i_and_10g_databases/chap-oracle_9i_and_10g_tuning_guide-memory_usage_and_page_cache (here)), because you can see buff/cache is only 356M.
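
The same numbers can be read straight from /proc/meminfo, which is where free gets them (a small sketch):

```shell
# Print free, available and page-cache memory in MB
awk '/^(MemFree|MemAvailable|Cached):/ {printf "%s %.0f MB\n", $1, $2/1024}' /proc/meminfo
```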

On the other hand, this is what openstack hypervisor show 1 -f json shows:

...
  "memory_mb_used": 16016, 
  "uptime": "5 days,  3:05", 
  "host_time": "08:02:38", 
  "local_gb_used": 12, 
  "id": 1, 
  "current_workload": 0, 
  "state": "up", 
  "load_average": "5.01, 9.39, 6.09", 
  "users": "4", 
  "aggregates": [], 
  "host_ip": "192.168.201.209", 
  "hypervisor_hostname": "cloud.centos.lan", 
  "hypervisor_version": 2010000, 
  "disk_available_least": 27, 
  "local_gb": 40, 
  "free_ram_mb": 15861, 
  "vcpus_used": 0, 
  "hypervisor_type": "QEMU", 
  "memory_mb": 16373, 
  "vcpus": 16, 
  "running_vms": 0, 
  "service_id": 9, 
  "service_host": "cloud.centos.lan"
}

  • Is there something I can do in order to start instances?
  • What is taking all this RAM?
  • I tried flushing the https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/ (Linux io cache) without any help.

Why are there so many processes taking all of my physical RAM?

 ps -e -o pid,vsz,rss,comm= | sort -n -k 3  
 ...  
 7999 623252 96480 httpd  
 8019 623252 96648 httpd  
 8068 623252 96824 httpd  
 7920 623500 96960 httpd  
 8064 623508 96968 httpd  
 8023 621676 97256 httpd  
 8018 623764 97312 httpd  
 7932 624012 97404 httpd  
24668 418764 97524 nova-scheduler  
 7910 624268 97704 httpd  
 7905 624268 97780 httpd  
 7998 624020 97940 httpd  
 7924 624524 97964 httpd  
 7940 624532 98088 httpd  
 7930 624524 98908 httpd  
 7936 624788 99212 httpd  
23635 501564 99348 nova-api  
23625 498908 100724 nova-api  
 8360 623484 101504 httpd  
 8382 623484 101504 httpd  
 8392 623484 101504 httpd  
 8393 623228 101504 httpd  
 8428 623484 101508 httpd  
 8336 623484 101512 httpd  
 8369 623228 101512 httpd  
 8410 623484 101512 httpd  
 8414 623484 ...
(more)
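
To put a number on it, the RSS column from ps can simply be summed (a sketch; note that RSS double-counts shared pages, e.g. across the httpd workers, so the total is an upper bound):

```shell
# Sum resident set sizes (KB) of all processes, report in MB
ps -e -o rss= | awk '{s+=$1} END {printf "total RSS: %.0f MB\n", s/1024}'
```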