
Herr-Herner's profile - activity

2019-08-06 16:33:47 -0500 received badge  Famous Question (source)
2019-08-06 16:33:47 -0500 received badge  Popular Question (source)
2019-08-06 16:33:47 -0500 received badge  Notable Question (source)
2017-08-17 21:53:46 -0500 received badge  Notable Question (source)
2017-08-17 21:53:46 -0500 received badge  Famous Question (source)
2017-04-25 08:06:31 -0500 answered a question Can't Ping Instances or SSH into Instances

You must explicitly enable promiscuous mode on the virtual switch to which your VMware VM is attached; otherwise, all incoming traffic targeted at MAC addresses unknown to the virtual switch gets blocked. The virtual switch in VMware does not work like a real one: there is no forwarding table.

Have a look here: https://kb.vmware.com/selfservice/sea...
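
If the VM runs on an ESXi host, enabling it from the host shell looks roughly like this (a sketch only; vSwitch0 is a placeholder for the virtual switch your VM is attached to, and the option names should be double-checked against your ESXi version):

# allow promiscuous mode on the standard vSwitch
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true
# verify the effective security policy
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0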

2017-03-06 08:27:17 -0500 commented question [Libvirt] VM does not boot in Ocata (libvirt.xml missing, wrong owner for console.log)

I solved the issue by adding libvirt.hw_machine_type = "x86_64=pc-i440fx-xenial,i686=pc-i440fx-xenial" to my nova-compute.conf.
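
For reference, the relevant part of nova-compute.conf then looks like this (only the option mentioned above is shown; the rest of my [libvirt] section is omitted):

[libvirt]
# pin the guest machine type to the xenial i440fx variant instead of the newer default
hw_machine_type = x86_64=pc-i440fx-xenial,i686=pc-i440fx-xenial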

2017-03-06 06:25:47 -0500 received badge  Famous Question (source)
2017-03-03 18:05:45 -0500 received badge  Notable Question (source)
2017-03-03 04:17:13 -0500 received badge  Popular Question (source)
2017-03-03 03:48:08 -0500 received badge  Editor (source)
2017-03-03 03:45:24 -0500 answered a question [Libvirt] VM does not boot in Ocata (libvirt.xml missing, wrong owner for console.log)

There is another strange thing. I am on Ubuntu 16.04.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial

When I create a VM, a corresponding XML file gets created at /etc/libvirt/qemu/instance-[id].xml:

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit instance-0000003c
or other application using the libvirt API.
-->

<domain type='kvm'>
  <name>instance-0000003c</name>
  <uuid>7151c8b7-1ea5-4701-bb79-b482d9e253b8</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="15.0.0"/>
      <nova:name>test-vm01</nova:name>
      <nova:creationTime>2017-03-03 08:58:02</nova:creationTime>
      <nova:flavor name="mat.medium">
        <nova:memory>4096</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="17aafbe8a8754c2a8e4de9dc1f0643dc">admin</nova:user>
        <nova:project uuid="53eed6212bd54ff5acc4ad110e9899ac">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="6112eee2-4bbb-450d-b132-e8b0fdaf03f2"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>OpenStack Foundation</entry>
      <entry name='product'>OpenStack Nova</entry>
      <entry name='version'>15.0.0</entry>
      <entry name='serial'>3b3fc34f-f5f8-6258-950b-284758b6d5ce</entry>
      <entry name='uuid'>7151c8b7-1ea5-4701-bb79-b482d9e253b8</entry>
      <entry name='family'>Virtual Machine</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-zesty'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>SandyBridge</model>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <auth username='cinder'>
        <secret type='ceph' uuid='e5479084-e43e-4a1e-959a-b9989f02e632'/>
      </auth>
      <source protocol='rbd' name='vms/7151c8b7-1ea5-4701-bb79-b482d9e253b8_disk'>
        <host name='10.30.200.141' port='6789'/>
        <host name='10.30.200.142' port='6789'/>
        <host name='10.30.200.143' port='6789'/>
      </source>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='bridge'>
      <mac address='02:05:69:29:af:e1'/>
      <source bridge='qbr5663b508-00'/>
      <target dev='tap5663b508-00'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <log file='/var/lib/nova/instances/7151c8b7-1ea5-4701-bb79-b482d9e253b8/console.log' append='off'/>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <log file='/var/lib/nova/instances/7151c8b7-1ea5-4701-bb79-b482d9e253b8/console.log' append='off'/>
      <target type='serial ...
2017-03-02 13:09:35 -0500 asked a question [Libvirt] VM does not boot in Ocata (libvirt.xml missing, wrong owner for console.log)

I am setting up OpenStack Ocata as an HA deployment on Ubuntu 16.04, and I am confronted with a serious blocker. I have set up Keystone, Glance, Nova, Neutron and Horizon. I am able to create and spawn a VM, but it does not boot.

The block device is created and available in my Ceph cluster. Libvirt gives me only:

Mar 2 17:09:11 os-compute01 virtlogd[3218]: End of file while reading data: Input/output error

The instances folder located under /var/lib/nova/instances contains a folder for the VM, but it contains only the empty file console.log and NO libvirt.xml. Another strange thing is that console.log is owned by root:root. Under Newton the file is owned by libvirt-qemu:kvm. The user libvirt-qemu is not able to write to that file.

I have tried to correct the ownership manually and copied the VM specification file from /etc/libvirt/qemu/instance-*.xml to /var/lib/nova/instances/[VM-UUID]/libvirt.xml. After rebooting the VM, console.log is owned by root:root again and the VM still does not boot; maybe the file is just removed and recreated.
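
Roughly what I did by hand, for reference ([VM-UUID] stands for the instance UUID as above, and the ownership is the one Newton used):

# give the qemu user write access to the console log again
chown libvirt-qemu:kvm /var/lib/nova/instances/[VM-UUID]/console.log
# copy the generated domain definition to where libvirt.xml used to live
cp /etc/libvirt/qemu/instance-*.xml /var/lib/nova/instances/[VM-UUID]/libvirt.xml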

There seems to be a problem in the current Ocata release of Nova.

Any hints on how to work around or fix the problem?

Thank you very much!

This is my nova.conf on os-compute01:

[DEFAULT]
compute_monitors = cpu.virt_driver
debug = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
host = os-compute01
instance_usage_audit = true
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
instances_path = $state_path/instances
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
log_dir = /var/log/nova
memcached_servers = os-memcache:11211
my_ip = 10.30.200.111
notification_driver = messagingv2
resume_guests_state_on_host_boot = true
state_path = /var/lib/nova
transport_url = rabbit://nova:SECRET@os-rabbit01:5672,nova:SECRET@os-rabbit02:5672/openstack
use_neutron = true

[api]
auth_strategy = keystone

[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = os-memcache:11211

[cinder]
catalog_info = volumev2:cinderv2:internalURL

[conductor]
use_local = false

[glance]
api_servers = http://os-image:9292

[keystone_authtoken]
auth_type = password
auth_uri = http://os-identity:5000
auth_url = http://os-identity:35357
memcached_servers = os-memcache:11211
password = SECRET
project_domain_name = default
project_name = service
service_token_roles_required = true
user_domain_name = default
username = nova

[neutron]
auth_type = password
auth_uri = http://os-identity:5000
auth_url = http://os-identity:35357
password = SECRET
project_domain_name = default
project_name = service
region_name = RegionOne
url = http://os-network:9696
user_domain_name = default
username = neutron

[oslo_concurrency]
lock_path = /var/lock/nova

[oslo_messaging_rabbit]
amqp_durable_queues = true
rabbit_ha_queues = true
rabbit_retry_backoff = 2
rabbit_retry_interval = 1

[placement]
auth_type = password
auth_uri = http://os-identity:5000
auth_url = http://os-identity:35357
username = placement
password = SECRET
user_domain_name = default
project_name = service
project_domain_name = default
os_interface = internal
os_region_name = RegionOne

[vnc]
enabled = true
novncproxy_base_url = https://os-cloud.materna.com:6080/vnc_auto.html
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip

and my nova-compute.conf:

[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
cpu_mode = custom
cpu_model = SandyBridge
disk_cachemodes="network=writeback"
hw_disk_discard = unmap
images_rbd_ceph_conf = /etc/ceph/ceph.conf
images_rbd_pool = vms
images_type = rbd
inject_key = false
inject_partition = -2
inject_password = false
rbd_secret_uuid = SECRET
rbd_user = cinder
virt_type = kvm
2016-12-30 12:29:37 -0500 received badge  Famous Question (source)
2016-12-30 12:29:37 -0500 received badge  Notable Question (source)
2016-12-30 12:29:37 -0500 received badge  Popular Question (source)
2016-09-01 04:30:34 -0500 received badge  Famous Question (source)
2016-08-18 06:49:43 -0500 received badge  Famous Question (source)
2016-07-28 23:09:38 -0500 received badge  Famous Question (source)
2016-07-26 23:45:11 -0500 received badge  Notable Question (source)
2016-07-12 10:10:13 -0500 received badge  Popular Question (source)
2016-07-05 00:33:19 -0500 commented question Does DVR require promiscuous mode on compute nodes?

We have installed and configured OpenStack "Mitaka" for high availability (HA). We use 2 controllers with a load balancer (HAProxy) and a virtual IP managed by keepalived. We have not set up DVR yet, because we require approval for promiscuous mode from our IT department.

2016-07-03 15:40:34 -0500 received badge  Popular Question (source)
2016-07-01 01:46:30 -0500 asked a question Does DVR require promiscuous mode on compute nodes?

I have a Mitaka HA deployment running and want to use DVR. I am using Open vSwitch with GRE. But I have a small question; hopefully I got it right: in order to set up DVR, each compute node requires a connection to the external network and a br-ex attached to this port. In this case, the port must have promiscuous mode enabled. Is that right?
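
For illustration, the bridge setup I have in mind on each compute node is roughly this (eth2 is just a placeholder for the NIC connected to the external network):

# create the external bridge and attach the external NIC to it
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2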

2016-06-20 09:52:53 -0500 asked a question Endpoint does not support RPC version 2.0. Attempted method: update_service_capabilities

I have OpenStack Mitaka running on Ubuntu 16.04. It is an HA deployment with 2 controllers and 3 compute nodes. I am using Open vSwitch. Everything seems to be working, but on one compute node I have an error in neutron-openvswitch-agent.log:

2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher [-] Exception during message handling: Endpoint does not support RPC version 2.0. Attempted method: update_service_capabilities
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher incoming.message))
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _dispatch
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher raise UnsupportedVersion(version, method=method)
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher UnsupportedVersion: Endpoint does not support RPC version 2.0. Attempted method: update_service_capabilities
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher

I have restarted nova-compute and neutron-openvswitch-agent on the compute node, but the problem remains.

These are my configuration files on the compute node:

neutron.conf:

[DEFAULT]
auth_strategy = keystone
debug = false
host = os-compute01
rpc_backend = rabbit

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_type = password
auth_uri = http://os-controller:5000
auth_url = http://os-controller:35357
memcached_servers = os-memcache:11211
password = ******
project_domain_name = default
project_name = service
user_domain_name = default
username = neutron

[oslo_concurrency]
lock_path = $state_path/lock

[oslo_messaging_notifications]
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = true
rabbit_ha_queues = true
rabbit_hosts = os-rabbit01:5672,os-rabbit02:5672
rabbit_max_retries = 0
rabbit_password = ******
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_userid = neutron
rabbit_virtual_host = openstack

openvswitch_agent.ini:

[agent]
arp_responder = true
l2_population = true
tunnel_types = gre

[ovs]
integration_bridge = br-int
local_ip = 10.30.19.50
tunnel_bridge = br-tun

[securitygroup]
enable_ipset = true
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

I have added some debug output in the oslo dispatcher sources:

message: {u'args': {u'service_name': u'volume', u'host': u'os-blockstorage@rbd', u'capabilities': {u'filter_function': None, u'goodness_function': None, u'multiattach': False, u'volume_backend_name': u'RBD', u'free_capacity_gb': 1446.45, u'driver_version': u'1.2.0', u'total_capacity_gb': 1446.45, u'reserved_percentage': 0, u'vendor_name': u'Open Source', u'storage_protocol': u'ceph'}}, u'version': u'2.0', u'method': u'update_service_capabilities'} 
method: update_service_capabilities
namespace: None
version: 2.0
endpoint: <neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent object at 0x7f69ed001450>
target: <Target version=1.4>

What is going on here? Is this right? The message contains 'os-blockstorage@rbd'. In my deployment, each controller runs a cinder-volume service, but because of load balancer issues both services use the same hostname, which is mapped to the VIP; otherwise Cinder causes issues when one controller node fails. The endpoint that gets checked is the OVSNeutronAgent, which has version 1.4.

Does anybody have an idea how to solve this issue?

2016-06-16 13:55:11 -0500 received badge  Famous Question (source)
2016-06-15 06:27:22 -0500 received badge  Famous Question (source)
2016-06-09 09:21:21 -0500 answered a question Using http proxy in Horizon for image download

I think I can answer my own question. The problem was a misconfiguration of Glance; Horizon has nothing to do with the issue described above.

glance-api.conf requires the following additional entries:

stores = file,http,...
http_proxy_information = http:os-proxy:3128,https:os-proxy:3128,ftp:os-proxy:3128

Now the image file gets downloaded from the remote location specified in Horizon's UI.
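
Spelled out as a complete block, the change looks roughly like this (in my Mitaka install these options belong to the [glance_store] section of glance-api.conf; only the file and http stores are listed here, and the proxy host and ports are specific to our environment):

[glance_store]
# enable the http store so that image locations given as URLs are fetched by glance-api
stores = file,http
# route those fetches through the corporate proxy, per URL scheme
http_proxy_information = http:os-proxy:3128,https:os-proxy:3128,ftp:os-proxy:3128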

2016-06-09 07:19:55 -0500 asked a question Using http proxy in Horizon for image download

We must use an HTTP proxy to get access to the internet. I have set up OpenStack Mitaka, but I cannot find a way to tell Horizon (Dashboard) to use our HTTP proxy for image creation when someone states a valid URL in the UI as the image location. I have already created a proxy.sh script in the folder /etc/profile.d which exports the variables http_proxy, https_proxy and no_proxy. When I use the glance CLI for image creation, everything works as expected and the image gets downloaded via the HTTP proxy, but it does not work from within Horizon's UI.

Does anyone have an idea how to get image download working when Horizon has to use an HTTP proxy?

2016-05-07 05:20:26 -0500 received badge  Notable Question (source)
2016-03-31 07:01:29 -0500 received badge  Favorite Question (source)
2016-03-04 07:37:27 -0500 received badge  Notable Question (source)
2016-03-04 07:37:27 -0500 received badge  Famous Question (source)
2016-03-04 07:37:22 -0500 received badge  Notable Question (source)
2016-01-22 13:15:28 -0500 received badge  Famous Question (source)
2016-01-15 23:48:49 -0500 commented question Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id': Network is unreachable

No, I cannot ping 169.254.169.254. The routing table is completely empty. The reason for this is that I do not get any IP from the dhcp-agent. I made a tcpdump on the VM's tap interface. The DHCP requests are received but not transmitted by the tap interface. Any thoughts?
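
For reference, the capture was along these lines (a sketch; the tap device name is a placeholder for the one belonging to the instance's port):

# watch DHCP traffic on the instance's tap device on the compute node
tcpdump -n -e -i tapXXXXXXXX-XX port 67 or port 68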

2016-01-15 16:28:20 -0500 received badge  Popular Question (source)
2016-01-15 00:55:14 -0500 received badge  Notable Question (source)
2016-01-13 08:32:05 -0500 received badge  Good Question (source)