
aso726's profile - activity

2014-07-29 12:53:54 -0500 received badge  Famous Question (source)
2014-07-28 18:00:00 -0500 received badge  Notable Question (source)
2014-06-26 22:25:00 -0500 received badge  Notable Question (source)
2014-06-26 22:25:00 -0500 received badge  Popular Question (source)
2014-03-31 02:30:31 -0500 received badge  Popular Question (source)
2013-08-24 16:08:35 -0500 asked a question Do I need a separate database for each cinder instance?

I currently have one cinder instance but I need to create another one with a different endpoint. Do I need to point it to a different database?
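
To make the question concrete: by "a different database" I mean giving the second instance its own connection string in its cinder.conf, something like this (host and database names are made up):

    # /etc/cinder/cinder.conf on the second instance
    # (hypothetical host/credentials -- adjust to your environment)
    [DEFAULT]
    sql_connection = mysql://cinder:CINDER_PASS@db-host/cinder2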

2013-07-24 17:49:50 -0500 received badge  Student (source)
2013-05-30 13:32:02 -0500 received badge  Famous Question (source)
2013-05-14 15:50:47 -0500 received badge  Notable Question (source)
2013-05-14 11:30:30 -0500 received badge  Famous Question (source)
2013-05-13 09:52:00 -0500 received badge  Popular Question (source)
2013-05-09 17:54:58 -0500 answered a question How to configure the DHCP server so that it can assign IP to VMs?

169.254.169.254 is the metadata service address. Traffic to that IP is proxied to the nova-api service listening on port 8775, which instances query for metadata and user data to run post-install scripts (cloud-init). Don't worry about that yet while setting up DHCP. Are you using quantum or nova-network?
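
For reference, the proxying is typically just a NAT rule. With nova-network it looks something like this (illustrative rule; the chain name and metadata host are environment-specific):

    # Redirect metadata traffic to the nova-api metadata port
    # (10.0.0.1 is a placeholder for your nova-api host)
    iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp \
        --dport 80 -j DNAT --to-destination 10.0.0.1:8775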

2013-05-09 16:29:12 -0500 asked a question How to set up metadata service on a flat network?

I've seen docs, but they all say to set up an L3 agent. In our case, our VMs all sit on a flat network. How do I set up a metadata agent without an L3 agent?
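
For what it's worth, I've seen the DHCP agent's isolated-metadata option mentioned as a possible route, though I haven't verified it and whether it applies may depend on the release:

    # /etc/quantum/dhcp_agent.ini
    # assumed option; check that your release supports it
    [DEFAULT]
    enable_isolated_metadata = True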

2013-05-08 14:36:29 -0500 answered a question openstack tries to use cache=none when using glusterfs, when it should be writethrough

Thanks for the reply, eharney. I found online that O_DIRECT over FUSE requires kernel version 3.4 or higher. I upgraded the kernel from 3.2 to 3.5 and now it works fine. Thanks!

2013-05-07 13:30:03 -0500 received badge  Notable Question (source)
2013-05-06 16:10:18 -0500 received badge  Popular Question (source)
2013-05-03 16:49:24 -0500 asked a question openstack tries to use cache=none when using glusterfs, when it should be writethrough

Hi all,

I noticed in the source code for LibvirtDriver that there is already logic to fall back to cache=writethrough for nova-compute hosts running on glusterfs. For some reason my configuration isn't triggering it, which causes KVM to fail to start the VM.

root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# cat libvirt.xml 
<domain type="kvm">
  <uuid>648d9178-3099-466e-861e-0476b3e0aa24</uuid>
  <name>instance-0000002c</name>
  <memory>2097152</memory>
  <vcpu>1</vcpu>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">OpenStack Foundation</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">2013.1</entry>
      <entry name="serial">0bd50aa7-214c-fe5b-662c-0025902d7f96</entry>
      <entry name="uuid">648d9178-3099-466e-861e-0476b3e0aa24</entry>
    </system>
  </sysinfo>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
    <smbios mode="sysinfo"/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
  </clock>
  <cpu mode="host-model" match="exact"/>
  <devices>
    <disk type="file" device="cdrom">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24/disk"/>
      <target bus="ide" dev="hda"/>
    </disk>
    <disk type="file" device="disk">
      <driver name="qemu" type="raw" cache="none"/>
      <source file="/var/lib/nova/mnt/6408bb2f62fd5c6a9277a4fcfd55f3a6/volume-4bc652a3-97ea-44a2-b12a-1e38993010a3"/>
      <target bus="virtio" dev="vda"/>
      <serial>4bc652a3-97ea-44a2-b12a-1e38993010a3</serial>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:64:a2:0a"/>
      <model type="virtio"/>
      <source bridge="br100"/>
      <filterref filter="nova-instance-instance-0000002c-fa163e64a20a"/>
    </interface>
    <serial type="file">
      <source path="/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24/console.log"/>
    </serial>
    <serial type="pty"/>
    <input type="tablet" bus="usb"/>
    <graphics type="vnc" autoport="yes" keymap="en-us" listen="10.16.201.1"/>
  </devices>
</domain>

Here's what happens when I manually try to start the VM using that XML file:

root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# virsh define libvirt.xml 
Domain instance-0000002c defined from libvirt.xml

root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# virsh start instance-0000002c
error: Failed to start domain instance-0000002c
error: Unable to read from monitor: Connection reset by peer

After I change it to writethrough, it works fine:

root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# sed -i 's/cache="none"/cache="writethrough"/g' libvirt.xml 
root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# virsh define libvirt.xml 
Domain instance-0000002c defined from libvirt.xml

root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# virsh start instance-0000002c
Domain instance-0000002c started

Does anyone know why it isn't detecting that I'm using GlusterFS? The source code clearly says it should:

/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py

    @property
    def disk_cachemode(self):
        if self._disk_cachemode is None:
            # We prefer 'none' for consistent performance, host crash
            # safety & migration correctness by avoiding host page cache.
            # Some filesystems (e.g. GlusterFS via FUSE) don't support
            # O_DIRECT though. For those we fall back to 'writethrough'
            # which gives host crash safety, and is safe for migration
            # provided the filesystem is cache coherent (cluster filesystems
            # typically are, but things like NFS are not).
            self._disk_cachemode = "none"
            if not self._supports_direct_io(CONF.instances_path):
                self._disk_cachemode = "writethrough"
        return self._disk_cachemode
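
To test what that fallback logic decides for my instances path, I put together this standalone probe (my own sketch modeled on the code above, not nova's exact implementation):

    # Probe whether a path supports O_DIRECT, in the spirit of
    # nova's _supports_direct_io(). Example:
    #   check_direct_io('/var/lib/nova/instances')
    import mmap
    import os

    def check_direct_io(path):
        testfile = os.path.join(path, '.directio.test')
        fd = None
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            # O_DIRECT requires block-aligned buffers; an anonymous
            # mmap region gives us a page-aligned 4 KB buffer.
            buf = mmap.mmap(-1, 4096)
            os.write(fd, buf)
            return True
        except OSError:
            return False
        finally:
            if fd is not None:
                os.close(fd)
            if os.path.exists(testfile):
                os.unlink(testfile)

    print(check_direct_io('/var/lib/nova/instances'))

If this returns False for the glusterfs-backed path, the driver should be selecting writethrough, which is exactly what I'm not seeing.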

2013-05-01 11:53:46 -0500 asked a question Instances in the same broadcast domain as the rest of the physical servers in my site

I'm a little confused about how to set this up because there don't seem to be many guides for FlatNetworking. If I install all the components (all the APIs, rabbitmq, mysql) on a single physical server, which components do I need to install on my hypervisor (KVM)?

Right now I have two servers in my lab that I'm testing with:

One regular server with two NICs and a separate partition holding the "nova-volumes" volume group.

Another server, also with two NICs, which I plan to use as the hypervisor.

The hypervisor has a management IP on one NIC; the other interface is configured strictly as a bridge with no static IP.

What components do I need to install on my hypervisor? Is nova-compute a must? Sorry, I'm a little confused.
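
For context, here is roughly what I understand a flat-DHCP setup to carry in nova.conf on the compute side (values are placeholders; part of my question is whether this is even right):

    # nova.conf fragment, flat DHCP networking
    # (bridge/interface names are placeholders)
    network_manager=nova.network.manager.FlatDHCPManager
    flat_network_bridge=br100
    flat_interface=eth1
    public_interface=eth0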