
Davide's profile - activity

2019-01-10 10:28:36 -0600 received badge  Teacher (source)
2019-01-10 10:28:36 -0600 received badge  Self-Learner (source)
2017-05-02 04:05:08 -0600 received badge  Famous Question (source)
2016-08-02 05:04:05 -0600 received badge  Notable Question (source)
2016-06-06 09:19:48 -0600 received badge  Popular Question (source)
2016-05-30 09:00:00 -0600 answered a question Ephemeral disk formatted as vfat?

Hi,

after some more digging in the source code, I have found that there is a default "mounts" configuration that looks like this:

defmnts = [["ephemeral0", "/mnt", "auto", defvals[3], "0", "2"],
          ["swap", "none", "swap", "sw", "0", "0"]]

Thus the ephemeral disk gets auto-mounted on /mnt. The strange thing is that a 10G device gets a vfat partition when the filesystem is left on "auto".

The following cloud-config changes the default behaviour and allows a custom configuration:

disk_setup:
  ephemeral0:
    type: 'gpt'
    layout: True
    overwrite: True

fs_setup:
  - label: opt
    filesystem: 'ext4'
    device: '/dev/vdb1'
    partition: 'auto'
    overwrite: True

mounts:
 - [ ephemeral0, null ]
 - [ /dev/vdb1, /opt, "ext4"]

A few things to note:

  • The ephemeral line under the "mounts" key overrides the default configuration
  • The device name under "disk_setup" has to be called ephemeral0 (vdb won't work)
  • Our Trusty images contain a patched version of cloud-init, with a working cc_disk_setup.py
  • The sfdisk errors only affect Xenial images, so this config fails on Xenial.
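To confirm the result after boot, you can check what `mount` reports for the device. A minimal sketch (the helper name is mine; it just parses a line of `mount` output):

```shell
# Hypothetical helper: extract the filesystem type from one line of
# `mount` output, e.g. to confirm /dev/vdb1 came up as ext4 on /opt.
fstype_of() {
  # `mount` prints: <device> on <mountpoint> type <fstype> (<options>)
  echo "$1" | awk '{ for (i = 1; i < NF; i++) if ($i == "type") print $(i + 1) }'
}

# Against the vfat line from the original question:
fstype_of "/dev/vdb on /mnt type vfat (rw,relatime)"   # prints: vfat
```

After a successful boot with the config above, `fstype_of "$(mount | grep vdb1)"` should print ext4 instead.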

Cheers, Davide

2016-05-30 05:17:52 -0600 asked a question Ephemeral disk formatted as vfat?

Hello,

I'm trying to set up a flavor with ephemeral disks; however, the ephemeral disk (vdb) is auto-magically formatted as vfat and mounted on /mnt.

$ mount | grep vdb 
/dev/vdb on /mnt type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)

There is no trace of the formatting and mounting of this partition in the logs (cloud-init + syslog), nor can I find anything in the cloud-init config files.

Using a disk_setup section in the cloud-config data fails:

2016-05-25 15:32:33,022 - util.py[WARNING]: Failed partitioning operation
Failed to partition device /dev/vdb
Unexpected error while running command.
Command: ['/sbin/sfdisk', '--Linux', '-uM', '/dev/vdb']
Exit code: 1
Reason: -
Stdout: ''
Stderr: "sfdisk: --Linux option is unnecessary and deprecated\nsfdisk: unsupported unit 'M'\n"

It is unclear whether it fails because the device is already mounted or because unsupported options are passed to sfdisk.
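For reference, the stderr above comes from newer sfdisk (util-linux ≥ 2.26), which dropped `--Linux` and the `-uM` unit flag; the layout is described in the input script instead. A sketch against a file-backed image, so no real device is touched:

```shell
# Create a small disposable image instead of a real device.
truncate -s 16M disk.img

# Modern sfdisk input: "start,size,type". ',,L' means one Linux
# partition spanning the whole disk; no --Linux/-uM flags needed.
echo ',,L' | sfdisk disk.img

# Dump the resulting partition table.
sfdisk -d disk.img
```

This is only a syntax illustration of what the cloud-init module would need to emit on Xenial; the fix itself belongs in cc_disk_setup.py.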

We are using the Liberty release of OpenStack, and the tests were done with the Ubuntu cloud images for the Trusty and Xenial releases. The problem is the same on both.

  • Does anyone know how I can stop this "vfat mount" from happening?
  • Why are the wrong options passed to sfdisk?
  • Is this a bug, a misconfiguration, or a mix of both?

Thanks in advance for your help, Davide

2016-03-22 14:08:38 -0600 answered a question Magnum bay create fails with the error (a user and password or token is required. (HTTP 500))

Hi,

I think the [trust] section is missing from magnum.conf:

[trust]
trustee_domain_admin_password = %MAGNUM_DOMAIN_ADMIN_PASS%
trustee_domain_admin_id = %MAGNUM_DOMAIN_ADMIN_ID%
trustee_domain_id = %MAGNUM_DOMAIN_ID%

Of course, the resources should also exist in Keystone. This follows the same logic as the Heat stack domain: you need a magnum domain and a user with the admin role for that domain.
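The Keystone side can be sketched roughly like this (the domain and user names are my own choices, not required values; adapt them to your deployment and then point the [trust] options at the resulting IDs):

```shell
# Hypothetical setup for the Magnum trustee domain, mirroring the
# Heat stack-domain pattern. Wrapped in a function so nothing runs
# until you call it against a real cloud with admin credentials.
setup_magnum_domain() {
  # A dedicated domain to own users/projects created by Magnum.
  openstack domain create magnum \
    --description "Owns users and projects created by Magnum"

  # The domain-admin user referenced by trustee_domain_admin_id.
  openstack user create magnum_domain_admin \
    --domain magnum --password-prompt

  # Grant that user the admin role on the magnum domain.
  openstack role add --domain magnum --user magnum_domain_admin admin
}
```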

2016-03-22 14:08:38 -0600 answered a question How to attached binary file as a userdata

Hi,

there are two ways to pass a binary file to cloud-init; take a look at this:

http://cloudinit.readthedocs.org/en/l...
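One of those ways, sketched from memory: cloud-init transparently decompresses gzip-compressed user-data, which keeps binary content intact through the metadata channel.

```shell
# Example binary payload (octal escapes stand in for real bytes).
printf 'payload:\001\002\003\377' > payload.bin

# Compress it; cloud-init decompresses gzip'd user-data on its own.
gzip -9 -c payload.bin > user-data.gz

# Pass it at boot time (commented out: needs a real cloud):
# nova boot --user-data user-data.gz --image <img> --flavor <flv> myvm
```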

2013-11-25 09:27:08 -0600 received badge  Famous Question (source)
2013-11-13 15:19:13 -0600 received badge  Great Question (source)
2013-10-19 09:46:00 -0600 received badge  Good Question (source)
2013-08-14 06:42:06 -0600 commented question What causes Metadata service to be very slow?

I've finally abandoned the EC2 metadata and configured the OpenStack config drive in Nova. This solves any slowness at boot. The EC2 metadata is still slow, but it's not a problem anymore. When using cloud-init data at boot, the EC2 metadata becomes so slow that you can wait 12-13 minutes before the instance comes up. So bye-bye EC2, welcome config drive!
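For reference, the config drive can be requested per-instance with `nova boot --config-drive true`, or forced globally in nova.conf (the option below is from memory; check the reference for your release — newer releases take a boolean instead of "always"):

```ini
# /etc/nova/nova.conf on the compute nodes
[DEFAULT]
# Attach a config drive to every instance instead of relying on
# the EC2/OpenStack metadata HTTP service at boot.
force_config_drive = always
```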

2013-08-12 03:52:46 -0600 received badge  Notable Question (source)
2013-07-30 16:38:35 -0600 received badge  Nice Question (source)
2013-07-24 16:30:22 -0600 received badge  Famous Question (source)
2013-07-23 18:54:01 -0600 received badge  Popular Question (source)
2013-07-15 14:56:36 -0600 received badge  Student (source)
2013-07-12 18:50:05 -0600 received badge  Notable Question (source)
2013-07-04 07:37:05 -0600 received badge  Popular Question (source)
2013-07-02 04:35:04 -0600 asked a question What causes Metadata service to be very slow?

Hello,

on a fresh Grizzly deployment, our metadata service is very slow:

$ time curl http://169.254.169.254/2009-04-04/meta-data/instance-id
i-0000000d
real    0m16.295s
user    0m0.012s
sys 0m0.000s

This causes a very long boot time for instances (like 4-5 minutes).

At the moment we are not injecting any file or script, nor are we using cloud-init. Nothing in the logs seems to explain this issue.

What could be the cause of such long delays?

Thanks in advance, Cheers

2013-06-27 03:12:58 -0600 commented answer Missing OS-EXT-SRV-ATTR from a nova show

Thanks a lot for pointing this out! As always, I test things wrongly, but somehow I manage to deploy them correctly ;)

2013-06-27 03:05:20 -0600 received badge  Supporter (source)
2013-06-26 03:56:09 -0600 asked a question Missing OS-EXT-SRV-ATTR from a nova show

Hello,

we are currently deploying OpenStack Grizzly. Everything is up and running fine, but the OS-EXT-SRV-ATTR attributes are not shown when we start instances.

Here is a nova show output:

# nova show 6e3644f1-eddf-4c55-8a94-1834e106c348
+-----------------------------+----------------------------------------------------------+
| Property                    | Value                                                    |
+-----------------------------+----------------------------------------------------------+
| status                      | ACTIVE                                                   |
| updated                     | 2013-06-24T13:07:15Z                                     |
| OS-EXT-STS:task_state       | None                                                     |
| key_name                    | ticloud-key                                              |
| image                       | raring-amd64 (7199fad3-22d9-41ad-96e8-712b443760c3)      |
| hostId                      | 03a189f14aa730bb93663ecb6c768b3523f70e14409e285a57449e39 |
| OS-EXT-STS:vm_state         | active                                                   |
| flavor                      | m1.small (2)                                             |
| id                          | 6e3644f1-eddf-4c55-8a94-1834e106c348                     |
| security_groups             | [{u'name': u'default'}]                                  |
| user_id                     | 0cad9ab8b83641a3a5b19b42a7cf0301                         |
| name                        | tms_prod                                                 |
| created                     | 2013-06-24T13:06:53Z                                     |
| tenant_id                   | ec2ac4cff7ea449987699e9a871fd3cb                         |
| OS-DCF:diskConfig           | MANUAL                                                   |
| metadata                    | {}                                                       |
| accessIPv4                  |                                                          |
| accessIPv6                  |                                                          |
| progress                    | 0                                                        |
| OS-EXT-STS:power_state      | 1                                                        |
| OS-EXT-AZ:availability_zone | nova                                                     |
| tms_prod_net network        | 10.10.20.3, 192.168.10.152                               |
| config_drive                |                                                          |
+-----------------------------+----------------------------------------------------------+

What are we missing to enable the generation of OS-EXT-SRV-ATTR attributes?
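For what it's worth, the OS-EXT-SRV-ATTR:* fields (host, hypervisor_hostname, instance_name) are normally returned only to requests made with admin credentials, which matches the "testing wrongly" comment above. A quick check, sketched (the openrc path is deployment-specific, and the function is my own wrapper):

```shell
# Sketch: query the server as admin so the extended attributes
# are included. Wrapped in a function so nothing runs without a
# real cloud; call it with a server UUID.
show_extended_attrs() {
  server_id="$1"
  # Load admin credentials first (path is an assumption).
  . /root/admin-openrc.sh
  nova show "$server_id" | grep 'OS-EXT-SRV-ATTR'
}
```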

Thanks in advance, Cheers