
Michael Steffens's profile - activity

2017-06-30 07:46:19 -0600 received badge  Nice Question (source)
2015-10-13 09:43:16 -0600 received badge  Notable Question (source)
2015-10-12 23:32:20 -0600 received badge  Popular Question (source)
2015-10-12 11:10:40 -0600 asked a question How to authenticate to HTTP proxy with OpenStack Python CLIs?

I need to communicate with OpenStack API services (keystone, nova, ...) via an HTTP proxy requiring basic authentication.

Setting the environment variable

http_proxy=http://proxyuser:proxypass@proxy.mycompany.com:8080

doesn't do the trick, however. As I can observe using tcpdump, the Python client actually does route traffic to the proxy address, but it does not include any "Proxy-Authorization" headers. So all my attempts get blocked, for example:

$ keystone tenant-list
Authorization Failed: Proxy Authentication Required (HTTP 407)
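For reference, this is the header that never shows up on the wire. A quick sketch of how a client would have to construct it (the helper name is mine, not part of any OpenStack client):

```python
import base64


def proxy_auth_header(user, password):
    """Build the Basic Proxy-Authorization header a proxy-aware
    client would be expected to send (illustration only)."""
    token = base64.b64encode("{0}:{1}".format(user, password).encode()).decode()
    return {"Proxy-Authorization": "Basic " + token}


headers = proxy_auth_header("proxyuser", "proxypass")
# headers == {'Proxy-Authorization': 'Basic cHJveHl1c2VyOnByb3h5cGFzcw=='}
```

Since tcpdump shows the traffic reaching the proxy without this header, the client's HTTP layer apparently ignores the credentials embedded in the http_proxy URL rather than failing to route through the proxy at all.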
2015-09-24 14:53:34 -0600 received badge  Popular Question (source)
2015-09-24 14:53:34 -0600 received badge  Famous Question (source)
2015-09-24 14:53:34 -0600 received badge  Notable Question (source)
2015-07-29 10:57:14 -0600 asked a question "ImportError: No module named config": Installing virtualenv sahara on Ubuntu 14.04

I'm facing the same issue as the one answered in https://ask.openstack.org/en/question.... As recommended in the current Sahara installation guide, however, I'm using a Python virtual environment. Trying to set up the DB then results in pretty much the same failure:

root@cloudcont1:/opt# sahara-venv/bin/sahara-db-manage --config-file sahara-venv/etc/sahara.conf upgrade head
Traceback (most recent call last):
  File "sahara-venv/bin/sahara-db-manage", line 7, in <module>
    from sahara.db.migration.cli import main
  File "/opt/sahara-venv/local/lib/python2.7/site-packages/sahara/db/__init__.py", line 20, in <module>
    from sahara.db.api import *  # noqa
  File "/opt/sahara-venv/local/lib/python2.7/site-packages/sahara/db/api.py", line 34, in <module>
    from oslo.config import cfg
ImportError: No module named config

However, reinstalling doesn't fix anything, as

root@cloudcont1:/opt# sahara-venv/bin/pip install oslo.config

results in package files being placed into

sahara-venv/lib/python2.7/site-packages/oslo_config/

Please note the underscore separating "oslo" and "config", rather than the dotted "oslo.config" namespace layout the import in the traceback expects.

This is probably the reason why the module isn't found, isn't it? But why does the virtualenv's pip produce it, and how can I fix it?
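Update: around this time the oslo libraries were moving from the oslo.* namespace-package layout to flat names like oslo_config (with a compatibility shim in some releases), which would explain the directory I'm seeing. A generic probe (my own helper, not Sahara code; Python 3 spelling, the 2.7 virtualenv above would use imp.find_module for the same purpose) shows which spelling a given environment actually provides:

```python
import importlib.util


def first_importable(candidates):
    # Return the first module name from `candidates` that is importable
    # in this environment, or None. Used here to check whether oslo.config
    # is exposed under the legacy namespace path ("oslo.config") or only
    # under the newer flat name ("oslo_config").
    for name in candidates:
        try:
            if importlib.util.find_spec(name) is not None:
                return name
        except ImportError:
            # A missing parent package ("oslo") raises rather than
            # returning None; treat it as "not importable".
            continue
    return None


first_importable(["oslo.config", "oslo_config"])
```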

2015-05-19 22:28:56 -0600 received badge  Good Question (source)
2015-02-23 07:55:38 -0600 commented answer How to isolate tenant networks connected to a shared router?

I have appended example commands showing how to reproduce the odd configuration. I was kind of surprised myself that it works. If we could (re)establish isolation between tenant networks while sharing the router, that would be really great.

2015-02-23 06:19:09 -0600 commented answer How to isolate tenant networks connected to a shared router?

Quite straightforward to set up. The router was created in the admin tenant. Tenant (sub)nets were also created by a user logged into the admin tenant, but were placed in their respective tenants using the --tenant-id option for create commands. You can then add router interfaces to these subnets.

2015-02-23 04:21:42 -0600 received badge  Nice Question (source)
2015-02-18 02:05:19 -0600 commented answer Is it possible to run a tenant router without consuming a floating ip as gateway address?

My wording was quite sloppy, sorry, referring to the external router interface as the gateway. I understand your answer as: the scenario where the router needs its own floating IP is outbound traffic from instances that can't provide a floating IP of their own. Otherwise it could theoretically do without one, right?

2015-02-13 05:56:13 -0600 received badge  Famous Question (source)
2015-02-13 02:41:15 -0600 received badge  Popular Question (source)
2015-02-13 02:41:15 -0600 received badge  Notable Question (source)
2015-02-12 06:27:07 -0600 asked a question How to isolate tenant networks connected to a shared router?

Is it possible to isolate tenant networks connected to a shared router? The idea is to minimize the number of routers and router IPs allocated in the external subnet, while still blocking any traffic between tenant networks.

I have tried a configuration as shown in the diagram below

(image: "Shared router wi…" — diagram not preserved)

ext-net, ext-subnet, and the shared router are assigned to the admin or service tenant. The tenant networks and subnets belong to their respective tenants. I am aware that this configuration can't separate tenants at the network-namespace level, since they all share the router's namespace, so tenant CIDRs must not overlap. This would be the price to pay for sharing a router and its external IP.

The setup works fine. Tenant instances can be attached to their respective networks, can get floating IPs assigned, and can communicate both inbound and outbound. Unfortunately, they can also talk to private IPs in the other tenant, just as if they were connected to a shared network, with the exception that broadcasts are probably not propagated between them.

Is there a possibility to block traffic between tenant networks entirely? The solution should not depend on VLANs being configured on physical switches. Nor should it depend on security groups applied to instance interfaces (which would be available in a shared network as well), to avoid tenant "cross-talk" caused by tenant-internal misconfiguration.

Example: the commands below create a configuration as described here, assuming

  • there is an external network named "ext-net" (created according to installation guide)
  • there are two tenants named "tenant-a" and "tenant-b"
  • Environment variables OS_USERNAME and OS_TENANT_NAME are set to log into the admin tenant, with the user configured as admin (again, as described in the installation guide)

then run

neutron router-create shared-router
neutron router-gateway-set shared-router ext-net

tenant_a_id=$(keystone tenant-list|awk '/tenant-a/ {print $2}')
neutron net-create --tenant-id $tenant_a_id tenant-a-net
neutron subnet-create --tenant-id $tenant_a_id --name tenant-a-subnet tenant-a-net 192.168.1.0/24
neutron router-interface-add shared-router tenant-a-subnet

tenant_b_id=$(keystone tenant-list|awk '/tenant-b/ {print $2}')
neutron net-create --tenant-id $tenant_b_id tenant-b-net
neutron subnet-create --tenant-id $tenant_b_id --name tenant-b-subnet tenant-b-net 192.168.2.0/24
neutron router-interface-add shared-router tenant-b-subnet
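One unmanaged stopgap that comes to mind (not what I'm after, since Neutron knows nothing about it) would be dropping cross-tenant traffic by hand in the shared router's namespace on the network node. Sketch only, under the CIDR assumptions above:

```shell
# Unmanaged stopgap sketch: drop traffic between the two tenant subnets
# inside the shared router's network namespace. Neutron does not manage
# these rules, so this is an illustration, not a recommended fix.
router_id=$(neutron router-list | awk '/shared-router/ {print $2}')
sudo ip netns exec "qrouter-${router_id}" \
    iptables -I FORWARD -s 192.168.1.0/24 -d 192.168.2.0/24 -j DROP
sudo ip netns exec "qrouter-${router_id}" \
    iptables -I FORWARD -s 192.168.2.0/24 -d 192.168.1.0/24 -j DROP
```

Rules like these would be silently lost whenever the L3 agent rebuilds the router, which is exactly why I'm hoping for a Neutron-native answer.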
2015-02-12 03:53:50 -0600 commented answer Is it possible to run a tenant router without consuming a floating ip as gateway address?

My wording was quite sloppy, sorry. With "gateway address" I meant the external router interface. The scenario where the router needs its own floating IP is outbound traffic from instances that can't provide a floating IP of their own. Otherwise it could theoretically do without one, right?

2015-02-12 01:47:09 -0600 received badge  Famous Question (source)
2015-02-12 01:47:09 -0600 received badge  Notable Question (source)
2015-02-11 13:56:29 -0600 received badge  Popular Question (source)
2015-02-11 10:34:14 -0600 asked a question Is it possible to run a tenant router without consuming a floating ip as gateway address?

We are attempting to expose multiple separated tenant networks via an external subnet, having only a very tight pool of floating IPs available. So it's a real pain that every tenant router allocates its own gateway address from this pool, without that address ever being used in any actual routing configuration.

Tenant routers, configured in a way as described in http://docs.openstack.org/trunk/insta... , have to provide each floating IP individually on their gateway interface anyway, in order to perform NAT to the corresponding instance private addresses.

Assuming we will never expose the tenant networks directly via the routers (which would imply the need for a gateway address), but always perform inbound connections using the instances' floating IPs and NAT, is there any way to configure neutron not to allocate gateway IPs?

2015-02-11 10:02:35 -0600 received badge  Famous Question (source)
2015-02-04 11:21:13 -0600 received badge  Notable Question (source)
2015-02-04 10:05:59 -0600 received badge  Scholar (source)
2015-02-04 04:02:00 -0600 commented answer Openstack global vs. project roles: What do they mean?

Thanks a lot for the clarification. Does this imply that there is no special meaning to the tenant named "admin" (created in the course of working through the installation guide)? Or is anything owned by this tenant, or why else should I add a user to this tenant to grant admin rights?

2015-02-02 20:40:07 -0600 received badge  Popular Question (source)
2015-02-02 04:23:37 -0600 asked a question Openstack global vs. project roles: What do they mean?

Could someone please shed some light on the role concept of OpenStack? I'm confused by contradictory statements in the admin guide, strange behaviour of my Icehouse installation, and various forum responses on this topic.

The admin guide states that role assignments are always made in project/tenant scope. I would conclude that an individual assignment should also only take effect in that scope. The admin guide does not mention any concept of global roles.

However, the keystone API description does:

GET v2.0/users/{userId}/roles?serviceId=string List global roles for a user.

but https://ask.openstack.org/en/question... claims this actually doesn't work.

To make things worse, here is what I observe with my Keystone setup, using Horizon's Identity panel. Let "michael" be my user name, with the following project and role assignments, made by the admin user (who in turn is admin in the admin tenant):

  • demo1: _member_, admin
  • demo2: _member_
  • demo3: (no roles assigned)

Now, logging in as user michael, I get demo1 and demo2 in the context selection box at the top. That's fine, as I'm a member of only those two. Selecting demo2, I don't get any Identity panel, which also looks plausible, as I'm not admin in that project.

However, if I select demo1, I do get the Identity panel, and it presents me with all projects, including demo3! Plus, it allows me to assign myself to demo3, and to grant myself any role I like in demo3 and demo2!!

That's disturbing. It looks like I acquired global admin power across all projects just by being assigned the admin role in one of them? Is that intended behavior, or is it a bug?

While we're at it: what is the purpose of the "admin" tenant? Is being a member or an admin of this tenant supposed to confer any more power than being admin of an ordinary project? And if so, what extra power, and how is that accomplished?
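If I read Keystone's default policy right, this behaviour would be explained by admin-ness being checked as a bare role, without any project scoping. An illustrative excerpt of a stock policy.json (exact defaults may vary by release):

```json
{
    "admin_required": "role:admin or is_admin:1",
    "identity:list_projects": "rule:admin_required"
}
```

Since the project the token is scoped to never enters the check, admin in demo1 would mean admin everywhere, which matches what I'm seeing.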

Thanks a lot in advance! Michael

2014-11-05 10:02:29 -0600 commented answer Juno Authentication to Horizon error - too many connections

Same problem here, but the number of connections just keeps growing until it hits whatever max_connections limit is configured. The issue seems rather to be that connections are never closed.

2014-09-02 04:19:39 -0600 received badge  Famous Question (source)
2014-09-02 04:19:39 -0600 received badge  Notable Question (source)
2014-08-10 21:16:44 -0600 received badge  Teacher (source)
2014-08-08 15:42:52 -0600 received badge  Student (source)
2014-08-08 15:26:14 -0600 received badge  Popular Question (source)
2014-08-06 07:41:27 -0600 received badge  Supporter (source)
2014-08-06 07:19:35 -0600 answered a question external network configuration on icehouse ver.

You don't mention what exactly the failure is. But if you have set up the external and private (tenant) networks according to the example in the installation guide, the external network belongs to the admin tenant, while the private network should be owned by a non-admin tenant.

A non-admin user then can't attach instances directly to the external network, only to their own tenant network.

2014-08-06 03:44:24 -0600 asked a question Why is the general purpose architecture considered inappropriate for increased security needs?

The OpenStack Architecture Guide states in the section about the general purpose architecture:

"For cloud domains that require increased computer security, network security, or information security, general purpose cloud is not considered an appropriate choice."

It doesn't explain why, or what this architecture's shortcomings are. I also didn't find any hint in the other compute-, storage-, or network-focused sections as to why those would be superior. Their security aspects are just described in more detail.

Could anyone elaborate on the inappropriateness of the general purpose architecture, and what to watch out for?

2014-07-31 04:43:43 -0600 answered a question Icehouse: How to boot a snapshot from a running instance

It's a race condition triggered when nova converts base images after downloading them from glance: see bug #1350766

2014-07-24 06:00:38 -0600 received badge  Enthusiast
2014-07-23 10:29:20 -0600 edited question Icehouse: How to boot a snapshot from a running instance

Update 3: Again the question needs to be rephrased a little. It turns out that my "unbootable" snapshots in QCOW2 format do boot easily, just not inside nova. In fact, if I run

sudo virt-install --connect qemu:///system --name SomeName --ram=1024 --disk path=snapshotname.qcow2,format=qcow2 --import --virt-type kvm --graphics vnc

the VNC window pops up with the OS booting like a charm, using the very same image file snapshotname.qcow2 that nova refuses to recognize as a bootable disk. On the other hand, omitting the format=qcow2 spec for the image file reproduces the boot failure, which is in line with virt-install's documentation. Quoting the man page:


format

Image format to be used if creating managed storage. For file volumes, this can be 'raw', 'qcow2', 'vmdk', etc. See format types in http://libvirt.org/storage.html for possible values. This is often mapped to the driver_type value as well.

With libvirt 0.8.3 and later, this option should be specified if reusing an existing disk image, since libvirt does not autodetect storage format as it is a potential security issue. For example, if reusing an existing qcow2 image, you will want to specify format=qcow2, otherwise the hypervisor may not be able to read your disk image.


My libvirt version is 1.2.2. What is the difference between nova booting an image and virt-install doing it? Does anyone else see this issue?

Update 2: The issue seems to be unrelated to image down- or uploading, except for the partial workaround described below. I can also reproduce it by attempting to launch a snapshot directly after creating it from the running instance. Rephrasing the question title accordingly. Any ideas?

I'm trying to create a qcow2 disk image file from a snapshot that can be used to create a bootable image in OpenStack again:

  1. Created an image from stock trusty-server-cloudimg-amd64-disk1.img.

  2. The file command identifies the format as "QEMU QCOW Image (v2), 2361393152 bytes"

  3. Booted instance (successful)

  4. Logged in and performed additional installation/configuration (successful: instance still running, reboots successfully)

  5. Shut down instance

  6. Create snapshot

  7. Download snapshot using glance image-download snapname --file snapname.qcow2 --progress

The resulting file is now identified as "QEMU QCOW Image (unknown version)" without any size information. Consequently, after reimporting the file using

glance image-create --name newname --file snapname.qcow2 --disk-format qcow2 --container-format bare --is-public True --progress

the new image's instance doesn't boot. On startup the instance complains "Boot failed: not a bootable disk".

I can, however, mount and browse the image file with the guestmount command. It also has the expected size and is populated with the expected contents. It just seems to have lost its boot flag!

Is the procedure described supposed to work? What is the exact difference between the original and my downloaded file? And finally, how do I get an image file that can boot?
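To pin down the difference between the original and the downloaded file beyond what the file command prints, the qcow2 header can be inspected directly: the first 8 bytes are the magic "QFI\xfb" followed by a big-endian 32-bit version, and file reports "unknown version" when that field looks implausible. A small helper of my own (nothing OpenStack-specific):

```python
import struct

QCOW_MAGIC = b"QFI\xfb"


def qcow_version(path):
    # Read the qcow header: a 4-byte magic followed by a big-endian
    # 32-bit version number. Returns None if the file is not qcow at all.
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != QCOW_MAGIC:
        return None
    return struct.unpack(">I", header[4:8])[0]


# qcow_version("trusty-server-cloudimg-amd64-disk1.img") should report 2,
# matching the "QEMU QCOW Image (v2)" output of file; comparing it against
# qcow_version("snapname.qcow2") should show where the download went wrong.
```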

Additional observation:

  • it doesn't make a difference whether the instance is running or shut down when the snapshot ...
