
devstack, multi-tenant, and metadata service

asked 2014-07-24 20:18:58 -0600 by slogan621

updated 2014-07-25 17:22:50 -0600

I have a multi-host, stable/havana era cluster (based on git checkout 1b9859ca78e7ec2b751026dfcb1327fe7076c4f8 of devstack) with one controller and one compute node (the controller also runs compute, so VMs can be scheduled either on the controller node or the compute node).

If I launch VMs using the demo tenant created by devstack, I have no problem getting VMs to reach the metadata service to obtain instance IDs. All is good, and I worked out the kinks of single tenant quite a while ago.

Where I run into problems is multi-tenant. If I try creating VMs using the admin user/tenant, or if I create a second user/tenant with roles identical to "demo" (I call it demo2), the result is the same: the metadata service cannot be reached. In the console log of the cirros VM, I see something like this:

checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 9.14. request failed
failed 2/20: up 14.32. request failed

Surely someone has tried out multi-tenancy with devstack. The keystone API descriptions are clear enough, and I am basically creating the demo2 tenant and user with the same steps as devstack's lib/keystone script. I get no errors from keystone and all the VMs launch happily, so I see no reason why demo2 wouldn't work if it is created exactly the same way as demo. For that matter, I'm pretty stumped as to why the admin user/tenant also can't reach metadata. The only explanation I can come up with is that metadata is somehow bound to the demo tenant, but that would seem silly given that the metadata service should surely be usable by multiple tenants, not just one.

Here is the core of the code I run after stack.sh (first on the controller node, then on the compute node) to create the demo2 tenant and user:

...

keystone --os-username admin --os-password password tenant-create --name demo2 --description 'demo2'
demo2=`keystone --os-username admin --os-password password \
      tenant-list | grep " demo2 " | cut -c3-34`
invis=`keystone --os-username admin --os-password password \
      tenant-list | grep "invis" | cut -c3-34`
echo "demo2 tenant is $demo2"
echo "invis tenant is $invis"
keystone --os-username admin --os-password password user-create --name demo2 --pass password --email demo2@example.com
admin_role=`keystone --os-username admin --os-password password \
      role-list | grep " admin " | cut -c3-34`
member_role=`keystone --os-username admin --os-password password \
      role-list | grep " Member " | cut -c3-34`
another_role=`keystone --os-username admin --os-password password \
      role-list | grep " anotherrole " | cut -c3-34`
echo "Member role is $member_role"
echo "Admin role is $admin_role"
echo "Another role is $another_role"
demo2user=`keystone --os-username admin --os-password password \
      user-list | grep " demo2 " | cut -c3-34`
adminuser=`keystone --os-username admin --os-password password \
      user-list | grep " admin " | cut -c3-34`
echo "demo2 user is $demo2user"
keystone --os-username admin --os-password password user-role-add --user $demo2user --role $member_role --tenant $demo2
keystone --os-username admin --os-password password user-role-add --user $adminuser --role $admin_role --tenant $demo2
keystone --os-username admin --os-password password user-role-add --user $demo2user --role $another_role --tenant $demo2
keystone --os-username admin --os-password password user-role-add --user $demo2user --role $member_role --tenant $invis

...

The code from devstack that inspired the above is:

 create_keystone_accounts() {

    # admin
    ADMIN_TENANT=$(keystone ...
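To sanity-check the result, here is the kind of comparison I would run afterwards (just a sketch using the same old-style keystone CLI and admin credentials as above; adjust for your environment) to confirm that demo2 ended up with the same role assignments as demo:

# compare role assignments for the two users/tenants
keystone --os-username admin --os-password password \
      user-role-list --user demo --tenant demo
keystone --os-username admin --os-password password \
      user-role-list --user demo2 --tenant demo2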

2 answers


answered 2014-08-20 11:22:51 -0600 by rossella-o

Hello Slogan,

When you create a VM using another tenant, is the network that the VM uses connected to a router? If not, you should set enable_metadata_network = True in dhcp_agent.ini. When no router is connected, the DHCP agent has to take care of spawning the metadata proxy so the VM can reach the metadata server; usually the l3 agent does that, but without a router it can't. That's why you need to set enable_metadata_network = True.
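For reference, the relevant section of dhcp_agent.ini (a minimal sketch; the file usually lives at /etc/neutron/dhcp_agent.ini, adjust the path for your deployment) would look like this:

[DEFAULT]
# Let the DHCP agent spawn a metadata proxy for networks with no router attached
enable_metadata_network = True
# Serve metadata on isolated networks through the DHCP namespace
enable_isolated_metadata = True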


answered 2014-09-08 16:40:19 -0600 by slogan621

updated 2014-09-08 17:03:39 -0600

Ok, well, I seem to have an answer after a lot of brute force experimentation.

The way I solved this was by adjusting the localrc files on both the controller and the compute node, and by changing how I create tenants/users and launch VMs. The key was to stop running every command-line app authenticated as "admin" and instead authenticate as the user (and associated tenant) the command applies to, for example when launching VMs.
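For example, launching a VM authenticated as demo2 instead of admin looks something like this (a sketch; the auth URL points at my controller, and the flavor/image names are placeholders for whatever your devstack registered):

export OS_USERNAME=demo2
export OS_TENANT_NAME=demo2
export OS_PASSWORD=password
export OS_AUTH_URL=http://192.168.3.3:5000/v2.0
nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec demo2-vm1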

Here is my new controller local.conf. The key changes were to disable the q-l3 service and to remove MULTI_HOST=True (that setting really confused me: it doesn't have anything to do with running on more than one cluster node, but rather with where services are available, and I want all services to run only on the controller).

[[local|localrc]]
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-meta
enable_service neutron
disable_service q-l3

HOST_IP=192.168.3.3
FLAT_INTERFACE=eth4
FIXED_RANGE=10.4.128.0/24
NETWORK_GATEWAY=10.4.128.1
#FLOATING_RANGE=192.168.3.128/25
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog
EXTRA_OPTS=(metadata_host=$HOST_IP)

Q_ML2_TENANT_NETWORK_TYPE=vxlan
SCHEDULER=nova.scheduler.filter_scheduler.FilterScheduler
IMAGE_URLS="http://cloud-images.ubuntu.com/quantal/current/quantal-server-cloudimg-amd64-disk1.img"
Q_PLUGIN=ml2
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=False

Q_DHCP_EXTRA_DEFAULT_OPTS=(enable_metadata_network=True enable_isolated_metadata=True)

Note that I incorporated rossella-o's suggestion by passing args via Q_DHCP_EXTRA_DEFAULT_OPTS. In addition to that suggestion I also enabled enable_isolated_metadata, because the comments in the .conf file said I needed it enabled as well. I'm going to +1 the suggestion since (as best I can tell) it is a contributing factor, but I really had other dragons to slay before I could see things work.
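One way to check that this actually took effect (a rough check; the qdhcp namespace name includes the network's UUID, so yours will differ) is to look for the DHCP namespaces and the metadata proxy process on the controller:

# list the DHCP namespaces created by the agent
sudo ip netns | grep qdhcp
# the proxy should be running for the isolated network
ps -ef | grep neutron-ns-metadata-proxy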

For the compute node (note that I removed MULTI_HOST=True here as well):

[[local|localrc]]
HOST_IP=192.168.3.4
FLAT_INTERFACE=eth4
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
NETWORK_GATEWAY=10.4.128.1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog
DATABASE_TYPE=mysql
SERVICE_HOST=192.168.3.3
MYSQL_HOST=192.168.3.3
RABBIT_HOST=192.168.3.3
GLANCE_HOSTPORT=192.168.3.3:9292

Q_PLUGIN=ml2

# Timeouts
ACTIVE_TIMEOUT=120
ASSOCIATE_TIMEOUT=60
BOOT_TIMEOUT=120
SERVICE_TIMEOUT=120
OFFLINE=True

ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGES=1:1000
ENABLE_TENANT_VLANS=False

Q_ML2_TENANT_NETWORK_TYPE=vxlan

ENABLED_SERVICES=n-cpu,rabbit,neutron,n-api,q-agt

EXTRA_OPTS=(metadata_host=$SERVICE_HOST)

The next (and I think major) change was to create separate tenants and users and to authenticate as the specific user when running operations on that user's behalf (as opposed to what I was doing before, which was passing os-username = admin and os-password = admin to the command-line apps as arguments). I also needed to do the following:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 2> /dev/null
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 2> /dev/null
nova secgroup-add-rule default udp 1 65535 0.0.0.0 ...
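Since security groups are per-tenant, those rules also have to be added under each tenant's own credentials. A sketch of the same rules run as demo2 (same placeholder password as the rest of my setup):

nova --os-username demo2 --os-password password --os-tenant-name demo2 \
    secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova --os-username demo2 --os-password password --os-tenant-name demo2 \
    secgroup-add-rule default tcp 22 22 0.0.0.0/0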

Comments

I should add that the error message related to the inability to reach 169.254.169.254 was "no route". The route table in one of the VMs now shows that the problem has been fixed:

ci-info: |   2   | 169.254.169.254 | 10.0.0.4 | 255.255.255.255 |    eth0   |  U
slogan621 (2014-09-08 16:48:07 -0600)
