don's profile - activity

2019-01-07 15:20:15 -0600 edited question keystone_authtoken and duplicate option auth-url

I'm really confused by what config is needed in nova for authentication, specifically to neutron. I have the config below, and I'm getting a 'DuplicateOptError: duplicate option: auth-url'.

I'm referring to http://www.jamielennox.net/blog/2015/...

In general keystone is working: I can create tokens, do a nova list, a glance list, etc. But when I try to do $ openstack server create --nic net-id=d474f8db-171e-463d-9616-ca277c73e978 --flavor m1.tiny --image cirros n, I get errors as nova tries to get a token to talk to neutron. I don't know what I'm doing wrong; can someone suggest?

This is on 'master', so the code is from git somewhere between liberty and mitaka.

[keystone_authtoken]
auth_plugin = v3password
auth_url = https://keystone/v3
memcached_servers = localhost:11211
project_name = service
user_domain_name = default
project_domain_name = default
user-name = nova
username = nova
password = MYPASS

[neutron]
auth_section = keystone_authtoken
region_name = RegionOne
service_metadata_proxy = true
metadata_proxy_shared_secret = MYSECRET
url = https://neutron
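
For reference, a hedged sketch of a shape that often sidesteps this kind of duplicate registration (untested here): give [neutron] its own auth options rather than pointing auth_section back at keystone_authtoken. Values are copied from the question above:

[neutron]
auth_plugin = v3password
auth_url = https://keystone/v3
username = nova
password = MYPASS
project_name = service
user_domain_name = default
project_domain_name = default
region_name = RegionOne
url = https://neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = MYSECRET
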
2019-01-07 15:11:14 -0600 edited answer vxlan peers not being created (mitaka openvswitch agent native driver)

You wrote: Bridge br-tun Port "vxlan-0a640a97" Interface "vxlan-0a640a97" type: vxlan options: {df_default="true", in_key=flow, local_ip="xx.xx.xx.xx", out_key=flow, remote_ip="10.100.10.151"}

VTEPs across the whole landscape are usually located on the same network, say 10.100.10.0/24. If, for instance, the eth1 interfaces are
supposed to carry the VTEPs, the corresponding IPs are supposed to be assigned before deployment starts (at least on RDO).
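
A quick sanity check along those lines (hedged; eth1 and the config path are just examples for a typical Mitaka install):

# confirm the intended VTEP address actually exists on the node before the agent starts
ip -4 addr show eth1
# and that it matches what the agent is told to use
grep -E '^local_ip' /etc/neutron/plugins/ml2/openvswitch_agent.ini
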
2018-11-23 10:17:47 -0600 received badge  Self-Learner (source)
2018-11-22 05:16:14 -0600 received badge  Famous Question (source)
2018-09-27 14:05:17 -0600 received badge  Good Answer (source)
2018-07-10 07:13:57 -0600 received badge  Notable Question (source)
2018-07-10 07:13:57 -0600 received badge  Popular Question (source)
2018-06-24 01:23:06 -0600 received badge  Notable Question (source)
2018-05-24 21:20:49 -0600 received badge  Popular Question (source)
2018-05-17 13:28:12 -0600 answered a question Magnum enable cloud-controller-manager

OK, for the record, the answer is, you need:

[trust]
cluster_user_trust=True

in your magnum config. I'm using kolla, so I added it via the config override. Without it, magnum's kubernetes doesn't work with openstack.
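
With kolla-ansible that override is typically a service-specific snippet; a minimal sketch, assuming the standard /etc/kolla/config override path:

mkdir -p /etc/kolla/config
cat >> /etc/kolla/config/magnum.conf <<'EOF'
[trust]
cluster_user_trust = True
EOF
# then push it out, e.g.:
kolla-ansible reconfigure --tags magnum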

2018-05-17 09:06:32 -0600 commented question Magnum enable cloud-controller-manager

It seems enable_cluster_user_trust is not set; it's not in globals.yml. And it's required.

2018-05-15 20:34:10 -0600 asked a question Magnum enable cloud-controller-manager

I see that magnum creates a file /etc/kubernetes/kube_openstack_config, but this doesn't seem to be used. I'm finding that the external-ip stays pending forever on services created using e.g. https://raw.githubusercontent.com/kub...

It seems that there should be a cloud-controller-manager running (https://kubernetes.io/docs/tasks/admi...), presumably with some access to that file.

The kube_openstack_config seems to be set up properly, and heat's part is working (e.g. creating instances, creating load-balancers for the API). But the post-heat bits are not working.

External cloud managers were introduced in Kubernetes 1.6, and I'm on 1.9.3.

I'm not clear on whether magnum should have enabled this, or whether I am to do it manually and feed it that file.

https://github.com/kubernetes/cloud-p... has the docs on the subject I guess.

Does anyone have any input? Is anyone actually using magnum?

From looking at the code, it seems like TRUST_ID needs to be set (and it's empty, so the bash scripts on the master/minions have not populated it). I do have:

magnum-api/magnum.conf:trustee_domain_admin_password = XXXX                                                                          
magnum-api/magnum.conf:trustee_domain_admin_name = magnum_trustee_domain_admin                                                                                           
magnum-api/magnum.conf:trustee_domain_name = magnum

I'm not sure if that is the trust_id referred to. If I do a 'trust list', I see two (both for heat). Was one of them meant to be passed?

I also see https://bugs.launchpad.net/magnum/+bu..., and https://github.com/kubernetes/cloud-p... suggests that we are still using the in-built provider.

What is the TRUST_ID referred to in the script? How would I populate it?
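
A hedged way to see whether a trust actually made it onto the cluster (the exact field name inside kube_openstack_config can vary by release):

# on a master node:
grep -i trust /etc/kubernetes/kube_openstack_config
# from the controller, list the trusts and see who they belong to:
openstack trust list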

2018-05-15 17:14:40 -0600 answered a question Magnum not creating serviceaccount 'default'?

OK, found the answer. You must have the cert_manager_api = true label.

openstack coe cluster template create k8s-atomic --image fedora-atomic --keypair default --external-network public --dns-nameserver 172.16.0.1 --flavor m1.small --docker-storage-driver overlay2 --volume-driver cinder --network-driver flannel --coe kubernetes --labels cert_manager_api=true

If not, the controller doesn't have a ca.key file, and doesn't start. But you get no obvious errors.
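
A quick way to verify this on an existing cluster (hedged):

openstack coe cluster template show k8s-atomic -c labels
# on the master node, the signing key the controller-manager is pointed at:
ls -l /etc/kubernetes/certs/ca.key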

I don't know why it's not on by default if it's needed, but, well...

2018-05-14 20:49:52 -0600 asked a question Magnum not creating serviceaccount 'default'?

I am running Queens. I have done these 3 commands (using latest fedora-atomic as image):

 openstack image create --min-disk 6 --disk-format raw --container-format bare --public --property os_type=linux --property os_distro='fedora-atomic' --file fedora-atomic-latest.raw fedora-atomic

openstack coe cluster template create k8s-atomic --image fedora-atomic --keypair default --external-network public --dns-nameserver 172.16.0.1 --flavor m1.small --docker-storage-driver overlay2 --volume-driver cinder --network-driver flannel --coe kubernetes

openstack coe cluster create k8s --cluster-template k8s-atomic --node-count 3 --master-count 1

After some time, all nodes come up, they can ping each other, the kubernetes services are running on them (according to systemctl status). OK, great. But when I try to use it, I get an error:

kubectl run -it --rm --restart=Never busybox --image=busybox sh
Error from server (Forbidden): pods "busybox" is forbidden: error looking up service account default/default: serviceaccount "default" not found

I'm running python-magnumclient 2.9.0, which should work with RBAC.

Does anyone have any suggestions?

Things seem sane:

kubectl get nodes
NAME                        STATUS    ROLES     AGE       VERSION
k8s-brl5bjk2yvoz-minion-0   Ready     <none>    3d        v1.9.3
k8s-brl5bjk2yvoz-minion-1   Ready     <none>    3d        v1.9.3
k8s-brl5bjk2yvoz-minion-2   Ready     <none>    3d        v1.9.3

$ kubectl cluster-info
Kubernetes master is running at https://10.255.0.17:6443
Heapster is running at https://10.255.0.17:6443/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at https://10.255.0.17:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

$ kubectl get serviceaccounts --all-namespaces
NAMESPACE     NAME                   SECRETS   AGE
kube-system   coredns                1         3d
kube-system   heapster               1         3d
kube-system   kubernetes-dashboard   1         3d

It's just that there is no serviceaccount 'default'. I checked on the kube-apiserver and the controller-manager; both have the ServiceAccount admission-control entry, and a key:

# cat /etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--bind-address=0.0.0.0 --secure-port=6443 --insecure-bind-address=127.0.0.1 --insecure-port=8080"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NodeRestriction,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--runtime-config=api/all=true --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP  --authorization-mode=Node,RBAC --tls-cert-file=/etc/kubernetes/certs/server.crt --tls-private-key-file=/etc/kubernetes/certs/server.key --client-ca-file=/etc/kubernetes/certs/ca.crt"

# cat controller-manager 
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect=true  --service-account-private-key-file=/etc/kubernetes/certs/server.key --root-ca-file=/etc/kubernetes/certs/ca.crt --cluster-signing-cert-file=/etc/kubernetes/certs/ca.crt --cluster-signing-key-file=/etc/kubernetes/certs/ca.key"
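
A hedged check that usually narrows this down: the 'default' serviceaccount is created by the controller-manager's serviceaccount controller, so its logs and the presence of the signing key it is pointed at are the first things to look at (the unit name below is an assumption for this image):

systemctl status kube-controller-manager
journalctl -u kube-controller-manager --no-pager | grep -iE 'serviceaccount|ca.key'
ls -l /etc/kubernetes/certs/ca.key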


$ kubectl get secrets --all-namespaces
NAMESPACE     NAME                               TYPE                                  DATA      AGE
kube-system   coredns-token-ncbpl                kubernetes.io/service-account-token   3         3d
kube-system   heapster-token-fwf7l               kubernetes.io/service-account-token   3         10m
kube-system   kubernetes-dashboard-certs         Opaque                                0         3d
kube-system   kubernetes-dashboard-token-4gghg ...
(more)
2017-07-05 02:10:21 -0600 received badge  Famous Question (source)
2017-07-04 07:12:24 -0600 received badge  Popular Question (source)
2017-07-04 07:12:24 -0600 received badge  Notable Question (source)
2017-07-04 07:12:24 -0600 received badge  Famous Question (source)
2017-06-06 22:33:37 -0600 received badge  Notable Question (source)
2017-06-06 22:33:37 -0600 received badge  Popular Question (source)
2017-06-06 20:01:35 -0600 received badge  Famous Question (source)
2017-02-09 12:18:37 -0600 received badge  Notable Question (source)
2017-02-09 12:18:37 -0600 received badge  Popular Question (source)
2016-11-16 02:43:42 -0600 received badge  Notable Question (source)
2016-11-16 02:43:42 -0600 received badge  Famous Question (source)
2016-11-16 02:43:42 -0600 received badge  Popular Question (source)
2016-07-07 06:19:35 -0600 received badge  Famous Question (source)
2016-06-23 11:41:16 -0600 commented question race condition on subnet overlap check with multiple neutron

If I comment out the check, then nothing bad happens for me. But that is not the solution. There must be some missing lock; is no one else hitting this?

2016-06-21 19:06:26 -0600 asked a question race condition on subnet overlap check with multiple neutron

So I have a heat template that creates a router, a network, and a subnet. When I have a single server running the neutron api, it works every time. When I have 3 servers running the neutron api, load balanced, it fails nearly every time. The failure, shown below, is that the subnet overlaps with another on this network. But it's checking against itself; e.g. there is a single subnet (the one we are adding).

Resources.Data Int Subnet2: Invalid Input For Operation: Requested Subnet With Cidr: 127.0.3.0/24 For Network: 3c445ce9-78c6-4489-B2c6-9d1bf814d33d Overlaps With Another Subnet

As you can see below, the CIDR it is complaining about is 127.0.3.0/24. It's already been created. The error is coming from db/ipam_backend_mixin.py, specifically _validate_subnet_cidr().

This method is called with the network and the cidr; the cidr is already created on the network, presumably by one of the other workers. But it honestly is this subnet, not a second subnet.

Does anybody have a suggestion for what is going wrong? If I stop the other 2 neutron api backends, the problem is gone. If I do this manually from the CLI, allowing time between steps, it doesn't seem to show up either.

I have allow_overlapping_ips set to true (but that is not this case; it's saying 'subnetA overlaps with subnetA on networkB').

$ os subnet list
+--------------------------------------+----------------------+--------------------------------------+-----------------+
| ID                                   | Name                 | Network                              | Subnet          |
+--------------------------------------+----------------------+--------------------------------------+-----------------+
| bef016c1-4104-4328-9656-10b17595fca3 | ext-subnet           | 3d624eca-75ae-480b-9ea3-776c9dac7da6 | 10.129.192.0/20 |
| 35b102e1-c748-4710-bb80-aa9145fd33b7 | 172.16.5.0/24        | ea53ed06-ef41-4e9a-97e4-f526ef8cd376 | 172.16.5.0/24   |
| 64688ccb-1246-4533-8e65-c0e78bde7ca8 | ddb-data_sub-subnet2 | 60553a87-e474-499b-bce8-939ee1842ec4 | 127.0.2.0/24    |
| 7860d0f9-79c0-4923-b6d8-618b7da304a7 | ddb-servce-subnet    | 20253cbc-d450-400e-9508-be80c124aad7 | 127.0.0.0/24    |
| a2607676-78ad-4bed-90ef-5ef8b591d6ca | ddb-data-int-subnet1 | bb11e0c0-45ba-4fff-b760-f09f41ee2605 | 172.16.3.0/24   |
| 141fcd08-d8d0-464d-b904-2b2b96b7aa35 | ddb-ctrl-subnet      | a08764d3-85e4-4be3-992a-6c453c98c1fb | 172.16.1.0/24   |
| e2553bc1-5d79-4c00-a91e-dae28a1f0e6d | ddb-data_int-subnet2 | 3c445ce9-78c6-4489-b2c6-9d1bf814d33d | 127.0.3.0/24    |
| 21dba339-4b44-42df-b32e-8e361552ce65 | ddb-vctrl-subnet     | ece00a54-059d-4e67-8fdd-788f52e47eb6 | 172.16.1.0/24   |
| 34c3bd1a-d907-41fc-ac0a-3de4da325b6a | ddb-data_sub-subnet1 | 4edb52c3-aba1-453d-9331-2149fbcdf83e | 127.0.1.0/24    |
+--------------------------------------+----------------------+--------------------------------------+-----------------+
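
To rule out a second, short-lived row, a direct look at the neutron DB can help (hedged; plain SQL against the standard subnets table):

mysql neutron -e "select id, name, cidr from subnets where network_id = '3c445ce9-78c6-4489-b2c6-9d1bf814d33d';"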

d

2016-06-20 12:47:28 -0600 answered a question neutron-ovs-cleanup deletes flows on br-ex it doesn't own

Well, I don't know why, but I commented out 'of_interface=native' and I'm working again.
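
For reference, the change amounts to something like this (hedged; standard agent config path and Ubuntu unit name assumed):

sed -i 's/^of_interface[[:space:]]*=[[:space:]]*native/# &/' /etc/neutron/plugins/ml2/openvswitch_agent.ini
systemctl restart neutron-openvswitch-agent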

2016-06-19 13:47:25 -0600 asked a question neutron-ovs-cleanup deletes flows on br-ex it doesn't own

So when my neutron l3 agent node boots, it calls neutron-ovs-cleanup. This in turn deletes the flows on br-int and br-ex.

But the problem is, it's deleting the 'default' flow on br-ex. For me, I have an external port 'p3p1' in the bridge br-ex:

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex p3p1

Now I can ping the world, etc. The output of:

# ovs-ofctl -O Openflow13 dump-flows br-ex
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0xbdab986b5f5e98ef, duration=135.117s, table=0, n_packets=27, n_bytes=2070, priority=2,in_port=2 actions=drop
  cookie=0xbdab986b5f5e98ef, duration=135.481s, table=0, n_packets=14860, n_bytes=2385256, priority=0 actions=NORMAL

But when I reboot, neutron-ovs-cleanup comes along, and after it says "Cleaning bridge: br-ex", I lose internet connectivity. And the reason is that the default flow is removed from br-ex (e.g. ovs-ofctl -O Openflow13 dump-flows br-ex now shows nothing).

What am I doing wrong? I have

bridge_mappings = physnet1:br-ex
ovsdb_interface = true
of_interface = native
integration_bridge = br_int

If I now delete the bridge and add it back again, and re-add the external port, all is good again until the next reboot.
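
A stopgap that restores connectivity without rebuilding the bridge (hedged; it just puts the default NORMAL flow back):

ovs-ofctl -O Openflow13 add-flow br-ex "priority=0,actions=normal"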

2016-05-30 01:20:26 -0600 received badge  Notable Question (source)
2016-05-29 14:51:08 -0600 answered a question vxlan peers not being created (mitaka openvswitch agent native driver)

OK, so the problem was kind of stupid :) When neutron-server was started, it was missing --config-file=...ml2 --config-file=...openvswitch; i.e. it had /etc/neutron/neutron.conf, but not the ml2/openvswitch ini files. This in turn causes no errors anywhere, but the server doesn't know what to do, so it cowardly gives up :)
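
For reference, the intended invocation looks roughly like this (hedged; paths as laid out by the stock Ubuntu/Mitaka packaging):

neutron-server --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
    --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini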

2016-05-27 16:51:32 -0600 commented question vxlan peers not being created (mitaka openvswitch agent native driver)

Good point. I just tried spinning up an instance; now I have a tap#### in br-int, but still no vxlan tunnel in br-tun.

2016-05-27 08:06:31 -0600 received badge  Popular Question (source)
2016-05-27 07:18:03 -0600 commented answer vxlan peers not being created (mitaka openvswitch agent native driver)

OK, I tried that, but no difference.

2016-05-27 07:16:27 -0600 commented answer vxlan peers not being created (mitaka openvswitch agent native driver)

This example is from my working system, where the host has 2 IPs and it's picking the other one for the endpoint. But that system is working. There is no requirement for vxlan to be on a single subnet.

2016-05-26 22:53:21 -0600 asked a question vxlan peers not being created (mitaka openvswitch agent native driver)

My vxlan peers are not coming up (not being created). Below is the config of a compute node. There are no errors, it just doesn't create the vxlan tunnel interface. Can you suggest where to look?

(This is on Ubuntu 16.04 with Mitaka)

If I look in the database:

mysql> select * from ml2_vxlan_endpoints;
+---------------+----------+---------+
| ip_address    | udp_port | host    |
+---------------+----------+---------+
| 10.100.10.122 |     4789 | nubo-19 |
| 10.100.10.123 |     4789 | nubo-20 |
| 10.100.10.124 |     4789 | nubo-21 |
| 10.100.10.126 |     4789 | nubo-23 |
| 10.100.10.64  |     4789 | nubo-5  |
| 10.100.10.65  |     4789 | nubo-6  |
| 10.100.10.67  |     4789 | nubo-8  |
| 10.100.10.68  |     4789 | nubo-9  |
| 10.100.10.75  |     4789 | nubo-16 |
| 167.88.32.134 |     4789 | nubo-4  |
+---------------+----------+---------+
10 rows in set (0.00 sec)

The node with the config below is nubo-23 (they are all the same).

I don't see any errors in the logs. Not a single one.

I'm expecting to see an entry in br-tun like:

Bridge br-tun
    Port "vxlan-0a640a97"
        Interface "vxlan-0a640a97"
            type: vxlan
            options: {df_default="true", in_key=flow, local_ip="xx.xx.xx.xx", out_key=flow, remote_ip="10.100.10.151"}

But I have no entries.

This is on Ubuntu 16.04.
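
A hedged check that is often informative here: ask neutron-server what the agent actually reported, since the vxlan ports are only set up once the server knows each agent's tunnel configuration:

neutron agent-list
neutron agent-show <ovs-agent-uuid>   # <ovs-agent-uuid> is a placeholder; look at the configurations field for tunneling_ip and tunnel_types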

mysql> select * from ml2_vxlan_allocations where allocated = 1;
+-----------+-----------+
| vxlan_vni | allocated |
+-----------+-----------+
|        63 |         1 |
+-----------+-----------+
1 row in set (0.00 sec)

ml2_conf.ini::

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = *

[ml2_type_vxlan]
vni_ranges = 1:32000
#vxlan_group = 239.1.1.2

[securitygroup]
enable_security_group = false

openvswitch_agent.ini::

[agent]
arp_responder = True
l2_population = True
minimize_polling = True
ovsdb_monitor_respawn_interval = 30
polling_interval = 5
prevent_arp_spoofing = False
tunnel_types = vxlan

[ovs]
bridge_mappings = physnet1:br-ex
enable_tunneling=True
local_ip = 10.100.10.126
of_interface = native
ovsdb_connection = tcp:127.0.0.1:6640
ovsdb_interface = native
tenant_network_type = vxlan
tunnel_id_ranges = 1:32000
tunnel_type = vxlan

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
#firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = False

ovs-vsctl output::

1b3be5ce-7ae1-41b1-9bc0-7c4e2e088414
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "enp3s0f1"
            Interface "enp3s0f1"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.5.0"

ifconfig::

br-ex     Link encap:Ethernet  HWaddr d0:43:1e:d6:03:35 
          inet addr:10.100.10.126  Bcast:10.100.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:295129 errors:0 dropped:4 overruns:0 frame:0
          TX packets:45436 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:27115572 (27.1 MB)  TX bytes:9312921 (9.3 MB)

br-int    Link encap:Ethernet  HWaddr b2:e7:33:0a:8c:4f 
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric ...
(more)
2016-04-20 07:08:13 -0600 received badge  Famous Question (source)
2016-04-20 03:54:15 -0600 received badge  Nice Answer (source)
2016-04-19 23:06:52 -0600 received badge  Self-Learner (source)
2016-04-19 22:59:25 -0600 answered a question trouble migrating keystone database from juno to mitaka

OK, in case someone else hits this: despite my connection line having charset=utf8, I needed to do an ALTER DATABASE before the upgrade:

alter database keystone DEFAULT CHARACTER SET utf8;
alter database keystone DEFAULT COLLATE utf8_general_ci;

And then the upgrade worked, provided I did juno->liberty and then liberty->mitaka; it's not possible to go juno->mitaka in one step. But the ALTER DATABASE commands allowed the varchar(id) and varchar(user_id) columns to match.
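
To confirm the charsets actually line up before retrying db_sync (hedged):

mysql keystone -e "show full columns from user like 'id';"
mysql -e "select default_character_set_name, default_collation_name from information_schema.schemata where schema_name = 'keystone';"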

2016-04-19 19:26:55 -0600 received badge  Notable Question (source)
2016-04-18 16:53:59 -0600 received badge  Nice Question (source)
2016-04-18 13:46:13 -0600 received badge  Popular Question (source)
2016-04-17 21:49:35 -0600 asked a question trouble migrating keystone database from juno to mitaka

I have a juno installation of keystone (schema version 55) and am trying to migrate to mitaka. If I run keystone-manage db_sync on mitaka, it fails with:

KeyError: <VerNum(56)>

OK, so let's try a migration to liberty first. That succeeds without error. Now let's try mitaka again.

2016-04-18 02:39:43.258 2860 ERROR keystone DBError: (_mysql_exceptions.IntegrityError) (1215, 'Cannot add foreign key constraint') [SQL: u'\nCREATE TABLE local_user (\n\tid INTEGER NOT NULL AUTO_INCREMENT, \n\tuser_id VARCHAR(64) NOT NULL, \n\tdomain_id VARCHAR(64) NOT NULL, \n\tname VARCHAR(255) NOT NULL, \n\tPRIMARY KEY (id), \n\tUNIQUE (domain_id, name), \n\tUNIQUE (user_id), \n\tFOREIGN KEY(user_id) REFERENCES user (id) ON DELETE CASCADE\n)\n\n']

OK, why is that?

If I run 'show engine innodb status' I get:

2016-04-17 23:09:32 7f2cd4aed700 Error in foreign key constraint of table keystone/local_user:
FOREIGN KEY(user_id) REFERENCES user (id) ON DELETE CASCADE ):
Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types
in the table and the referenced table do not match for constraint.
Note that the internal storage type of ENUM and SET changed in
tables created with >= InnoDB-4.1.12, and such columns in old tables
cannot be referenced by such columns in new tables.
See http://dev.mysql.com/doc/refman/5.6/en/innodb-foreign-key-constraints.html
for correct foreign key definition.

So it is complaining about the foreign key in the local_user table it's trying to create.

If I look at the user table:

mysql> describe user;
+--------------------+--------------+------+-----+---------+-------+
| Field              | Type         | Null | Key | Default | Extra |
+--------------------+--------------+------+-----+---------+-------+
| id                 | varchar(64)  | NO   | PRI | NULL    |       |
| name               | varchar(255) | NO   |     | NULL    |       |
| extra              | text         | YES  |     | NULL    |       |
| password           | varchar(128) | YES  |     | NULL    |       |
| enabled            | tinyint(1)   | YES  |     | NULL    |       |
| domain_id          | varchar(64)  | NO   | MUL | NULL    |       |
| default_project_id | varchar(64)  | YES  |     | NULL    |       |
+--------------------+--------------+------+-----+---------+-------+
7 rows in set (0.00 sec)
mysql> show index from user;
+-------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| user  |          0 | PRIMARY   |            1 | id          | A         |        1055 |     NULL | NULL   |      | BTREE      |         |               |
| user  |          0 | domain_id |            1 | domain_id   | A         |           4 |     NULL | NULL   |      | BTREE      |         |               |
| user  |          0 | domain_id |            2 | name        | A         |        1055 |     NULL | NULL   |      | BTREE      |         |               |
+-------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
3 rows in set (0.00 sec)

So, uh, any suggestions? It's the 89->90 migration step that fails.
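
A check that points at the usual culprit here, a charset/collation mismatch between user.id and the new local_user.user_id column (hedged):

mysql -e "select table_name, table_collation from information_schema.tables where table_schema = 'keystone';"
mysql keystone -e "show full columns from user like 'id';"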