Magnum not creating serviceaccount 'default'?

asked 2018-05-14 20:49:52 -0500

don

updated 2018-05-14 21:27:07 -0500

I am running Queens. I ran these three commands (using the latest fedora-atomic image):

openstack image create --min-disk 6 --disk-format raw --container-format bare --public --property os_type=linux --property os_distro='fedora-atomic' --file fedora-atomic-latest.raw fedora-atomic

openstack coe cluster template create k8s-atomic --image fedora-atomic --keypair default --external-network public --dns-nameserver 172.16.0.1 --flavor m1.small --docker-storage-driver overlay2 --volume-driver cinder --network-driver flannel --coe kubernetes

openstack coe cluster create k8s --cluster-template k8s-atomic --node-count 3 --master-count 1
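
For anyone following along, the cluster's progress can be watched with standard magnumclient commands before moving on to kubectl (cluster name `k8s` taken from the create command above; the `--dir` path is just an example):

```shell
# Poll cluster status; it should move from CREATE_IN_PROGRESS to CREATE_COMPLETE.
openstack coe cluster show k8s -f value -c status

# Once complete, fetch the kubeconfig and certs for kubectl access.
openstack coe cluster config k8s --dir ~/.kube
```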

After some time, all nodes come up, they can ping each other, the kubernetes services are running on them (according to systemctl status). OK, great. But when I try to use it, I get an error:

kubectl run -it --rm --restart=Never busybox --image=busybox sh
Error from server (Forbidden): pods "busybox" is forbidden: error looking up service account default/default: serviceaccount "default" not found
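
The missing account behind that error can be confirmed directly (on a healthy cluster the `default` namespace gets a `default` ServiceAccount created automatically by the controller-manager):

```shell
# Look for the 'default' ServiceAccount in the 'default' namespace.
# On a working cluster this lists the account; here it returns NotFound.
kubectl get serviceaccount default -n default
```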

I'm running python-magnumclient 2.9.0, which should work with RBAC.

Does anyone have any suggestions?

Things seem sane:

kubectl get nodes
NAME                        STATUS    ROLES     AGE       VERSION
k8s-brl5bjk2yvoz-minion-0   Ready     <none>    3d        v1.9.3
k8s-brl5bjk2yvoz-minion-1   Ready     <none>    3d        v1.9.3
k8s-brl5bjk2yvoz-minion-2   Ready     <none>    3d        v1.9.3

$ kubectl cluster-info
Kubernetes master is running at https://10.255.0.17:6443
Heapster is running at https://10.255.0.17:6443/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at https://10.255.0.17:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

$ kubectl get serviceaccounts --all-namespaces
NAMESPACE     NAME                   SECRETS   AGE
kube-system   coredns                1         3d
kube-system   heapster               1         3d
kube-system   kubernetes-dashboard   1         3d

It's just that there is no serviceaccount 'default'. I checked the kube-apiserver and the controller-manager; both have ServiceAccount in the admission-control list, and a service-account key is configured:

# cat /etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--bind-address=0.0.0.0 --secure-port=6443 --insecure-bind-address=127.0.0.1 --insecure-port=8080"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NodeRestriction,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--runtime-config=api/all=true --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP  --authorization-mode=Node,RBAC --tls-cert-file=/etc/kubernetes/certs/server.crt --tls-private-key-file=/etc/kubernetes/certs/server.key --client-ca-file=/etc/kubernetes/certs/ca.crt"

# cat controller-manager 
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect=true  --service-account-private-key-file=/etc/kubernetes/certs/server.key --root-ca-file=/etc/kubernetes/certs/ca.crt --cluster-signing-cert-file=/etc/kubernetes/certs/ca.crt --cluster-signing-key-file=/etc/kubernetes/certs/ca.key"
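
Given the config above, a quick sanity check on the master is to confirm the cert files the controller-manager references actually exist and that the service is up (paths taken from the args above; the unit name assumes the usual fedora-atomic packaging):

```shell
# The controller-manager needs both the signing cert and key to start.
ls -l /etc/kubernetes/certs/ca.crt /etc/kubernetes/certs/ca.key

# Verify the controller-manager is actually running, and check its recent logs.
systemctl status kube-controller-manager --no-pager
journalctl -u kube-controller-manager --no-pager | tail -n 20
```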


$ kubectl get secrets --all-namespaces
NAMESPACE     NAME                               TYPE                                  DATA      AGE
kube-system   coredns-token-ncbpl                kubernetes.io/service-account-token   3         3d
kube-system   heapster-token-fwf7l               kubernetes.io/service-account-token   3         10m
kube-system   kubernetes-dashboard-certs         Opaque                                0         3d
kube-system   kubernetes-dashboard-token-4gghg ...

1 answer


answered 2018-05-15 17:14:40 -0500

don

OK, found the answer. You must set the cert_manager_api=true label on the cluster template:

openstack coe cluster template create k8s-atomic --image fedora-atomic --keypair default --external-network public --dns-nameserver 172.16.0.1 --flavor m1.small --docker-storage-driver overlay2 --volume-driver cinder --network-driver flannel --coe kubernetes --labels cert_manager_api=true
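
After creating the template, the label can be verified before building a cluster on it (template name from the command above; `-f value -c labels` are standard openstackclient output options):

```shell
# Show just the labels field of the template; it should include cert_manager_api=true.
openstack coe cluster template show k8s-atomic -f value -c labels
```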

Without it, the controller-manager has no ca.key file and doesn't start, but you get no obvious errors.

I don't know why it's not on by default if it's needed, but there it is.

