Magnum + Ussuri: container not booting up

asked 2020-07-04 10:34:55 -0500 by Ammad (updated 2020-07-04 23:02:08 -0500)

Hi Guys,

I have installed OpenStack (Keystone + Glance + Nova (KVM) + Neutron (flat + VXLAN) + Cinder (LVM backend) + Barbican + Heat). The whole stack is working fine. I installed it following the CentOS 8 Ussuri guide at https://www.server-world.info/en/.

Now I have installed Magnum 10, but the deployment fails while configuring the Kubernetes (COE) master node. I installed Magnum following the guide at https://docs.openstack.org/magnum/lat.... I am using the fedora-atomic-27 image, and the stack deployment fails at:

[root@controller ~(keystone)]# openstack stack resource list 0000719b-12a1-4462-84c6-9d0ec6f0963c -n 5 | grep -i failed
| kube_masters                  | 34d583d1-a7a3-4534-bc6b-8a3a1dd18ee8                                                | OS::Heat::ResourceGroup                                                                               | CREATE_FAILED   | 2020-07-04T10:20:39Z | kubernetes-cluster-c6lgugypw5cc                                          |
| 0                             | 3a99d1ef-52e9-4e94-b67b-e2fd6b5d662e                                                | file:///usr/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml | CREATE_FAILED   | 2020-07-04T10:20:51Z | kubernetes-cluster-c6lgugypw5cc-kube_masters-ptni65vpgv5m                |
| master_config_deployment      | 10ff5482-01dc-443a-b476-736b35424f39                                                | OS::Heat::SoftwareDeployment                                                                          | CREATE_FAILED   | 2020-07-04T10:20:52Z | kubernetes-cluster-c6lgugypw5cc-kube_masters-ptni65vpgv5m-0-6pbhyxk3kegh |
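Before SSHing into the node, the failed SoftwareDeployment's stdout/stderr can usually be pulled from Heat on the controller. A sketch, assuming a reasonably recent python-heatclient; the stack and resource names are the ones from the listing above:

```shell
# Dump deploy_stdout / deploy_stderr of every failed deployment in the
# cluster stack (walks nested stacks automatically):
openstack stack failures list --long kubernetes-cluster-c6lgugypw5cc

# Or inspect the single failed resource directly, using the nested
# stack name from the last column of the listing:
openstack stack resource show \
    kubernetes-cluster-c6lgugypw5cc-kube_masters-ptni65vpgv5m-0-6pbhyxk3kegh \
    master_config_deployment
```

These commands need a live cloud and stack-owner credentials, so they are illustrative here.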

I SSHed into the deployed master and found the following in the Heat logs under /var/log:

+ for service in etcd ${container_runtime_service} kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
+ echo 'restart service kube-proxy'
+ ssh -F /srv/magnum/.ssh/config root@localhost systemctl restart kube-proxy
restart service kube-proxy
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = '[+]ping ok
Trying to label master node with node-role.kubernetes.io/master=""
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
healthz check failed' ']'
+ echo 'Trying to label master node with node-role.kubernetes.io/master=""'
+ sleep 5s
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = ok ']'
+ kubectl patch node kubernetes-cluster-c6lgugypw5cc-master-0 --patch '{"metadata": {"labels": {"node-role.kubernetes.io/master": ""}}}'
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
+ echo 'Trying to label master node with node-role.kubernetes.io/master=""'
+ sleep 5s
Trying to label master node with node-role.kubernetes.io/master=""
++ curl --silent http://127.0.0.1:8080/healthz
+ '[' ok = ok ']'
+ kubectl patch node kubernetes-cluster-c6lgugypw5cc-master-0 --patch '{"metadata": {"labels": {"node-role.kubernetes.io/master": ""}}}'
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
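The trace shows the healthz check eventually passes (`'[' ok = ok ']'`), but `kubectl` then fails because it has neither a kubeconfig nor a server address; its own error message suggests `KUBERNETES_MASTER` as the fallback. A hedged sketch of retrying the label step manually on the master, assuming the apiserver really is listening on the insecure port 8080 (as the `curl` healthz check in the trace suggests; the node name is copied from the log):

```shell
# kubectl falls back to $KUBERNETES_MASTER when no kubeconfig is given;
# point it at the local insecure apiserver port seen in the healthz check:
export KUBERNETES_MASTER=http://127.0.0.1:8080

# Retry the label step that the Heat fragment was attempting:
kubectl patch node kubernetes-cluster-c6lgugypw5cc-master-0 \
    --patch '{"metadata": {"labels": {"node-role.kubernetes.io/master": ""}}}'
```

If this succeeds by hand, the question becomes why the fragment that calls `kubectl` never exported the variable itself.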

Ammad


Comments

My memory of Magnum troubleshooting is very sketchy, but I would try to find out how the KUBERNETES_MASTER variable is supposed to get set, and from there find out why this didn't happen. I remember faintly that environment variables are stored somewhere in the Magnum controller.

Bernd Bausch (2020-07-04 23:05:27 -0500)

2 answers


answered 2020-07-11 10:05:50 -0500 by Reski

Hi all, any solutions to this problem? I'm hitting the same issue with Magnum + OpenStack Ussuri. It always gets stuck here:

error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable


answered 2020-07-05 02:58:42 -0500 by Ammad

Thanks Bernd,

I tried to find the variable; it's defined in

/usr/lib/python3.6/site-packages/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-minion.sh:309:KUBERNETES_MASTER=$KUBE_MASTER_URI

But I think this script is for configuring the minions (worker nodes), not the master.
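To check whether any master-side fragment sets the variable too, one could grep the whole driver tree rather than a single file. A sketch; the path is taken from this install and will differ elsewhere:

```shell
# Search every Magnum driver template/fragment for KUBERNETES_MASTER
# to see which scripts export it before calling kubectl:
grep -rn KUBERNETES_MASTER \
    /usr/lib/python3.6/site-packages/magnum/drivers/
```

If only the minion fragment shows up, the master-side fragment calling `kubectl patch node` is running without it, which would match the error in the log.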
