Deployment of Kubernetes & Helm failed when trying to install OpenStack-Helm

asked 2020-02-18 02:56:45 -0500


updated 2020-02-19 01:41:53 -0500

I have an Ubuntu 18.04.3 LTS server with 8 CPUs, 32 GB RAM and a 30 GB disk (a virtual instance in the AWS console) and I am following this guide: https://docs.openstack.org/openstack-helm/latest/install/developer/kubernetes-and-common-setup.html#deploy-kubernetes-helm

I could clone the repository successfully, but when I ran the script ./tools/deployment/developer/common/010-deploy-k8s.sh I got this error:
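For reference, these are roughly the commands I ran from the guide (the clone URLs are from memory and may not match the guide exactly):

git clone https://opendev.org/openstack/openstack-helm-infra.git
git clone https://opendev.org/openstack/openstack-helm.git
cd openstack-helm
./tools/deployment/developer/common/010-deploy-k8s.sh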

Makefile:59: recipe for target 'dev-deploy' failed
make: *** [dev-deploy] Error 2

TASK [deploy-kubeadm-aio-common : add labels to namespaces] *******************
failed: [local] (item=default) => {"changed": false, "cmd": "kubectl label --overwrite namespace default name=default", "item": "default", "msg": "[Errno 2] No such file or directory", "rc": 2}
failed: [local] (item=kube-system) => {"changed": false, "cmd": "kubectl label --overwrite namespace kube-system name=kube-system", "item": "kube-system", "msg": "[Errno 2] No such file or directory", "rc": 2}
failed: [local] (item=kube-public) => {"changed": false, "cmd": "kubectl label --overwrite namespace kube-public name=kube-public", "item": "kube-public", "msg": "[Errno 2] No such file or directory", "rc": 2}

Some more details of the error logs:

fatal: [/mnt/rootfs -> 127.0.0.1]: FAILED! => {"attempts": 120, "changed": true, "cmd": "kubectl get node \"ip-172-31-47-114.ec2.internal\" --no-headers | gawk '{ print $2 }' | grep -q '\(^Ready\)\|\(^NotReady\)'", "delta": "0:00:00.673220", "end": "2020-02-18 08:36:50.302105", "msg": "non-zero return code", "rc": 1, "start": "2020-02-18 08:36:49.628885", "stderr": "Error from server (NotFound): nodes \"ip-172-31-47-114.ec2.internal\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"ip-172-31-47-114.ec2.internal\" not found"], "stdout": "", "stdout_lines": []}
	to retry, use: --limit @/opt/playbooks/kubeadm-aio-deploy-master.retry

PLAY RECAP ***********************
/mnt/rootfs : ok=26 changed=23 unreachable=0 failed=1

TASK [deploy-kubeadm-aio-common : exiting if deploy-kube action failed] ***************
fatal: [local]: FAILED! => {"changed": false, "cmd": "exit 1", "msg": "[Errno 2] No such file or directory", "rc": 2}

TASK [deploy-kubeadm-aio-common : removing container for deploy-kube action]
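One thing I notice is the repeated "[Errno 2] No such file or directory" whenever the playbook shells out to kubectl; as far as I understand, that usually means the kubectl binary itself cannot be found, rather than the command running and failing. If that is relevant, something like the following should confirm whether kubectl is installed and on the PATH of the environment running the playbook (just a sanity check on my side, not a step from the guide):

which kubectl
kubectl version --client
echo $PATH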

May I know why it is failing here?
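Another thing I am unsure about is the node name: the playbook keeps waiting for a node called ip-172-31-47-114.ec2.internal, but the API server reports it as not found. I assume comparing the host's FQDN with whatever node (if any) actually registered would narrow this down, for example:

hostname -f
kubectl get nodes -o wide
kubectl describe node ip-172-31-47-114.ec2.internal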

While checking the logs (/var/log/syslog), I see the errors below. Do they indicate that I am missing something?

Feb 19 07:35:56 ip-172-31-38-21 kubelet[24634]: E0219 07:35:56.116602 24634 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "ip-172-31-38-21.ec2.internal" is forbidden: User "system:node:ip-172-31-38-21" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
Feb 19 07:35:56 ip-172-31-38-21 kubelet[24634]: E0219 07:35:56.151221 24634 kubelet.go:2267] node "ip-172-31-38-21.ec2.internal" not found

Feb 19 07:35:56 ip-172-31-38-21 kubelet[24634]: E0219 07:35:56.698107 24634 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Feb 19 07:35:56 ip-172-31-38-21 kubelet[24634]: E0219 07:35:56.752680 24634 kubelet.go:2267] node "ip-172-31-38-21.ec2.internal" not found

Feb 19 07:35:58 ip-172-31-38-21 kubelet[24634]: I0219 07:35:58.629848 24634 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Feb 19 07:35 ...
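Given the "cni config uninitialized" and "node not found" messages from kubelet, I suppose checking whether kubelet is running and whether any CNI configuration was ever written would help as well (the CNI path below is the usual default, I have not changed it on this box):

systemctl status kubelet
ls /etc/cni/net.d/
kubectl get pods -n kube-system -o wide
journalctl -u kubelet --no-pager | tail -n 50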
