Unable to initialize cluster with kubeadm init

Hello all, I tried to initialize the master using kubeadm and didn't have any success. I pretty much followed the directions given in the course, with no luck. My Kubernetes version is 1.17, and my Docker version is given below:

root@masternode:~# docker version
Client: Docker Engine - Community
 Version:           19.03.4
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        9013bf583a
 Built:             Fri Oct 18 15:54:14 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.4
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       9013bf583a
  Built:            Fri Oct 18 15:52:44 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
root@masternode:~#

Here is the error I am seeing.
root@masternode:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.2
W0114 16:19:56.864131 4561 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0114 16:19:56.864517 4561 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [masternode kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [masternode localhost] and IPs [192.168.56.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [masternode localhost] and IPs [192.168.56.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0114 16:20:12.736166 4561 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0114 16:20:12.746181 4561 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
root@masternode:~#

Any help will be appreciated.

Thanks,

-Gary

Hello Gary,

Have you deployed the kubelet? Can you run the following and share the output?

ps -ef | grep -i kubelet
docker ps -a | grep kube | grep -v pause
systemctl status kubelet

Also share the contents of the kubelet service file.
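
One more thing worth checking while you gather those: a very common cause of the wait-control-plane timeout is a cgroup driver mismatch between Docker and the kubelet. Something along these lines should confirm it (assuming a default Docker install; if /etc/docker/daemon.json already exists, merge the setting into it rather than overwriting):

# see which cgroup driver Docker is using
docker info 2>/dev/null | grep -i "cgroup driver"

# if it reports cgroupfs while the kubelet runs with --cgroup-driver=systemd,
# switch Docker to the systemd driver and restart both daemons
cat <<'EOF' > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
systemctl restart kubelet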

Thanks

Hi Rahul,

Thanks for responding. My kubelet is running; here is the output from the commands:
osboxes@masternode:~$ ps -ef | grep -i kubelet
root 971 1 3 13:51 ? 00:00:04 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf
root 3056 2986 10 13:51 ? 00:00:11 kube-apiserver --advertise-address=192.168.56.2 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
osboxes 5938 5639 0 13:53 pts/0 00:00:00 grep --color=auto -i kubelet

root@masternode:~# docker ps -a | grep kube | grep -v pause
ede458f31192 ff281650a721 "/opt/bin/flanneld -…" 2 minutes ago Exited (1) 2 minutes ago k8s_kube-flannel_kube-flannel-ds-amd64-4xb26_kube-system_f6e26c3d-d0b8-4a20-a8fd-6ccc4f46fe03_317
e96df2aa2b00 ff281650a721 "cp -f /etc/kube-fla…" 7 minutes ago Exited (0) 7 minutes ago k8s_install-cni_kube-flannel-ds-amd64-4xb26_kube-system_f6e26c3d-d0b8-4a20-a8fd-6ccc4f46fe03_6
1e267fab9a31 5dd8f24429b4 "kube-controller-man…" 7 minutes ago Up 7 minutes k8s_kube-controller-manager_kube-controller-manager-masternode_kube-system_3e1162bb3ea6460dd66807026d73aad5_66
6290b72aa7eb 628f0e52ae53 "kube-apiserver --ad…" 7 minutes ago Up 7 minutes k8s_kube-apiserver_kube-apiserver-masternode_kube-system_47bf04c3157d3878d8f27dfb39ef1363_8
73dd4d650560 303ce5db0e90 "etcd --advertise-cl…" 7 minutes ago Up 7 minutes k8s_etcd_etcd-masternode_kube-system_70e0e07d9de435e6bdc286b42c4533ff_6
c1fc7587e214 8d2e2e5a92ac "kube-scheduler --au…" 7 minutes ago Up 7 minutes k8s_kube-scheduler_kube-scheduler-masternode_kube-system_11d278345de05e1c5c61a63a8a1d78b2_57
94c87e814c79 628f0e52ae53 "kube-apiserver --ad…" 3 days ago Exited (255) 7 minutes ago k8s_kube-apiserver_kube-apiserver-masternode_kube-system_47bf04c3157d3878d8f27dfb39ef1363_7
fd0113dc26b5 5dd8f24429b4 "kube-controller-man…" 3 days ago Exited (255) 7 minutes ago k8s_kube-controller-manager_kube-controller-manager-masternode_kube-system_3e1162bb3ea6460dd66807026d73aad5_65
636d80af4ea5 8d2e2e5a92ac "kube-scheduler --au…" 3 days ago Exited (255) 7 minutes ago k8s_kube-scheduler_kube-scheduler-masternode_kube-system_11d278345de05e1c5c61a63a8a1d78b2_56
f533ce46dc57 303ce5db0e90 "etcd --advertise-cl…" 3 days ago Exited (255) 7 minutes ago k8s_etcd_etcd-masternode_kube-system_70e0e07d9de435e6bdc286b42c4533ff_5

root@masternode:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2020-01-21 13:51:33 EST; 8min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 971 (kubelet)
    Tasks: 18 (limit: 4672)
   Memory: 114.0M
   CGroup: /system.slice/kubelet.service
           └─971 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/ku

Jan 21 13:58:56 masternode kubelet[971]: E0121 13:58:56.302787 971 summary_sys_containers.go:47] Failed to get system conta
Jan 21 13:59:00 masternode kubelet[971]: E0121 13:59:00.865935 971 pod_workers.go:191] Error syncing pod f6e26c3d-d0b8-4a20
Jan 21 13:59:06 masternode kubelet[971]: E0121 13:59:06.346557 971 summary_sys_containers.go:47] Failed to get system conta
Jan 21 13:59:13 masternode kubelet[971]: E0121 13:59:13.866910 971 pod_workers.go:191] Error syncing pod f6e26c3d-d0b8-4a20
Jan 21 13:59:16 masternode kubelet[971]: E0121 13:59:16.408874 971 summary_sys_containers.go:47] Failed to get system conta
Jan 21 13:59:25 masternode kubelet[971]: E0121 13:59:25.865862 971 pod_workers.go:191] Error syncing pod f6e26c3d-d0b8-4a20
Jan 21 13:59:26 masternode kubelet[971]: E0121 13:59:26.466415 971 summary_sys_containers.go:47] Failed to get system conta
Jan 21 13:59:36 masternode kubelet[971]: E0121 13:59:36.550348 971 summary_sys_containers.go:47] Failed to get system conta
Jan 21 13:59:38 masternode kubelet[971]: E0121 13:59:38.864223 971 pod_workers.go:191] Error syncing pod f6e26c3d-d0b8-4a20
Jan 21 13:59:46 masternode kubelet[971]: E0121 13:59:46.619008 971 summary_sys_containers.go:47] Failed to get system conta
lines 1-22/22 (END)

Please advise on the next steps to debug.

When I try to run the following command to list the running pods, this is the output:

root@masternode:~# kubectl get pods --all-namespaces
The connection to the server masternode:8080 was refused - did you specify the right host or port?

I am not sure why this is happening. Any help?
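
The port 8080 error usually just means kubectl has no kubeconfig to read, so it falls back to the unauthenticated local port. kubeadm already wrote /etc/kubernetes/admin.conf (see the [kubeconfig] lines in your init output), so point kubectl at it; something like this, using the standard kubeadm paths:

# quick fix for the current root shell
export KUBECONFIG=/etc/kubernetes/admin.conf

# or the persistent setup for a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get pods --all-namespaces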

Also, can you check the kubelet logs using journalctl and paste them here?

journalctl -u kubelet
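
Your docker ps output also shows the kube-flannel container in a crash loop (exit status 1, restart count 317 in the container name). Its logs will likely show the actual failure; using the container ID from your listing, or a name filter in case the ID has rotated:

# inspect the exited flannel container directly
docker logs --tail 50 ede458f31192

# or look it up by name instead of ID
docker logs $(docker ps -a --filter name=k8s_kube-flannel -q | head -1)

If those logs point to a flannel misconfiguration, fixing it and then re-running kubeadm reset followed by a fresh kubeadm init is usually the cleanest way to retry.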