Mmumshad's "Kubernetes the Hard Way": No resources found

Please help. I have been stuck on this for the last few days and would be grateful if someone could help.

I am setting up a Kubernetes cluster using Mmumshad's Kubernetes The Hard Way.
Everything went fine until the section "Bootstrapping the Kubernetes Worker Nodes", where the output shows "No resources found." on both master nodes.

[root@master-1 ~]# kubectl get nodes --kubeconfig admin.kubeconfig
No resources found.

I have tried twice, and I got the same error both times.
I am adding the output of the verification commands and the service statuses below.

[root@master-1 ~]# ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key

24bc842313481048, started, master-2, https://192.168.60.212:2380, https://192.168.60.212:2379
286e64734b70132a, started, master-1, https://192.168.60.211:2380, https://192.168.60.211:2379

[root@master-1 ~]# kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

[root@loadbalancer ~]# curl https://192.168.60.230:6443/version -k
{
  "major": "1",
  "minor": "13",
  "gitVersion": "v1.13.0",
  "gitCommit": "ddf47ac13c1a9483ea035a79cd7c10005ff21a6d",
  "gitTreeState": "clean",
  "buildDate": "2018-12-03T20:56:12Z",
  "goVersion": "go1.11.2",
  "compiler": "gc",
  "platform": "linux/amd64"
}
[root@master-1 ~]# bash cert_verify.sh
This script will validate the certificates in master as well as worker-1 nodes. Before proceeding, make sure you ssh into the respective node [ Master or Worker-1 ] for certificate validation

  1. Verify certification in Master Node

  2. Verify certification in Worker-1 Node

Please select either the option 1 or 2

1
The selected option is 1, proceeding the certificate verification of Master node
CA cert and key found, verifying the authenticity
CA cert and key are correct
admin cert and key found, verifying the authenticity
admin cert and key are correct
kube-controller-manager cert and key found, verifying the authenticity
kube-controller-manager cert and key are correct
kube-proxy cert and key found, verifying the authenticity
kube-proxy cert and key are correct
kube-scheduler cert and key found, verifying the authenticity
kube-scheduler cert and key are correct
admin kubeconfig file found, verifying the authenticity
admin kubeconfig cert and key are correct
kube-proxy kubeconfig file found, verifying the authenticity
Exiting…Found mismtach in the kube-proxy kubeconfig certificate and keys, More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-kube-proxy-kubernetes-configuration-file
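
The kube-proxy kubeconfig mismatch above is the first thing I plan to fix. If I follow the linked doc correctly, the file is regenerated roughly like this (a sketch; file names and the load balancer address follow the tutorial layout and my setup, adjust if yours differ):

LOADBALANCER_ADDRESS=192.168.60.230   # assumption: the haproxy frontend from this setup

# Embed the cluster CA and point the kubeconfig at the load balancer
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.crt --embed-certs=true \
  --server=https://${LOADBALANCER_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig

# Embed the kube-proxy client cert and key (must be signed by the same CA)
kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.crt --client-key=kube-proxy.key \
  --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig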

[root@master-2 ~]# bash cert_verify.sh
This script will validate the certificates in master as well as worker-1 nodes. Before proceeding, make sure you ssh into the respective node [ Master or Worker-1 ] for certificate validation

  1. Verify certification in Master Node

  2. Verify certification in Worker-1 Node

Please select either the option 1 or 2

1
The selected option is 1, proceeding the certificate verification of Master node
CA cert and key found, verifying the authenticity
CA cert and key are correct
kube-apiserver cert and key found, verifying the authenticity
kube-apiserver cert and key are correct
service account cert and key found, verifying the authenticity
Service Account cert and key are correct
ETCD cert and key found, verifying the authenticity
etcd-server.crt / etcd-server.key are correct
kube-controller-manager kubeconfig file found, verifying the authenticity
kube-controller-manager kubeconfig cert and key are correct
kube-scheduler kubeconfig file found, verifying the authenticity
kube-scheduler kubeconfig cert and key are correct
Systemd for ETCD service found, verifying the authenticity
Device "enp0s8" does not exist.
ETCD certificate, ca and key files are correct under systemd service
Exiting…Found mismtach in the ETCD initial-advertise-peer-urls / listen-peer-urls / listen-client-urls / advertise-client-urls. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md#configure-the-etcd-server
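
The "Device "enp0s8" does not exist." line suggests cert_verify.sh could not detect this node's IP from the interface it expects, so the URL check may be comparing against an empty value. Either way, checking the unit file by hand is quick (a sketch; the unit path follows the tutorial):

# On master-2, all four URL flags should carry this node's IP (192.168.60.212)
grep -E "advertise-peer-urls|listen-peer-urls|listen-client-urls|advertise-client-urls" \
  /etc/systemd/system/etcd.service
ip -4 addr show | grep "192.168.60"   # confirm which interface actually holds the IP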

[root@worker-1 ~]# bash cert_verify.sh
This script will validate the certificates in master as well as worker-1 nodes. Before proceeding, make sure you ssh into the respective node [ Master or Worker-1 ] for certificate validation

  1. Verify certification in Master Node

  2. Verify certification in Worker-1 Node

Please select either the option 1 or 2

2
The selected option is 2, proceeding the certificate verification of Worker-1 node
worker-1 cert and key found, verifying the authenticity
worker-1 cert and key are correct
worker-1 kubeconfig file found, verifying the authenticity
Exiting…Found mismtach in the worker-1 kubeconfig certificate and keys, More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#the-kubelet-kubernetes-configuration-file
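
To confirm the mismatch on worker-1, the client certificate embedded in the kubelet kubeconfig can be checked against the CA directly (a sketch; it assumes a single-user kubeconfig and that ca.crt was copied to /var/lib/kubernetes/ as in the tutorial):

# Extract the embedded client certificate and verify it against the cluster CA
kubectl config view --kubeconfig=/var/lib/kubelet/kubeconfig --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > /tmp/kubelet-client.crt
openssl verify -CAfile /var/lib/kubernetes/ca.crt /tmp/kubelet-client.crt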

[root@master-1 ~]# systemctl status kube-scheduler -l
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2021-05-23 06:40:34 +06; 3 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 767 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─767 /usr/local/bin/kube-scheduler --kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig --address=127.0.0.1 --leader-elect=true --v=2

May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.490666 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: Get https://127.0.0.1:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.515794 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: Get https://127.0.0.1:6443/api/v1/nodes?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.516658 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: Get https://127.0.0.1:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.526508 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: Get https://127.0.0.1:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.526679 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: Get https://127.0.0.1:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.541320 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.542027 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://127.0.0.1:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.551868 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: Get https://127.0.0.1:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.552616 767 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: Get https://127.0.0.1:6443/api/v1/pods?fieldSelector=status.phase!%3DFailed%2Cstatus.phase!%3DSucceeded&limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:44:56 master-1 kube-scheduler[767]: E0526 06:44:56.563184 767 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://127.0.0.1:6443/api/v1/services?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
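
These errors say the API server's serving certificate does not verify against the CA embedded in the scheduler's kubeconfig, i.e. two different "KUBERNETES-CA" certificates seem to be in play. One way to confirm (a sketch; both paths are taken from the service command lines above):

# Pull the CA out of the scheduler kubeconfig and try to verify the apiserver cert with it
kubectl config view --kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > /tmp/sched-ca.crt
openssl verify -CAfile /tmp/sched-ca.crt /var/lib/kubernetes/kube-apiserver.crt
# "OK" means the same CA signed both; an error means the kubeconfig embeds a stale CA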

[root@master-1 ~]# systemctl status kube-apiserver -l
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2021-05-23 16:44:52 +06; 2 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 15623 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─15623 /usr/local/bin/kube-apiserver --advertise-address=192.168.60.211 --allow-privileged=true --apiserver-count=2 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/audit.log --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --client-ca-file=/var/lib/kubernetes/ca.crt --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-swagger-ui=true --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/kubernetes/ca.crt --etcd-certfile=/var/lib/kubernetes/etcd-server.crt --etcd-keyfile=/var/lib/kubernetes/etcd-server.key --etcd-servers=https://192.168.60.211:2379,https://192.168.60.212:2379 --event-ttl=1h --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml --kubelet-certificate-authority=/var/lib/kubernetes/ca.crt --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key --kubelet-https=true --runtime-config=api/all=true --service-account-key-file=/var/lib/kubernetes/service-account.crt --service-cluster-ip-range=10.96.0.0/24 --service-node-port-range=30000-32767 --tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt --tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key --v=2

May 26 06:52:43 master-1 kube-apiserver[15623]: I0526 06:52:43.881121 15623 log.go:172] http: TLS handshake error from 127.0.0.1:40310: remote error: tls: bad certificate
May 26 06:52:43 master-1 kube-apiserver[15623]: I0526 06:52:43.886064 15623 log.go:172] http: TLS handshake error from 127.0.0.1:40312: remote error: tls: bad certificate
May 26 06:52:43 master-1 kube-apiserver[15623]: I0526 06:52:43.887686 15623 log.go:172] http: TLS handshake error from 127.0.0.1:40314: remote error: tls: bad certificate
May 26 06:52:43 master-1 kube-apiserver[15623]: I0526 06:52:43.897277 15623 log.go:172] http: TLS handshake error from 127.0.0.1:40318: remote error: tls: bad certificate
May 26 06:52:43 master-1 kube-apiserver[15623]: I0526 06:52:43.897407 15623 log.go:172] http: TLS handshake error from 127.0.0.1:40316: remote error: tls: bad certificate
May 26 06:52:43 master-1 kube-apiserver[15623]: I0526 06:52:43.916046 15623 log.go:172] http: TLS handshake error from 127.0.0.1:40322: remote error: tls: bad certificate
May 26 06:52:43 master-1 kube-apiserver[15623]: I0526 06:52:43.916332 15623 log.go:172] http: TLS handshake error from 127.0.0.1:40324: remote error: tls: bad certificate
May 26 06:52:43 master-1 kube-apiserver[15623]: E0526 06:52:43.916600 15623 cacher.go:272] unexpected ListAndWatch error: storage/cacher.go:/secrets: Failed to list *core.Secret: unable to transform key "/registry/secrets/default/default-token-rnzhx": invalid padding on input
May 26 06:52:43 master-1 kube-apiserver[15623]: I0526 06:52:43.916678 15623 log.go:172] http: TLS handshake error from 127.0.0.1:40320: remote error: tls: bad certificate
May 26 06:52:44 master-1 kube-apiserver[15623]: E0526 06:52:44.010046 15623 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Secret: Internal error occurred: unable to transform key "/registry/secrets/default/default-token-rnzhx": invalid padding on input
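
The "invalid padding on input" errors usually mean a secret in etcd was encrypted with a different key than the one this API server is using, for example when the two masters received different encryption-config.yaml files. Comparing them is cheap (path taken from the apiserver command line above):

# Run on both masters; the checksums should match
md5sum /var/lib/kubernetes/encryption-config.yaml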

[root@master-1 ~]# systemctl status kube-controller-manager -l
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2021-05-23 06:40:34 +06; 3 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 759 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─759 /usr/local/bin/kube-controller-manager --address=0.0.0.0 --cluster-cidr=192.168.60.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/var/lib/kubernetes/ca.crt --cluster-signing-key-file=/var/lib/kubernetes/ca.key --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig --leader-elect=true --root-ca-file=/var/lib/kubernetes/ca.crt --service-account-private-key-file=/var/lib/kubernetes/service-account.key --service-cluster-ip-range=10.96.0.0/24 --use-service-account-credentials=true --v=2

May 26 06:55:13 master-1 kube-controller-manager[759]: E0526 06:55:13.388171 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:17 master-1 kube-controller-manager[759]: E0526 06:55:17.236227 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:20 master-1 kube-controller-manager[759]: E0526 06:55:20.534344 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:24 master-1 kube-controller-manager[759]: E0526 06:55:24.463785 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:27 master-1 kube-controller-manager[759]: E0526 06:55:27.796358 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:30 master-1 kube-controller-manager[759]: E0526 06:55:30.066591 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:32 master-1 kube-controller-manager[759]: E0526 06:55:32.703914 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:34 master-1 kube-controller-manager[759]: E0526 06:55:34.979178 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:38 master-1 kube-controller-manager[759]: E0526 06:55:38.432907 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")
May 26 06:55:41 master-1 kube-controller-manager[759]: E0526 06:55:41.798146 759 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "KUBERNETES-CA")

[root@worker-1 ~]# systemctl status kube-proxy -l
● kube-proxy.service - Kubernetes Kube Proxy
Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2021-05-26 16:41:11 +06; 2h 59min ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 764 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
‣ 764 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/kube-proxy-config.yaml

May 26 19:39:30 worker-1 kube-proxy[764]: E0526 19:39:30.494822 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://192.168.60.230:6443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: i/o timeout
May 26 19:39:32 worker-1 kube-proxy[764]: E0526 19:39:32.499152 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://192.168.60.230:6443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: connect: no route to host
May 26 19:39:41 worker-1 kube-proxy[764]: E0526 19:39:41.471703 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://192.168.60.230:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: i/o timeout
May 26 19:40:03 worker-1 kube-proxy[764]: E0526 19:40:03.501019 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://192.168.60.230:6443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: i/o timeout
May 26 19:40:05 worker-1 kube-proxy[764]: E0526 19:40:05.505328 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Endpoints: Get https://192.168.60.230:6443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: connect: no route to host
May 26 19:40:12 worker-1 kube-proxy[764]: E0526 19:40:12.474722 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://192.168.60.230:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: i/o timeout
May 26 19:40:13 worker-1 kube-proxy[764]: E0526 19:40:13.477110 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://192.168.60.230:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: connect: no route to host
May 26 19:40:14 worker-1 kube-proxy[764]: E0526 19:40:14.479392 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://192.168.60.230:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: connect: no route to host
May 26 19:40:15 worker-1 kube-proxy[764]: E0526 19:40:15.482968 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://192.168.60.230:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: connect: no route to host
May 26 19:40:16 worker-1 kube-proxy[764]: E0526 19:40:16.485371 764 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get https://192.168.60.230:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.60.230:6443: connect: no route to host
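
"no route to host" is a plain network failure, not a certificate problem, so it is worth confirming that worker-1 can reach the load balancer at all before touching certs (a sketch):

# From worker-1
ping -c 2 192.168.60.230
curl -k https://192.168.60.230:6443/version
# On the loadbalancer: is haproxy up and listening on 6443?
systemctl status haproxy
ss -tlpn | grep 6443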

[root@worker-1 ~]# systemctl status kubelet -l
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2021-05-26 19:38:29 +06; 2min 10s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 47464 (kubelet)
CGroup: /system.slice/kubelet.service
└─47464 /usr/local/bin/kubelet --config=/var/lib/kubelet/kubelet-config.yaml --image-pull-progress-deadline=2m --kubeconfig=/var/lib/kubelet/kubeconfig --tls-cert-file=/var/lib/kubelet/worker-1.crt --tls-private-key-file=/var/lib/kubelet/worker-1.key --network-plugin=cni --register-node=true --v=2 --cgroup-driver=systemd

May 26 19:40:38 worker-1 kubelet[47464]: E0526 19:40:38.806730 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:38 worker-1 kubelet[47464]: E0526 19:40:38.907462 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:39 worker-1 kubelet[47464]: E0526 19:40:39.009092 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:39 worker-1 kubelet[47464]: E0526 19:40:39.110879 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:39 worker-1 kubelet[47464]: E0526 19:40:39.211991 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:39 worker-1 kubelet[47464]: E0526 19:40:39.312559 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:39 worker-1 kubelet[47464]: E0526 19:40:39.412911 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:39 worker-1 kubelet[47464]: E0526 19:40:39.513524 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:39 worker-1 kubelet[47464]: E0526 19:40:39.615315 47464 kubelet.go:2266] node "worker-1" not found
May 26 19:40:39 worker-1 kubelet[47464]: E0526 19:40:39.715844 47464 kubelet.go:2266] node "worker-1" not found
[root@worker-1 ~]#
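
"node "worker-1" not found" just means the kubelet has not managed to register yet, which is consistent with the kubeconfig mismatch found by cert_verify.sh. The certificate subject is also worth checking, since the API server authorizes kubelets by their CN (a sketch; the expected CN follows the tutorial's naming):

openssl x509 -in /var/lib/kubelet/worker-1.crt -noout -subject
# expect something like: subject= /CN=system:node:worker-1/O=system:nodes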

I listed some troubleshooting steps (collected as commands after the list):

1- make sure that the haproxy service is up and running

2- make sure that port 6443 is open on the haproxy -> ss -tlpen | grep 6443

3- make sure that each master node is working well -> kubectl cluster-info
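
Collected as commands (the unit name haproxy is taken from step 1; adjust if yours differs):

systemctl status haproxy                               # 1: service up and running?
ss -tlpen | grep 6443                                  # 2: port 6443 listening on the LB?
kubectl cluster-info --kubeconfig admin.kubeconfig     # 3: each master responding?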

I experienced this same issue. I resolved it by deleting both worker nodes, then repeating the worker bootstrapping and TLS bootstrapping steps. It worked.
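
Roughly what that reset looked like (a sketch; node names per this thread, and "deleting" meant wiping and re-provisioning the workers before re-running the two worker chapters):

# Remove any stale Node objects first, if they exist
kubectl delete node worker-1 worker-2 --kubeconfig admin.kubeconfig
# re-run "Bootstrapping the Kubernetes Worker Nodes" and the TLS bootstrapping
# chapter on each worker, then confirm:
kubectl get nodes --kubeconfig admin.kubeconfig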