Samir:
Question: Restore the original state of the cluster using the backup file.
What I did:
root@controlplane:~# ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore /opt/snapshot-pre-boot.db
2021-06-01 12:59:04.425196 I | mvcc: restore compact to 1285
2021-06-01 12:59:04.435537 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Source: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ . The solution says that I need to change the path in the etcd YAML file in /etc/kubernetes/manifests. Do I always need to do that? Where can I find that in the documentation?
Samir:
Where do I know which endpoint I need to use in which case?
Mayur Sharma:
@Samir Describe the etcd pod and you will find the endpoints, CA cert, etcd cert, and etcd key in its command.
You need to pass all of these to your etcdctl command.
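For example, a sketch of what that looks like (the pod name etcd-controlplane and the certificate paths below are typical kubeadm defaults, assumed here rather than taken from this particular cluster):

```shell
# Inspect the etcd static pod to find the flags it was started with
kubectl -n kube-system describe pod etcd-controlplane \
  | grep -E 'listen-client-urls|cert-file|key-file|trusted-ca-file'

# Reuse those values when talking to the live etcd,
# e.g. to take the snapshot in the first place
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/snapshot-pre-boot.db
```

These flags matter for commands that contact the running cluster (like snapshot save); as noted below, snapshot restore only writes files locally.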
Samir:
I got the endpoint from the etcd pod, but it does not actually work.
Samir:
Like I said, the solution says that I need to change the path in the etcd YAML file in /etc/kubernetes/manifests. Do I always need to do that? Where can I find that in the documentation?
Karim Meslem:
I think you should specify a data-dir instead of endpoints in the case of a restore.
Your restore command should be something similar to this:
etcdctl --data-dir=/some/dir snapshot restore /opt/snapshot-pre-boot.db
Then edit the hostPath value in the etcd static pod YAML in /etc/kubernetes/manifests so that it points to /some/dir:
volumes:
- hostPath:
    path: /etc/kubernetes/pki/etcd
    type: DirectoryOrCreate
  name: etcd-certs
- hostPath:
    path: /some/dir
    type: DirectoryOrCreate
  name: etcd-data
The latter change is needed; otherwise the applications won't start up.
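Putting the thread together, the whole restore flow looks roughly like this (a sketch assuming a kubeadm control plane; /some/dir stands in for whatever new data directory you choose):

```shell
# 1. Restore the snapshot into a NEW data directory. This only writes
#    files locally and does not talk to the running etcd, so no
#    endpoints or certificate flags are needed here.
ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot-pre-boot.db \
  --data-dir=/some/dir

# 2. Point the etcd static pod at the new directory: edit the
#    etcd-data hostPath in the manifest to /some/dir
vi /etc/kubernetes/manifests/etcd.yaml

# 3. The kubelet notices the manifest change and recreates the etcd
#    pod automatically; give it a minute, then verify the cluster
#    state came back
kubectl get pods -A
```

So yes, editing the manifest is needed whenever you restore to a directory other than the one etcd was already using, which is the usual practice to avoid clobbering the existing data.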