
sarath s:
If etcd is hosted on a separate node (other than the master), I know we can take the backup by passing the certs. So after generating the backup on the master node, how do we initiate the etcd restore with the backup file still being on the master node? e.g. if etcd is not hosted as a static pod and is on another node…

I faced the same thing.

Deekshith Hadil:
@sarath s
Your question is how to restore a backup of etcd taken on one node onto another node?

Deekshith Hadil:
Understood your question now.
I would copy the backup file over to the etcd node and restore it with the usual etcdctl snapshot restore ./backup.db --data-dir=/some/dir
This is under the assumption that etcd is on single node. Not a cluster.
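The copy-then-restore flow above can be sketched as a small script. This is only a sketch under the single-node assumption: the host name `etcd1`, the user `cloud_user`, and all paths are illustrative, not from the original thread. By default the script just prints each command so the recipe can be reviewed; set `RUN=1` to actually execute it.

```shell
#!/bin/sh
# Sketch: move an etcd snapshot from the master node to the etcd node and
# restore it there. Host "etcd1" and the paths below are assumptions.
# run() prints the command unless RUN=1 is set, so this is safe to dry-run.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi; }

# 1. Copy the snapshot taken on the master over to the etcd node:
run scp ./backup.db cloud_user@etcd1:/home/cloud_user/backup.db

# 2. On the etcd node, restore into a fresh data directory
#    (note the flag is --data-dir):
run etcdctl snapshot restore /home/cloud_user/backup.db --data-dir=/var/lib/etcd-restored
```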

@sarath s @Muditha Galhena You can follow the steps in the LinuxAcademy CKA Practice Exam 4

Essentially, here are the commands:

Certified Kubernetes Administrator (CKA) - Practice Exam Part 4
This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) exam. You will be presented with tasks to complete as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!
Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>

Back Up the etcd Data

  1. From the terminal, log in to the etcd server:

     ssh etcd1

  2. Back up the etcd data:

     ETCDCTL_API=3 etcdctl snapshot save /home/cloud_user/etcd_backup.db \
       --endpoints=https://etcd1:2379 \
       --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
       --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
       --key=/home/cloud_user/etcd-certs/etcd-server.key

Restore the etcd Data from the Backup

  3. Stop etcd:

     sudo systemctl stop etcd

  4. Delete the existing etcd data:

     sudo rm -rf /var/lib/etcd

  5. Restore the etcd data from the backup:

     sudo ETCDCTL_API=3 etcdctl snapshot restore /home/cloud_user/etcd_backup.db \
       --initial-cluster etcd-restore=https://etcd1:2380 \
       --initial-advertise-peer-urls https://etcd1:2380 \
       --name etcd-restore \
       --data-dir /var/lib/etcd

  6. Set database ownership:

     sudo chown -R etcd:etcd /var/lib/etcd

  7. Start etcd:

     sudo systemctl start etcd

  8. Verify the system is working:

     ETCDCTL_API=3 etcdctl get cluster.name \
       --endpoints=https://etcd1:2379 \
       --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
       --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
       --key=/home/cloud_user/etcd-certs/etcd-server.key

@NCSa should I stop etcd while restoring?

Chi Bui:
This is used for the 1 master, 1 base node cluster (the last cluster), as mentioned in the exam tips, right?

Multi-master setups are outside the scope of the exam. In that case we'd need to restore the snapshot on each peer.
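For the multi-master case, the per-peer restore mentioned above can be sketched like this. It assumes a hypothetical 3-member cluster (`etcd1`..`etcd3`; the member names, URLs, and paths are made up for illustration): every peer restores the same snapshot, each with its own --name and peer URL but a shared --initial-cluster list. The commands are only printed here, not executed.

```shell
#!/bin/sh
# Sketch, assuming a hypothetical 3-member cluster etcd1..etcd3.
# Each member restores the same snapshot with its own identity.
CLUSTER="etcd1=https://etcd1:2380,etcd2=https://etcd2:2380,etcd3=https://etcd3:2380"

# Build one restore command per peer (printed, not run):
cmds=$(for m in etcd1 etcd2 etcd3; do
  echo "ETCDCTL_API=3 etcdctl snapshot restore /home/cloud_user/etcd_backup.db" \
       "--name $m" \
       "--initial-cluster $CLUSTER" \
       "--initial-advertise-peer-urls https://$m:2380" \
       "--data-dir /var/lib/etcd"
done)
printf '%s\n' "$cmds"
```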