Roshan Ranasinghe:
Guys, any idea how to take an etcd backup and then restore it while sitting on an external node (not on the master)?
Ravan Nannapaneni:
This URL has the way to take a snapshot
Ravan Nannapaneni:
For restoring, you need to check where etcd is running from and replace the data-dir with the new path you restored the snapshot to.
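A minimal sketch of that data-dir swap, assuming a kubeadm-style setup where etcd runs as a static pod defined in /etc/kubernetes/manifests/etcd.yaml; the snapshot path /opt/etcd_bk.db and the restore target /var/lib/etcd-from-backup are example names, not from the thread:

```shell
# Restore the snapshot into a fresh directory on the node running etcd
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd_bk.db \
  --data-dir=/var/lib/etcd-from-backup

# Then, in /etc/kubernetes/manifests/etcd.yaml, point the data-dir
# hostPath volume at the restored directory:
#   volumes:
#   - hostPath:
#       path: /var/lib/etcd-from-backup   # was /var/lib/etcd
#       type: DirectoryOrCreate
#     name: etcd-data
# The kubelet picks up the manifest change and recreates the etcd pod.
```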
Roshan Ranasinghe:
@Ravan Nannapaneni No … I am asking how to do it "sitting on an external node (not on the master)". Otherwise I know how to take a backup and restore.
Ravan Nannapaneni:
For backup, it doesn't matter; we are connecting to the IP address and taking a snapshot. It's the same command.
Ravan Nannapaneni:
To restore, I am not sure, I guess we have to be on the node.
Amit Sharma:
One of the approaches can be:
- Perform the backup and restore on the base node.
- Copy the data-dir from the base node to the master, and update the data-dir and token in etcd.yaml on the master node.
- Provide the master node IP with the etcd port in --endpoints.
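Those steps might look like this in practice; a sketch assuming the base node can scp to the master, with hypothetical paths and user:

```shell
# 1. On the base (external) node: restore the snapshot into a local directory
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd_bk.db \
  --data-dir=/var/lib/etcd-restored

# 2. Copy the restored data-dir to the master
scp -r /var/lib/etcd-restored user@<master-ip>:/var/lib/etcd-restored

# 3. On the master: update --data-dir (and the matching hostPath volume)
#    in /etc/kubernetes/manifests/etcd.yaml to /var/lib/etcd-restored,
#    and update --initial-cluster-token if one is set.
```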
Hinodeya:
For an external etcd, the process needs a deeper operation. Normally it runs as a daemon, so you need to stop the daemon: systemctl stop etcd
Hinodeya:
Check whether the process is currently running: ps aux | grep etcd
Hinodeya:
Delete the directory /var/lib/etcd if you need to test the restoration process
Hinodeya:
Check the cluster name:
ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
Mayur Sharma:
@Roshan Ranasinghe Not sure of the whole process, but I think we can reuse the same data directory.
If etcd is running in a different box, we can take a backup from any node using the commands below.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://<IP-ETCD>:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save <location-to-local-folder>
For restore,
ETCDCTL_API=3 etcdctl \
  --endpoints=https://<IP-ETCD>:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot restore <location-to-same-local-folder>
Here we have not used the data-dir parameter, so it would use the same data directory as our backup.
Not sure whether it would work or not.
Roshan Ranasinghe:
@Mayur Sharma Thanks… But it is not working, because paths are not found, like /etc/kubernetes/pki etc…
Hinodeya:
First, check the cluster name:
ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/etcd-server.crt \
  --key=/etc/kubernetes/pki/etcd/etcd-server.key
Second, take the snapshot (10.0.1.101 is the IP of the etcd VM):
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd_bk.db \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/etcd-server.crt \
  --key=/etc/kubernetes/pki/etcd/etcd-server.key
Then stop etcd, wipe the data directory, and restore:
sudo systemctl stop etcd
sudo rm -rf /var/lib/etcd
sudo ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd_bk.db \
  --initial-cluster etcd-restore=https://10.0.1.101:2380 \
  --initial-advertise-peer-urls https://10.0.1.101:2380 \
  --name etcd-restore \
  --data-dir /var/lib/etcd
sudo chown -R etcd:etcd /var/lib/etcd
sudo systemctl start etcd
Finally, verify:
ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
kubectl get all -A
Mani:
When I attended this question in the exam, I restored the etcd snapshot on the external node and scp'd it to the master using the command scp -r <restored_snapshot> <user_id>@<master_ip>:<path>, and then edited the mount path accordingly in the etcd.yaml file.
Mani:
For this method, we don't need the key or cert file of etcd to restore.
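A sketch of that flow; `etcdctl snapshot restore` works offline on the snapshot file itself, which is why no cacert/cert/key flags are needed (paths here are hypothetical):

```shell
# On the external node: restore locally, no TLS flags required
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd_bk.db \
  --data-dir=/opt/etcd-restored

# Ship the restored directory to the master, then edit the mount path
# in etcd.yaml on the master to point at it
scp -r /opt/etcd-restored <user_id>@<master_ip>:/var/lib/etcd-restored
```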
Mayur Sharma:
@Hinodeya, as per @Roshan Ranasinghe the following paths are not accessible from the other node: /etc/kubernetes/pki.
So your commands would not work if that is the case.
Hinodeya:
Not if etcd has been installed on the external VM
Hinodeya:
and running as a daemon