양상헌:
Hello all,
When I performed a drain,
the pods belonging to a Deployment were recreated as expected on the other worker node - OK,
but pods that are not managed by a Deployment/ReplicaSet were simply deleted.
I could see the WARNING below:

root@master:~# k drain worker1 --force --ignore-daemonsets
node/worker1 already cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/nginx; ignoring DaemonSet-managed Pods: kube-system/kube-proxy-8wzrk, kube-system/weave-net-frrg4
evicting pod default/nginx
evicting pod default/mytest-86c79b65f8-5vz4d
evicting pod default/mytest-86c79b65f8-jn5np

e.g. the nginx pod (which I created with kubectl run) was just deleted - is this expected?
Is there any way to evict such a pod to another node instead of having it deleted?

Venkat:
Actually, it will not get deleted with that command. Can you check with kubectl get pod?

Tej_Singh_Rana:
Hello, @양상헌
Take a directly created pod as an example:

$ kubectl run test-demo --image=nginx

has no guarantee of being scheduled on another node. Once it is evicted from that node, it is deleted for good and cannot be recovered.
In the case of Deployments, DaemonSets, ReplicationControllers, etc., the controller takes care of its workload's pods.
If a Deployment's pod is deleted from one node, the Deployment controller will recreate it on another node with the help of the scheduler, subject to resource availability.
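A minimal sketch of that difference (resource names here are just examples, not from the original thread): if the same nginx container is run through a Deployment instead of a bare pod, the ReplicaSet controller brings the pod back on another node after the drain.

$ kubectl create deployment nginx-demo --image=nginx --replicas=2   # managed by a ReplicaSet
$ kubectl drain worker1 --ignore-daemonsets                          # evicts the pods on worker1
$ kubectl get pods -o wide                                           # nginx-demo pods reappear on the remaining nodes
$ kubectl uncordon worker1                                           # allow scheduling on worker1 again afterwards

A bare pod created with kubectl run has no controller watching it, so after eviction nothing recreates it.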

양상헌:
Thanks Tej, I found what you answered above in "127. Solution: Cluster Upgrade" in the CKA training - that is the expected behavior for a pod that is not part of a Deployment/ReplicaSet. Thanks~ ^^