Best practice is to have one container per Pod. For scaling up an application, why can't we put more containers in a single Pod? Can someone please explain or point to documentation/blog posts explaining the reasoning? I couldn't get any satisfactory explanation so far on this topic.
It depends on what the other container is for. If you’re talking about cramming everything for an application into a single pod (e.g. web ui, app logic, db) then that’s a horrible anti-pattern. Your resources should absolutely be distributed for scaling, resilience, etc. However, if you’re talking logging, service mesh, etc. then multiple containers per pod is certainly an acceptable (and possibly necessary) pattern.
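To make the sidecar pattern concrete, here is a minimal sketch of a Pod running an app container alongside a log-shipping container that share a volume. The image names and paths are placeholders, not a recommendation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
  - name: web                      # main application container
    image: nginx:1.25              # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper              # sidecar: reads the app's logs and ships them
    image: fluent/fluent-bit:2.2   # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                   # shared scratch volume, lives as long as the Pod
```

Both containers are scheduled onto the same node, share the Pod's network namespace, and scale together as one unit, which is exactly why this pattern only suits tightly coupled helpers like logging or a service-mesh proxy.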
This bit is clear, runlev14. My query is: if we have a Pod with one frontend container initially, why can't we, or rather shouldn't we, put multiple frontend containers in that same Pod to scale up the application? Why is it a best practice to create multiple Pods, each with a single (frontend) container, for scaling an app?
@Prasun Barthwal How would the load balancing work in your frontend example? If you use a Deployment to maintain n Pods, your Service can load balance between the available Pods. How would you do that if, instead of 10 Pods, you had 10 containers in one Pod? If 4 of the 10 containers die due to some issue, how would you reboot just the containers that died? The restartPolicy is set at the Pod level and applies to all containers in the Pod, so you can't give individual containers different restart behavior. And how would you dynamically scale your application? I think there are a lot of satisfactory explanations for not having multiple containers in a Pod except for special purposes like proxying, log translation, etc.
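The scaling point above can be sketched as a Deployment plus a Service. The image name and ports are placeholders; the key idea is that `replicas` produces many single-container Pods, and the Service's selector load-balances across whichever of them are healthy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 10              # ten single-container Pods, not ten containers in one Pod
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: my-frontend:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend            # load-balances across all Pods matching this label
  ports:
  - port: 80
    targetPort: 8080
```

If a Pod dies, the Deployment replaces just that Pod, and scaling is a one-field change (`replicas: 20`) or a `kubectl scale` call; none of that is possible when the replicas are containers inside a single Pod.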
I'd also add that there's a security aspect. For example, you probably want to segregate UI traffic from the app itself, and likewise with your database resources. Or maybe you want database resources on specialized nodes. The end answer, imho, is flexibility.
In addition to the above, a Pod is the smallest unit that Kubernetes schedules and administers, so the Pod, not the container, is the natural unit of replication.