Hey guys, I have a question about how k8s acts under heavy load when it has limited resources

Uğurcan Çaykara:
hey guys,
I have a question about how k8s acts under heavy load when it has limited resources.
Let's say you have a node with 4GB RAM and 2 CPUs (I picked low numbers to keep it simple) and a k8s cluster is running on it, so the cluster has 4GB RAM and 2 CPUs (2000m) in total. At the moment 3.5GB RAM and 1.75 CPU (1750m) are already being used by other apps on the cluster. Now you create an app and set its requests to 0.25 CPU (250m) and 500Mi RAM, and its limits to 0.5 CPU (500m) and 1000Mi RAM. So we "guaranteed" these resources to the application, but what if the app needs the whole CPU and RAM capacity we guaranteed it (0.5 CPU and 1000Mi RAM)?

Can it get all the resources it needs while 3.5GB RAM and 1.75 CPU are already in use by other apps? If the app ends up stopping (because of insufficient memory and CPU), can that still be considered guaranteed? And how do CPU and RAM behave in these scenarios from the k8s perspective? Is there something like "take 0.2 CPU away from somewhere and assign it to the guaranteed app"? Does the app crash suddenly when it reaches its limit on both resources, or do CPU and RAM behave differently from each other?

unnivkn:
@Uğurcan Çaykara Ahh… it's a bit difficult to understand your question. Please refer to this link and the other links inside it to get some idea: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/

Uğurcan Çaykara:
@unnivkn ok, let me try to explain it more clearly.
we have requests and limits that classify our pods as BestEffort, Burstable or Guaranteed, so
let's assume we've created a definition file for an application with the following resources:

    resources:
      requests:
        memory: "512Mi"
        cpu: "250m"
      limits:
        memory: "1024Mi"
        cpu: "500m"

now this part of the definition file guarantees 1024Mi memory and 500m CPU to the application defined in it.
Our cluster's total resources are 4096Mi memory and 2000m CPU.
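(just to make sure we're using the same terms: as far as I understand, BestEffort means no requests/limits at all, Burstable means requests lower than limits like in the block above, and Guaranteed strictly means requests equal to limits, so a fully Guaranteed version of that block would look something like the sketch below; my question applies either way)

    resources:
      requests:
        memory: "1024Mi"
        cpu: "500m"
      limits:
        memory: "1024Mi"
        cpu: "500m"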

Let's say we already have applications A and B running, and together they are using 3500Mi memory and 1750m CPU at the moment, so we have roughly 500Mi memory and 250m CPU available on the cluster, right?

Now we want to create a new application C with the definition file I mentioned above (part of it, actually), and we run "kubectl apply -f c-app-definition-file.yaml". It works fine.
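(for completeness, the full c-app-definition-file.yaml I have in mind is roughly like this; the pod name and image are just placeholders)

    apiVersion: v1
    kind: Pod
    metadata:
      name: c-app
    spec:
      containers:
      - name: c-app
        image: nginx                  # placeholder image, could be any app
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1024Mi"
            cpu: "500m"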

At some point application C starts to get heavy load and starts consuming more and more resources, up to 1024Mi memory and 500m CPU, because those are the limits we set for it.
But if the application tries to consume more than the 512Mi memory and 250m CPU it requested, going up towards its 1024Mi memory and 500m CPU limits, what happens? We don't have that much resource available on the cluster, right? But these resources (in the resources part of the definition file) were guaranteed to application C, and now C can't use them because there is no more available resource. So can this situation still be called guaranteed? And how does k8s act under these circumstances?

unnivkn:
@Uğurcan Çaykara well explained. So in this example the node doesn't have enough resources to cover APP-C's limit; if APP-C actually uses its full limit, the server reaches 100% capacity.
just sharing my thoughts:

  • In general, on any node/server (app server, db server, k8s server etc.) that runs into resource scarcity, the workloads on it may hang, hit performance issues or get terminated (for the memory side of this there's a small sketch at the end of this message). This applies to k8s workloads as well.
  • Moreover, if a node/server's CPU or memory utilization reaches 100%, pods on that node may be evicted at any time. So we need to avoid such cases by monitoring the server and its resource utilization and taking the necessary action.
  • If the k8s cluster is in the cloud, we can incorporate an ASG / cluster autoscaler to automatically add nodes to handle the k8s workload (something similar to AWS Fargate), or auto-extend memory & CPU as and when required.
    hope this helps.
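Also, on your cpu vs memory question: as far as I know they behave differently. CPU is compressible, so a container that needs more CPU than its limit just gets throttled and runs slower. Memory is not, so a container that goes past its memory limit gets OOM-killed. The docs page I shared earlier has a stress-test example for the memory case; a rough sketch of it (names and numbers are only for illustration) looks like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: memory-demo
    spec:
      containers:
      - name: memory-demo-ctr
        image: polinux/stress                                        # small stress-test image
        command: ["stress"]
        args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]  # tries to allocate ~250M, well above the limit
        resources:
          requests:
            memory: "50Mi"
          limits:
            memory: "100Mi"                                          # past this the container is OOM-killed

If you apply it and run kubectl describe pod memory-demo, the container's last state should show it terminated with reason OOMKilled, whereas a pod that only hits its cpu limit keeps running, just throttled.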

Uğurcan Çaykara:
Thanks bro, this was actually what I wanted: not links to resources, but thoughts. Thanks again