Helm test pods OOMKilled

I have deployed many application pods in a multi-worker-node Kubernetes setup. When I run `helm test`, all of the helm test pods on the different workers end up in the OOMKilled state.
Any suggestion on how to fix or debug this?


OOMKilled means the pod tried to use more memory than its configured limit allows. Review the memory (and CPU) requests and limits on the test pods and deploy again.
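For reference, here is a minimal sketch of the kind of `resources` block to check in the chart's test pod template (the `helm.sh/hook: test` annotation is how Helm marks test pods; the pod name, image, command, and the numbers below are placeholders, not recommendations for your workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-release-test          # placeholder name
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "wget -qO- http://my-service:80/"]
      resources:
        requests:
          memory: "64Mi"
          cpu: "100m"
        limits:
          memory: "256Mi"        # raise this if the test pod is OOMKilled
          cpu: "250m"
```

Running `kubectl describe pod <test-pod>` on one of the failed pods shows `Reason: OOMKilled` along with the limit that was in effect, which tells you how far off the current value is.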



Can we use Memory QoS to handle the OOM error by throttling memory? If so, can you help me with how to do it on AWS EKS?

You can’t “throttle” memory at the limit. If a pod tries to use more than its configured memory limit, it will be OOMKilled. This is how the underlying cgroup mechanism works: cgroups are a feature of the Linux kernel itself, not of Kubernetes. The kubelet simply leverages this operating-system functionality when creating containers.
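To make the mechanism concrete: for a container with a memory limit, the kubelet (via the container runtime) writes that limit into the container's cgroup, and the kernel's OOM killer fires when it is exceeded. A sketch of what that looks like on a cgroup v2 node, assuming a hypothetical 128Mi limit (the cgroup path is illustrative, not an exact path from a real cluster):

```shell
# On a cgroup v2 node, the limit lands in the container's memory.max file,
# e.g. under /sys/fs/cgroup/kubepods.slice/... (path illustrative).
# 128Mi expressed in bytes, as the kubelet would write it there:
echo $((128 * 1024 * 1024))
# 134217728
```

Exceeding that value is a hard failure handled by the kernel, which is why the fix is to raise the limit (or reduce the pod's memory use) rather than expect throttling.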
