Hi, I would like to understand the rationale for choosing an AWS instance type for a Kubernetes setup, with the following questions in mind.
Could you guide me through the math with some assumptions about traffic? Assume it's a payment application where the traffic is 10 TPS.
A narrated walk-through of the reasoning would be highly appreciated.
For some of this, you’re going to need to measure your application pods to see how many resources they use. There is a limit on pods per node (IIRC around 100?), but if you don’t know how many resources your pods need, that limit isn’t relevant to the calculation. And the number of nodes in a cluster isn’t really a relevant number on its own, in isolation from how much load your pods represent.
It’s also worth measuring how long it takes a pod to handle a transaction, since you don’t really know the load if you don’t know how long it takes a pod to handle a request.
I think you’ll need to do some experiments to figure this out; you won’t be able to do this as a simple “back of the envelope” calculation.
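Just to make that concrete with purely made-up numbers (nothing here is measured):

  10 TPS x 0.2 s per transaction = ~2 transactions in flight at any moment
  if one pod at 0.5 vCPU / 512 MiB handles ~25 TPS, a single pod covers the load
  you’d still run 2-3 replicas for availability, plus headroom for spikes

Once you have the real per-pod CPU and memory numbers, picking the instance type is mostly adding up the pod requests plus the system overhead on each node; the instance choice falls out of the measured pod requirements, not the other way around.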
Thanks Rob for the response. I am at the beginning of my Kubernetes course. I would like to know how architects design their applications: how they decide what instance types the cluster needs and how they organise the capacity of the pods. I could not find a good link with a worked sample calculation for an application to understand this more deeply.
Probably a start would be learning about the metrics server. If you’re in the CKA course you’ll cover this. If you’re not, try installing the metrics server and playing around with kubectl top node and kubectl top pods, and see what they do. I assume you have a local cluster to play around with – this is a good way to start out to understand the scaling problem.
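Roughly something like this (on minikube you can just enable the metrics-server addon instead, and on some local clusters you may need to add --kubelet-insecure-tls to the metrics-server deployment):

  kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  kubectl top node
  kubectl top pod --all-namespaces

The top output shows the actual CPU and memory each node and pod is using, which is exactly the measurement you need for the capacity math above.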
Thanks Rob for the response. I am currently enrolled in the Kubernetes for the Absolute Beginners – Hands-on Tutorial. I just started it a couple of days ago, and later I will do the CKA course. Yes, I have a local cluster set up on my machine; I will try installing the metrics server to understand this better.
Nodes per cluster - limited only by the number of IP addresses in the networks you deploy nodes to.
Pods per node. There are a couple of considerations. Generally it’s around 100, however on AWS, if you use the AWS CNI for pod networking, then the pods are assigned addresses from the VPC network. This means that the number of pods per node is limited by the number of ENIs that your chosen EC2 instance type supports, and the number of IP addresses that each ENI can be assigned. So if an EC2 instance type supports 4 ENIs and each ENI supports 16 addresses, that gives you 63 pods per node, plus one IP for the node itself. So each node will consume 64 IPs from the AWS subnet, meaning that you must use very large subnets.
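If you want to check those ENI and per-ENI IP limits for yourself, the AWS CLI exposes them per instance type (m5.large here is just an example):

  aws ec2 describe-instance-types --instance-types m5.large \
    --query "InstanceTypes[].NetworkInfo.[MaximumNetworkInterfaces,Ipv4AddressesPerInterface]" \
    --output text

Multiply the two numbers it prints to see how many VPC addresses one node can tie up, exactly as in the 4 x 16 = 64 example above, and size your subnets accordingly.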