Server CPU and RAM recommendation for k8s worker nodes?

Hi guys,

I’m using Tutor k8s and planning to set up a system to serve 10,000 users.
I’m using AWS EKS, and I wonder what type of EC2 instance to choose for the worker nodes.
I’m thinking about t3a.large (2 vCPU, 8 GB RAM). Should I use 4 vCPU instead?
And can anyone confirm whether Tutor k8s works with AWS Graviton processors?

Thanks!

It can really vary a lot depending on what type of courses you have and what type of users. For example, some types of final exams can cause huge spikes in load that courses without those exams don’t need to worry about. Courses that users complete on their own schedule, from many different time zones around the world, are also very different from synchronous, scheduled courses with learners in a single geographic area. Auto-graders have different load patterns than manual grading, and so on.

I would say to try the 2 vCPU nodes first, and if you find that you are CPU-bound, you can always easily deploy some larger nodes. One of the main benefits of using k8s is supposed to be that it’s easy to scale and adjust your resources to match your load. As a rough illustration, see the sketch below.
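
Just to illustrate what “scale up later” can look like (this isn’t from the Tutor docs, and the cluster/node group names are placeholders), if you manage your EKS node groups with eksctl, swapping in bigger instances is roughly:

```sh
# Start small: a managed node group of t3a.large (2 vCPU / 8 GB) instances.
# Cluster and node group names below are placeholders; adjust to your setup.
eksctl create nodegroup \
  --cluster my-openedx-cluster \
  --name workers-small \
  --node-type t3a.large \
  --nodes 2 --nodes-min 2 --nodes-max 4

# If monitoring shows you are CPU-bound, add a larger node group alongside it...
eksctl create nodegroup \
  --cluster my-openedx-cluster \
  --name workers-large \
  --node-type t3a.xlarge \
  --nodes 2 --nodes-min 2 --nodes-max 4

# ...then delete the small one; eksctl drains its nodes so pods reschedule
# onto the larger instances.
eksctl delete nodegroup --cluster my-openedx-cluster --name workers-small
```

The point is just that the instance type isn’t a one-way decision, so picking 2 vCPU first and watching your actual CPU/memory usage is a reasonable starting point.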

Officially, Tutor only offers “experimental” support for ARM. As far as I know, it should work, and I’ve seen some tests done on Graviton processors in particular. In addition to the docs I just linked to, see this plugin for an alternative starting point, which you can use to build and deploy your own images.
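
Purely as an untested sketch of the “build your own images” route (the registry and tag are placeholders, and details may differ between Tutor versions), the idea is to build an arm64 openedx image from the Tutor-rendered build context and point Tutor at it:

```sh
# Render the Tutor environment so the build context exists.
tutor config save

# Build the openedx image for arm64 from the rendered build context and push
# it to your own registry (registry/tag below are placeholders).
docker buildx build \
  --platform linux/arm64 \
  --tag myregistry.example.com/openedx:arm64 \
  --push \
  "$(tutor config printroot)/env/build/openedx"

# Point Tutor at the custom image and (re)deploy.
tutor config save --set DOCKER_IMAGE_OPENEDX=myregistry.example.com/openedx:arm64
tutor k8s start
```

You’d want to do the same for any other images you run (forum, MFEs, etc.) before moving everything onto Graviton nodes.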

Thanks @braden, I guess I’ll have to monitor and adjust accordingly.