How can I change the number of gunicorn workers in Tutor?

Hi,
I am running Tutor in an LXC container with 6 GB of RAM.
This leads to situations where Tutor uses all of the available RAM and CPU.
I would like to reduce the number of gunicorn workers.
How can this be achieved?
Are there other ways to reduce the memory consumption?

Kind regards
Florian

I figured out that the problem is the number of celery workers.
Is there a way to set it in the configuration?

The underlying issue is that the host has 20 CPUs, but the container only has 2.

When I first started setting up Tutor I had a similar issue with resources: if you have a lot of CPU cores but not much RAM, the workers spawned per core will eat up the memory. I first tried 8 cores and 8 GB of RAM, but that was getting eaten up very fast, and re-building images also had a tendency to fail. I ended up settling on 16 GB of RAM and 3 cores, which has been rock solid since.

I haven’t actually tried this myself, and I’m not sure if it’s even still a valid configuration, but maybe it helps: there’s a config setting where you can specify the maximum number of workers. The default (source) is 4 × the number of cores for the LMS and 2 × the number of cores for the CMS, meaning on a 20-core system you end up with 120 workers in total.

You can use:
OPENEDX_LMS_UWSGI_WORKERS
OPENEDX_CMS_UWSGI_WORKERS
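
I haven’t tested this exact snippet, but as a sketch (assuming a recent Tutor version, and example values of 2): you should be able to lower both with tutor config save --set OPENEDX_LMS_UWSGI_WORKERS=2 --set OPENEDX_CMS_UWSGI_WORKERS=2, or override them from a small plugin. I think the CONFIG_OVERRIDES filter works for this, but double-check the Tutor docs:

from tutor import hooks


# Example values only: pin the uWSGI worker counts to something small
# instead of relying on the defaults.
hooks.Filters.CONFIG_OVERRIDES.add_items(
    [
        ("OPENEDX_LMS_UWSGI_WORKERS", 2),
        ("OPENEDX_CMS_UWSGI_WORKERS", 2),
    ]
)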

See the docs in tutor:

Hi, I found the following ticket now:

Setting the celery worker option --concurrency=1 is exactly what I need.

I found a solution by creating a plugin with the following content:

from tutor import hooks


# Append --concurrency=1 to the CMS celery worker command so the worker
# only starts a single process instead of one per detected CPU.
hooks.Filters.CMS_WORKER_COMMAND.add_items(
    [
        "--concurrency=1"
    ]
)

# Same for the LMS celery worker.
hooks.Filters.LMS_WORKER_COMMAND.add_items(
    [
        "--concurrency=1"
    ]
)
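
In case it helps anyone else: I believe you can drop a file like this into the directory reported by tutor plugins printroot, enable it with tutor plugins enable <module name>, then run tutor config save and restart the services, but check the plugin docs for your Tutor version to be sure.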

Hi Florian,

To reduce Tutor’s memory and CPU usage in your LXC container, you can reduce the number of Gunicorn/uWSGI workers. This is done by setting OPENEDX_LMS_UWSGI_WORKERS and OPENEDX_CMS_UWSGI_WORKERS in your config.yml, or with tutor config save --set OPENEDX_LMS_UWSGI_WORKERS=<n> --set OPENEDX_CMS_UWSGI_WORKERS=<n> as needed. Fewer workers means fewer concurrent processes and lower resource use. Additionally, you may consider disabling plugins you don’t use, reducing Docker service replicas, and limiting service restart policies.

You can monitor memory and CPU usage with htop or docker stats to see how much headroom you have and fine-tune these values to suit your needs.